Columns: query_id (string, length 32); query (string, 6 to 4.09k characters); positive_passages (list of 1 to 22 passages); negative_passages (list of 10 to 100 passages); subset (string, one of 7 classes).
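The records below follow this column layout. As a minimal illustration only (not part of the dataset itself), the following Python sketch spells out the row structure and shows how query/passage training triples could be drawn from one record; the `Passage`/`Row` type names and the `iter_triples` helper are hypothetical, and the field names (`docid`, `text`, `title`) are taken from the records shown below.

```python
from typing import Iterator, List, Tuple, TypedDict


class Passage(TypedDict):
    # Field names as they appear in the records below.
    docid: str
    text: str
    title: str


class Row(TypedDict):
    # Column layout summarized in the header above.
    query_id: str                       # 32-character identifier
    query: str                          # free-text query
    positive_passages: List[Passage]    # 1-22 relevant passages
    negative_passages: List[Passage]    # 10-100 non-relevant passages
    subset: str                         # subset name, e.g. "scidocsrr"


def iter_triples(row: Row) -> Iterator[Tuple[str, str, str]]:
    """Yield (query, positive_text, negative_text) triples from one row,
    the usual input format for training a retrieval model."""
    for pos in row["positive_passages"]:
        for neg in row["negative_passages"]:
            yield row["query"], pos["text"], neg["text"]
```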
query_id: ec4fbb606738d5a29536fc2630f6dd9a
query: Citation function, polarity and influence classification
positive_passages:
[
{
"docid": "61d80b5b0c6c2b3feb1ce667babd2236",
"text": "In a recent article published in this journal, Lombard, Snyder-Duch, and Bracken (2002) surveyed 200 content analyses for their reporting of reliability tests; compared the virtues and drawbacks of five popular reliability measures; and proposed guidelines and standards for their use. Their discussion revealed that numerous misconceptions circulate in the content analysis literature regarding how these measures behave and can aid or deceive content analysts in their effort to ensure the reliability of their data. This paper proposes three conditions for statistical measures to serve as indices of the reliability of data and examines the mathematical structure and the behavior of the five coefficients discussed by the authors, plus two others. It compares common beliefs about these coefficients with what they actually do and concludes with alternative recommendations for testing reliability in content analysis and similar data-making efforts. In a recent paper published in a special issue of Human Communication Research devoted to methodological topics (Vol. 28, No. 4), Lombard, Snyder-Duch, and Bracken (2002) presented their findings of how reliability was treated in 200 content analyses indexed in Communication Abstracts between 1994 and 1998. In essence, their results showed that only 69% of the articles report reliabilities. This amounts to no significant improvements in reliability concerns over earlier studies (e.g., Pasadeos et al., 1995; Riffe & Freitag, 1996). Lombard et al. attribute the failure of consistent reporting of reliability of content analysis data to a lack of available guidelines, and they end up proposing such guidelines. Having come to their conclusions by content analytic means, Lombard et al. also report their own reliabilities, using not one, but four, indices for comparison: %-agreement; Scott‟s (1955) (pi); Cohen‟s (1960) (kappa); and Krippendorff‟s (1970, 2004) (alpha). Faulty software 1 initially led the authors to miscalculations, now corrected (Lombard et al., 2003). However, in their original article, the authors cite several common beliefs about these coefficients and make recommendations that I contend can seriously mislead content analysis researchers, thus prompting my corrective response. To put the discussion of the purpose of these indices into a larger perspective, I will have to go beyond the arguments presented in their article. Readers who might find the technical details tedious are invited to go to the conclusion, which is in the form of four recommendations. The Conservative/Liberal Continuum Lombard et al. report “general agreement (in the literature) that indices which do not account for chance agreement (%-agreement and Holsti‟s [1969] CR – actually Osgood‟s [1959, p.44] index) are too liberal while those that do (, , and ) are too conservative” (2002, p. 593). For liberal or “more lenient” coefficients, the authors recommend adopting higher critical values for accepting data as reliable than for conservative or “more stringent” ones (p. 600) – as if differences between these coefficients were merely a problem of locating them on a shared scale. Discussing reliability coefficients in terms of a conservative/liberal continuum is not widespread in the technical literature. It entered the writing on content analysis not so long ago. Neuendorf (2002) used this terminology, but only in passing. Before that, Potter and Lewine-Donnerstein (1999, p. 287) cited Perreault and Leigh‟s (1989, p. 
138) assessment of the chance-corrected as being “overly conservative” and “difficult to compare (with) ... Cronbach‟s (1951) alpha,” for example – as if the comparison with a correlation coefficient mattered. I contend that trying to understand diverse agreement coefficients by their numerical results alone, conceptually placing them on a conservative/liberal continuum, is seriously misleading. Statistical coefficients are mathematical functions. They apply to a collection of data (records, values, or numbers) and result in one numerical index intended to inform its users about something – here about whether they can rely on their data. Differences among coefficients are due to responding to (a) different patterns in data and/or (b) the same patterns but in different ways. How these functions respond to which patterns of agreement and how their numerical results relate to the risk of drawing false conclusions from unreliable data – not just the numbers they produce – must be understood before selecting one coefficient over another. Issues of Scale Let me start with the ranges of the two broad classes of agreement coefficients, chancecorrected agreement and raw or %-agreement. While both kinds equal 1.000 or 100% when agreement is perfect, and data are considered reliable, %-agreement is zero when absolutely no agreement is observed; when one coder‟s categories unfailingly differ from the categories used by the other; or disagreement is systematic and extreme. Extreme disagreement is statistically almost as unexpected as perfect agreement. It should not occur, however, when coders apply the same coding instruction to the same set of units of analysis and work independently of each other, as is required when generating data for testing reliability. Where the reliability of data is an issue, the worst situation is not when one coder looks over the shoulder of another coder and selects a non-matching category, but when coders do not understand what they are asked to interpret, categorize by throwing dice, or examine unlike units of analysis, causing research results that are indistinguishable from chance events. While zero %-agreement has no meaningful reliability interpretation, chance-corrected agreement coefficients, by contrast, become zero when coders‟ behavior bears no relation to the phenomena to be coded, leaving researchers clueless as to what their data mean. Thus, the scales of chance-corrected agreement coefficients are anchored at two points of meaningful reliability interpretations, zero and one, whereas %-like agreement indices are anchored in only one, 100%, which renders all deviations from 100% uninterpretable, as far as data reliability is concerned. %-agreement has other undesirable properties; for example, it is limited to nominal data; can compare only two coders 2 ; and high %-agreement becomes progressively unlikely as more categories are available. I am suggesting that the convenience of calculating %-agreement, which is often cited as its advantage, cannot compensate for its meaninglessness. Let me hasten to add that chance-correction is not a panacea either. Chance-corrected agreement coefficients do not form a uniform class. Benini (1901), Bennett, Alpert, and Goldstein (1954), Cohen (1960), Goodman and Kruskal (1954), Krippendorff (1970, 2004), and Scott (1955) build different corrections into their coefficients, thus measuring reliability on slightly different scales. Chance can mean different things. 
Discussing these coefficients in terms of being conservative (yielding lower values than expected) or liberal (yielding higher values than expected) glosses over their crucial mathematical differences and privileges an intuitive sense of the kind of magnitudes that are somehow considered acceptable. If it were the issue of striking a balance between conservative and liberal coefficients, it would be easy to follow statistical practices and modify larger coefficients by squaring them and smaller coefficients by applying the square root to them. However, neither transformation would alter what these mathematical functions actually measure; only the sizes of the intervals between 0 and 1. Lombard et al., by contrast, attempt to resolve their dilemma by recommending that content analysts use several reliability measures. In their own report, they use , “an index ...known to be conservative,” but when measures below .700, they revert to %-agreement, “a liberal index,” and accept data as reliable as long as the latter is above .900 (2002, p. 596). They give no empirical justification for their choice. I shall illustrate below the kind of data that would pass their criterion. Relation Between Agreement and Reliability To be clear, agreement is what we measure; reliability is what we wish to infer from it. In content analysis, reproducibility is arguably the most important interpretation of reliability (Krippendorff, 2004, p.215). I am suggesting that an agreement coefficient can become an index of reliability only when (1) It is applied to proper reliability data. Such data result from duplicating the process of describing, categorizing, or measuring a sample of data obtained from the population of data whose reliability is in question. Typically, but not exclusively, duplications are achieved by employing two or more widely available coders or observers who, working independent of each other, apply the same coding instructions or recording devices to the same set of units of analysis. (2) It treats units of analysis as separately describable or categorizable, without, however, presuming any knowledge about the correctness of their descriptions or categories. What matters, therefore, is not truths, correlations, subjectivity, or the predictability of one particular coder‟s use of categories from that by another coder, but agreements or disagreements among multiple descriptions generated by a coding procedure, regardless of who enacts that procedure. Reproducibility is about data making, not about coders. A coefficient for assessing the reliability of data must treat coders as interchangeable and count observable coder idiosyncrasies as disagreement. (3) Its values correlate with the conditions under which one is willing to rely on imperfect data. The correlation between a measure of agreement and the rely-ability on data involves two kinds of inferences. Estimating the (dis)agreement in a population of data from the (dis)agreements observed and meas",
"title": ""
},
{
"docid": "6adb3d2e49fa54679c4fb133a992b4f7",
"text": "Kathleen McKeown1, Hal Daume III2, Snigdha Chaturvedi2, John Paparrizos1, Kapil Thadani1, Pablo Barrio1, Or Biran1, Suvarna Bothe1, Michael Collins1, Kenneth R. Fleischmann3, Luis Gravano1, Rahul Jha4, Ben King4, Kevin McInerney5, Taesun Moon6, Arvind Neelakantan8, Diarmuid O’Seaghdha7, Dragomir Radev4, Clay Templeton3, Simone Teufel7 1Columbia University, 2University of Maryland, 3University of Texas at Austin, 4University of Michigan, 5Rutgers University, 6IBM, 7Cambridge University, 8University of Massachusetts at Amherst",
"title": ""
},
{
"docid": "ce2ef27f032d30ce2bc6aa5509a58e49",
"text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.",
"title": ""
}
]
negative_passages:
[
{
"docid": "b5cce2a39a51108f9191bdd3516646ca",
"text": "The aim of component technology is the replacement of large monolithic applications with sets of smaller software components, whose particular functionality and interoperation can be adapted to users’ needs. However, the adaptation mechanisms of component software are still limited. Most proposals concentrate on adaptations that can be achieved either at compile time or at link time. Current support for dynamic component adaptation, i.e. unanticipated, incremental modifications of a component system at run-time, is not sufficient. This paper proposes object-based inheritance (also known as delegation) as a complement to purely forwarding-based object composition. It presents a typesafe integration of delegation into a class-based object model and shows how it overcomes the problems faced by forwarding-based component interaction, how it supports independent extensibility of components and unanticipated, dynamic component adaptation.",
"title": ""
},
{
"docid": "7ccbb730f1ce8eca687875c632520545",
"text": "Increasing cost of the fertilizers with lesser nutrient use efficiency necessitates alternate means to fertilizers. Soil is a storehouse of nutrients and energy for living organisms under the soil-plant-microorganism system. These rhizospheric microorganisms are crucial components of sustainable agricultural ecosystems. They are involved in sustaining soil as well as crop productivity under organic matter decomposition, nutrient transformations, and biological nutrient cycling. The rhizospheric microorganisms regulate the nutrient flow in the soil through assimilating nutrients, producing biomass, and converting organically bound forms of nutrients. Soil microorganisms play a significant role in a number of chemical transformations of soils and thus, influence the availability of macroand micronutrients. Use of plant growth-promoting microorganisms (PGPMs) helps in increasing yields in addition to conventional plant protection. The most important PGPMs are Azospirillum, Azotobacter, Bacillus subtilis, B. mucilaginosus, B. edaphicus, B. circulans, Paenibacillus spp., Acidithiobacillus ferrooxidans, Pseudomonas, Burkholderia, potassium, phosphorous, zinc-solubilizing V.S. Meena (*) Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India Indian Council of Agricultural Research – Vivekananda Institute of Hill Agriculture, Almora 263601, Uttarakhand, India e-mail: [email protected]; [email protected] I. Bahadur • B.R. Maurya Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India A. Kumar Department of Botany, MMV, Banaras Hindu University, Varanasi 221005, India R.K. Meena Department of Plant Sciences, School of Life Sciences, University of Hyderabad, Hyderabad 500046, TG, India S.K. Meena Division of Soil Science and Agricultural Chemistry, Indian Agriculture Research Institute, New Delhi 110012, India J.P. Verma Institute of Environment and Sustainable Development, Banaras Hindu University, Varanasi 22100, Uttar Pradesh, India # Springer India 2016 V.S. Meena et al. (eds.), Potassium Solubilizing Microorganisms for Sustainable Agriculture, DOI 10.1007/978-81-322-2776-2_1 1 microorganisms, or SMART microbes; these are eco-friendly and environmentally safe. The rhizosphere is the important area of soil influenced by plant roots. It is composed of huge microbial populations that are somehow different from the rest of the soil population, generally denominated as the “rhizosphere effect.” The rhizosphere is the small region of soil that is immediately near to the root surface and also affected by root exudates.",
"title": ""
},
{
"docid": "da7b39dce3c7c8a08f11db132925fe37",
"text": "In this paper, a new language identification system is presented based on the total variability approach previously developed in the field of speaker identification. Various techniques are employed to extract the most salient features in the lower dimensional i-vector space and the system developed results in excellent performance on the 2009 LRE evaluation set without the need for any post-processing or backend techniques. Additional performance gains are observed when the system is combined with other acoustic systems.",
"title": ""
},
{
"docid": "0bb73266d8e4c18503ccda4903856e44",
"text": "Recent progress in advanced driver assistance systems and the race towards autonomous vehicles is mainly driven by two factors: (1) increasingly sophisticated algorithms that interpret the environment around the vehicle and react accordingly, and (2) the continuous improvements of sensor technology itself. In terms of cameras, these improvements typically include higher spatial resolution, which as a consequence requires more data to be processed. The trend to add multiple cameras to cover the entire surrounding of the vehicle is not conducive in that matter. At the same time, an increasing number of special purpose algorithms need access to the sensor input data to correctly interpret the various complex situations that can occur, particularly in urban traffic. By observing those trends, it becomes clear that a key challenge for vision architectures in intelligent vehicles is to share computational resources. We believe this challenge should be faced by introducing a representation of the sensory data that provides compressed and structured access to all relevant visual content of the scene. The Stixel World discussed in this paper is such a representation. It is a medium-level model of the environment that is specifically designed to compress information about obstacles by leveraging the typical layout of outdoor traffic scenes. It has proven useful for a multi∗Corresponding author: [email protected] Authors contributed equally and are listed in alphabetical order Preprint submitted to Image and Vision Computing February 14, 2017 tude of automotive vision applications, including object detection, tracking, segmentation, and mapping. In this paper, we summarize the ideas behind the model and generalize it to take into account multiple dense input streams: the image itself, stereo depth maps, and semantic class probability maps that can be generated, e.g ., by deep convolutional neural networks. Our generalization is embedded into a novel mathematical formulation for the Stixel model. We further sketch how the free parameters of the model can be learned using structured SVMs.",
"title": ""
},
{
"docid": "601d9060ac35db540cdd5942196db9e0",
"text": "In this paper, we review nine visualization techniques that can be used for visual exploration of multidimensional financial data. We illustrate the use of these techniques by studying the financial performance of companies from the pulp and paper industry. We also illustrate the use of visualization techniques for detecting multivariate outliers, and other patterns in financial performance data in the form of clusters, relationships, and trends. We provide a subjective comparison between different visualization techniques as to their capabilities for providing insight into financial performance data. The strengths of each technique and the potential benefits of using multiple visualization techniques for gaining insight into financial performance data are highlighted.",
"title": ""
},
{
"docid": "7abad18b2ddc66b07267ef76b109d1c9",
"text": "Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem which involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization for the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm \"Aggregation Decision, Optimal with Complete Knowledge\" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called \"Aggregation Decision with Weighted Publication\" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.",
"title": ""
},
{
"docid": "d795351a71887f46f9729e8e06a69bc6",
"text": "This research finds out what criteria Ethereum needs to fulfil to replace paper contracts and if it fulfils them. It dives into aspects such as privacy and security of the blockchain and its contracts, and if it is even possible at all to place a contract on the blockchain. However, due to the variety of contract clauses and a large privacy setback, it is not recommended to place paper contracts on the Ethereum blockchain.",
"title": ""
},
{
"docid": "ae7009ff00bec61884759b6eacf7e6b2",
"text": "Four novel terephthaloyl thiourea chitosan (TTU-chitosan) hydrogels were synthesized via a cross-linking reaction of chitosan with different concentrations of terephthaloyl diisothiocyanate. Their structures were investigated by elemental analyses, FTIR, SEM and X-ray diffraction. The antimicrobial activities of the hydrogels against three species of bacteria (Bacillis subtilis, Staphylococcus aureus and Escherichia coli) and three crop-threatening pathogenic fungi (Aspergillus fumigatus, Geotrichum candidum and Candida albicans) are much higher than that of the parent chitosan. The hydrogels were more potent in case of Gram-positive bacteria than Gram-negative bacteria. Increasing the degree of cross-linking in the hydrogels resulted in a stronger antimicrobial activity.",
"title": ""
},
{
"docid": "d1b509ce63a9ca777d6a0d4d8af19ae3",
"text": "The study explores the reliability, validity, and measurement invariance of the Video game Addiction Test (VAT). Game-addiction problems are often linked to Internet enabled online games; the VAT has the unique benefit that it is theoretically and empirically linked to Internet addiction. The study used data (n=2,894) from a large-sample paper-and-pencil questionnaire study, conducted in 2009 on secondary schools in Netherlands. Thus, the main source of data was a large sample of schoolchildren (aged 13-16 years). Measurements included the proposed VAT, the Compulsive Internet Use Scale, weekly hours spent on various game types, and several psychosocial variables. The VAT demonstrated excellent reliability, excellent construct validity, a one-factor model fit, and a high degree of measurement invariance across gender, ethnicity, and learning year, indicating that the scale outcomes can be compared across different subgroups with little bias. In summary, the VAT can be helpful in the further study of video game addiction, and it contributes to the debate on possible inclusion of behavioral addictions in the upcoming DSM-V.",
"title": ""
},
{
"docid": "55285f99e1783bcba47ab41e56171026",
"text": "Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.",
"title": ""
},
{
"docid": "70eac68ec33cdf99fee4a16f2cee468a",
"text": "Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.",
"title": ""
},
{
"docid": "0df3e40b3fa44121943de03941fdddc0",
"text": "From generation to generation in all countries all around the world medicinal plants play an important role in our live from ancient time till these days of wide drugs and pharmacological high technique industries , the studding of biological and pharmacological activities of plant essential oils attracted the attention to the potential use of these natural products from chemical and pharmacological investigation to their therapeutic aspects. In this paper two resins commiphora Africana and commiphora myrrha were selected to discuss their essential oils for chemical analysis and biological aspect the results of GCMS shows that the two resins are rich in sesqiuterpenes and sesqiuterpene lactones compounds that possess anti-inflammatory and antitumor activity Antibacterial and antifungal bioassay shows antibacterial and antifungal activity higher in the myrrha oil than the Africana oil while antiviral bioassay shows higher antiviral activity in the Africana oil than myrrha oil",
"title": ""
},
{
"docid": "c60693035f0f99528a741fe5e3d88219",
"text": "Transmit array design is more challenging for dual-band operation than for single band, due to the independent 360° phase wrapping jumps needed at each band when large electrical length compensation is involved. This happens when aiming at large gains, typically above 25 dBi with beam scanning and $F/D \\le 1$ . No such designs have been reported in the literature. A general method is presented here to reduce the complexity of dual-band transmit array design, valid for arbitrarily large phase error compensation and any band ratio, using a finite number of different unit cells. The procedure is demonstrated for two offset transmit array implementations operating in circular polarization at 20 GHz(Rx) and 30 GHz(Tx) for Ka-band satellite-on-the-move terminals with mechanical beam-steering. An appropriate set of 30 dual-band unit cells is developed with transmission coefficient greater than −0.9 dB. The full-size transmit array is characterized by full-wave simulation enabling elevation beam scanning over 0°–50° with gains reaching 26 dBi at 20 GHz and 29 dBi at 30 GHz. A smaller prototype was fabricated and measured, showing a measured gain of 24 dBi at 20 GHz and 27 dBi at 30 GHz. In both cases, the beam pointing direction is coincident over the two frequency bands, and thus confirming the proposed design procedure.",
"title": ""
},
{
"docid": "f9765c97a101a163a486b18e270d67f5",
"text": "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any lp norm (p ≥ 1) on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random/adversarial input transform) and regularization techniques such as weight decay, dropout, and batch norm. 2",
"title": ""
},
{
"docid": "682b3d97bdadd988b0a21d5dd6774fbc",
"text": "WTF (\"Who to Follow\") is Twitter's user recommendation service, which is responsible for creating millions of connections daily between users based on shared interests, common connections, and other related factors. This paper provides an architectural overview and shares lessons we learned in building and running the service over the past few years. Particularly noteworthy was our design decision to process the entire Twitter graph in memory on a single server, which significantly reduced architectural complexity and allowed us to develop and deploy the service in only a few months. At the core of our architecture is Cassovary, an open-source in-memory graph processing engine we built from scratch for WTF. Besides powering Twitter's user recommendations, Cassovary is also used for search, discovery, promoted products, and other services as well. We describe and evaluate a few graph recommendation algorithms implemented in Cassovary, including a novel approach based on a combination of random walks and SALSA. Looking into the future, we revisit the design of our architecture and comment on its limitations, which are presently being addressed in a second-generation system under development.",
"title": ""
},
{
"docid": "088078841a9bf35bcfb38c1d85573860",
"text": "Multilingual Word Embeddings (MWEs) represent words from multiple languages in a single distributional vector space. Unsupervised MWE (UMWE) methods acquire multilingual embeddings without cross-lingual supervision, which is a significant advantage over traditional supervised approaches and opens many new possibilities for low-resource languages. Prior art for learning UMWEs, however, merely relies on a number of independently trained Unsupervised Bilingual Word Embeddings (UBWEs) to obtain multilingual embeddings. These methods fail to leverage the interdependencies that exist among many languages. To address this shortcoming, we propose a fully unsupervised framework for learning MWEs1 that directly exploits the relations between all language pairs. Our model substantially outperforms previous approaches in the experiments on multilingual word translation and cross-lingual word similarity. In addition, our model even beats supervised approaches trained with cross-lingual resources.",
"title": ""
},
{
"docid": "4362bc019deebc239ba4b6bc2fee446e",
"text": "observed. It was mainly due to the developments in biological studies, the change of a population lifestyle and the increase in the consumer awareness concerning food products. The health quality of food depends mainly on nutrients, but also on foreign substances such as food additives. The presence of foreign substances in the food can be justified, allowed or tolerated only when they are harmless to our health. Epidemic obesity and diabetes encouraged the growth of the artificial sweetener industry. There are more and more people who are trying to lose weight or keeping the weight off; therefore, sweeteners can be now found in almost all food products. There are two main types of sweeteners, i.e., nutritive and artificial ones. The latter does not provide calories and will not influence blood glucose; however, some of nutritive sweeteners such as sugar alcohols also characterize with lower blood glucose response and can be metabolized without insulin, being at the same time natural compounds. Sugar alcohols (polyols or polyhydric alcohols) are low digestible carbohydrates, which are obtained by substituting and aldehyde group with a hydroxyl one [1, 2]. As most of sugar alcohols are produced from their corresponding aldose sugars, they are also called alditols [3]. Among sugar alcohols can be listed hydrogenated monosaccharides (sorbitol, mannitol), hydrogenated disaccharides (isomalt, maltitol, lactitol) and mixtures of hydrogenated mono-diand/or oligosaccharides (hydrogenated starch hydrolysates) [1, 2, 4]. Polyols are naturally present in smaller quantities in fruits as well as in certain kinds of vegetables or mushrooms, and they are also regulated as either generally recognized as safe or food additives [5–7]. Food additives are substances that are added intentionally to foodstuffs in order to perform certain technological functions such as to give color, to sweeten or to help in food preservation. Abstract Epidemic obesity and diabetes encouraged the changes in population lifestyle and consumers’ food products awareness. Food industry has responded people’s demand by producing a number of energy-reduced products with sugar alcohols as sweeteners. These compounds are usually produced by a catalytic hydrogenation of carbohydrates, but they can be also found in nature in fruits, vegetables or mushrooms as well as in human organism. Due to their properties, sugar alcohols are widely used in food, beverage, confectionery and pharmaceutical industries throughout the world. They have found use as bulk sweeteners that promote dental health and exert prebiotic effect. They are added to foods as alternative sweeteners what might be helpful in the control of calories intake. Consumption of low-calorie foods by the worldwide population has dramatically increased, as well as health concerns associated with the consequent high intake of sweeteners. This review deals with the role of commonly used sugar alcohols such as erythritol, isomalt, lactitol, maltitol, mannitol, sorbitol and xylitol as sugar substitutes in food industry.",
"title": ""
},
{
"docid": "c0283c87e2a8305ba43ce87bf74a56a6",
"text": "Real-world deployments of accelerometer-based human activity recognition systems need to be carefully configured regarding the sampling rate used for measuring acceleration. Whilst a low sampling rate saves considerable energy, as well as transmission bandwidth and storage capacity, it is also prone to omitting relevant signal details that are of interest for contemporary analysis tasks. In this paper we present a pragmatic approach to optimising sampling rates of accelerometers that effectively tailors recognition systems to particular scenarios, thereby only relying on unlabelled sample data from the domain. Employing statistical tests we analyse the properties of accelerometer data and determine optimal sampling rates through similarity analysis. We demonstrate the effectiveness of our method in experiments on 5 benchmark datasets where we determine optimal sampling rates that are each substantially below those originally used whilst maintaining the accuracy of reference recognition systems. c © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4bf86b129afab00ebe60e6ad39117177",
"text": "Migrating to microservices (microservitization) enables optimising the autonomy, replaceability, decentralised governance and traceability of software architectures. Despite the hype for microservitization , the state of the art still lacks consensus on the definition of microservices, their properties and their modelling techniques. This paper summarises views of microservices from informal literature to reflect on the foundational context of this paradigm shift. A strong foundational context can advance our understanding of microservitization and help guide software architects in addressing its design problems. One such design problem is finalising the optimal level of granularity of a microservice architecture. Related design trade-offs include: balancing the size and number of microservices in an architecture and balancing the nonfunctional requirement satisfaction levels of the individual microservices as well as their satisfaction for the overall system. We propose how self-adaptivity can assist in addressing these design trade-offs and discuss some of the challenges such a selfadaptive solution. We use a hypothetical online movie streaming system to motivate these design trade-offs. A solution roadmap is presented in terms of the phases of a feedback control loop.",
"title": ""
},
{
"docid": "e3bb16dfbe54599c83743e5d7f1facc6",
"text": "Testosterone-dependent secondary sexual characteristics in males may signal immunological competence and are sexually selected for in several species,. In humans, oestrogen-dependent characteristics of the female body correlate with health and reproductive fitness and are found attractive. Enhancing the sexual dimorphism of human faces should raise attractiveness by enhancing sex-hormone-related cues to youth and fertility in females,, and to dominance and immunocompetence in males,,. Here we report the results of asking subjects to choose the most attractive faces from continua that enhanced or diminished differences between the average shape of female and male faces. As predicted, subjects preferred feminized to average shapes of a female face. This preference applied across UK and Japanese populations but was stronger for within-population judgements, which indicates that attractiveness cues are learned. Subjects preferred feminized to average or masculinized shapes of a male face. Enhancing masculine facial characteristics increased both perceived dominance and negative attributions (for example, coldness or dishonesty) relevant to relationships and paternal investment. These results indicate a selection pressure that limits sexual dimorphism and encourages neoteny in humans.",
"title": ""
}
]
subset: scidocsrr

query_id: 636f172b02e5af09431bf0c148ce9de8
query: Swarm intelligence based routing protocol for wireless sensor networks: Survey and future directions
positive_passages:
[
{
"docid": "510b9b709d8bd40834ed0409d1e83d4d",
"text": "In this paper we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the Nature-inspired Ant Colony Optimization framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet’s performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks.",
"title": ""
},
{
"docid": "376c9736ccd7823441fd62c46eee0242",
"text": "Description: Infrastructure for Homeland Security Environments Wireless Sensor Networks helps readers discover the emerging field of low-cost standards-based sensors that promise a high order of spatial and temporal resolution and accuracy in an ever-increasing universe of applications. It shares the latest advances in science and engineering paving the way towards a large plethora of new applications in such areas as infrastructure protection and security, healthcare, energy, food safety, RFID, ZigBee, and processing. Unlike other books on wireless sensor networks that focus on limited topics in the field, this book is a broad introduction that covers all the major technology, standards, and application topics. It contains everything readers need to know to enter this burgeoning field, including current applications and promising research and development; communication and networking protocols; middleware architecture for wireless sensor networks; and security and management. The straightforward and engaging writing style of this book makes even complex concepts and processes easy to follow and understand. In addition, it offers several features that help readers grasp the material and then apply their knowledge in designing their own wireless sensor network systems: Examples illustrate how concepts are applied to the development and application of wireless sensor networks Detailed case studies set forth all the steps of design and implementation needed to solve real-world problems Chapter conclusions that serve as an excellent review by stressing the chapter's key concepts References in each chapter guide readers to in-depth discussions of individual topics This book is ideal for networking designers and engineers who want to fully exploit this new technology and for government employees who are concerned about homeland security. With its examples, it is appropriate for use as a coursebook for upper-level undergraduates and graduate students.",
"title": ""
}
]
negative_passages:
[
{
"docid": "7ca908e7896afc49a0641218e1c4febf",
"text": "Timely and accurate classification and interpretation of high-resolution images are very important for urban planning and disaster rescue. However, as spatial resolution gets finer, it is increasingly difficult to recognize complex patterns in high-resolution remote sensing images. Deep learning offers an efficient strategy to fill the gap between complex image patterns and their semantic labels. However, due to the hierarchical abstract nature of deep learning methods, it is difficult to capture the precise outline of different objects at the pixel level. To further reduce this problem, we propose an object-based deep learning method to accurately classify the high-resolution imagery without intensive human involvement. In this study, high-resolution images were used to accurately classify three different urban scenes: Beijing (China), Pavia (Italy), and Vaihingen (Germany). The proposed method is built on a combination of a deep feature learning strategy and an object-based classification for the interpretation of high-resolution images. Specifically, high-level feature representations extracted through the convolutional neural networks framework have been systematically investigated over five different layer configurations. Furthermore, to improve the classification accuracy, an object-based classification method also has been integrated with the deep learning strategy for more efficient image classification. Experimental results indicate that with the combination of deep learning and object-based classification, it is possible to discriminate different building types in Beijing Scene, such as commercial buildings and residential buildings with classification accuracies above 90%.",
"title": ""
},
{
"docid": "5ed1a40b933e44f0a7f7240bbca24ab4",
"text": "We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states and actions, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off.",
"title": ""
},
{
"docid": "90a1fc43ee44634bce3658463503994e",
"text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",
"title": ""
},
{
"docid": "61411c55041f40c3b0c63f3ebd4c621f",
"text": "This paper presents an application of neural network approach for the prediction of peak ground acceleration (PGA) using the strong motion data from Turkey, as a soft computing technique to remove uncertainties in attenuation equations. A training algorithm based on the Fletcher–Reeves conjugate gradient back-propagation was developed and employed for three sample sets of strong ground motion. The input variables in the constructed artificial neural network (ANN) model were the magnitude, the source-to-site distance and the site conditions, and the output was the PGA. The generalization capability of ANN algorithms was tested with the same training data. To demonstrate the authenticity of this approach, the network predictions were compared with the ones from regressions for the corresponding attenuation equations. The results indicated that the fitting between the predicted PGA values by the networks and the observed ones yielded high correlation coefficients (R). In addition, comparisons of the correlations by the ANN and the regression method showed that the ANN approach performed better than the regression. Even though the developed ANN models suffered from optimal configuration about the generalization capability, they can be conservatively used to well understand the influence of input parameters for the PGA predictions. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "34f6603912c9775fc48329e596467107",
"text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.",
"title": ""
},
{
"docid": "c4be39977487cdebc8127650c8eda433",
"text": "Unfavorable wake and separated flow from the hull might cause a dramatic decay of the propeller performance in single-screw propelled vessels such as tankers, bulk carriers and containers. For these types of vessels, special attention has to be paid to the design of the stern region, the occurrence of a good flow towards the propeller and rudder being necessary to avoid separation and unsteady loads on the propeller blades and, thus, to minimize fuel consumption and the risk for cavitation erosion and vibrations. The present work deals with the analysis of the propeller inflow in a single-screw chemical tanker vessel affected by massive flow separation in the stern region. Detailed flow measurements by Laser Doppler Velocimetry (LDV) were performed in the propeller region at model scale, in the Large Circulating Water Channel of CNR-INSEAN. Tests were undertaken with and without propeller in order to investigate its effect on the inflow characteristics and the separation mechanisms. In this regard, the study concerned also a phase locked analysis of the propeller perturbation at different distances upstream of the propulsor. The study shows the effectiveness of the 3 order statistical moment (i.e. skewness) for describing the topology of the wake and accurately identifying the portion affected by the detached flow.",
"title": ""
},
{
"docid": "1909d62daf3df32fad94d6a205cc0a8c",
"text": "Scalability properties of deep neural networks raise key re search questions, particularly as the problems considered become larger and more challenging. This paper expands on the idea of conditional computation introd uce in [2], where the nodes of a deep network are augmented by a set of gating uni ts that determine when a node should be calculated. By factorizing the wei ght matrix into a low-rank approximation, an estimation of the sign of the pr -nonlinearity activation can be efficiently obtained. For networks using rec tifi d-linear hidden units, this implies that the computation of a hidden unit wit h an estimated negative pre-nonlinearity can be omitted altogether, as its val ue will become zero when nonlinearity is applied. For sparse neural networks, this c an result in considerable speed gains. Experimental results using the MNIST and SVHN d ata sets with a fully-connected deep neural network demonstrate the perf ormance robustness of the proposed scheme with respect to the error introduced b y the conditional computation process.",
"title": ""
},
{
"docid": "94e2bfa218791199a59037f9ea882487",
"text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.",
"title": ""
},
{
"docid": "f64e65df9db7219336eafb20d38bf8cf",
"text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.",
"title": ""
},
{
"docid": "d0cdbd1137e9dca85d61b3d90789d030",
"text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).",
"title": ""
},
{
"docid": "0cca7892dc3a741deca22f7699e1ed7e",
"text": "Document polarity detection is a part of sentiment analysis where a document is classified as a positive polarity document or a negative polarity document. The applications of polarity detection are content filtering and opinion mining. Content filtering of negative polarity documents is an important application to protect children from negativity and can be used in security filters of organizations. In this paper, dictionary based method using polarity lexicon and machine learning algorithms are applied for polarity detection of Kannada language documents. In dictionary method, a manually created polarity lexicon of 5043 Kannada words is used and compared with machine learning algorithms like Naïve Bayes and Maximum Entropy. It is observed that performance of Naïve Bayes and Maximum Entropy is better than dictionary based method with accuracy of 0.90, 0.93 and 0.78 respectively.",
"title": ""
},
{
"docid": "a448b5e4e4bd017049226f06ce32fa9d",
"text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.",
"title": ""
},
{
"docid": "6936b03672c64798ca4be118809cc325",
"text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.",
"title": ""
},
{
"docid": "0131e5a748fb70627746068d33553eca",
"text": "Fast changing, increasingly complex, and diverse computing platforms pose central problems in scientific computing: How to achieve, with reasonable effort, portable optimal performance? We present SPIRAL, which considers this problem for the performance-critical domain of linear digital signal processing (DSP) transforms. For a specified transform, SPIRAL automatically generates high-performance code that is tuned to the given platform. SPIRAL formulates the tuning as an optimization problem and exploits the domain-specific mathematical structure of transform algorithms to implement a feedback-driven optimizer. Similar to a human expert, for a specified transform, SPIRAL \"intelligently\" generates and explores algorithmic and implementation choices to find the best match to the computer's microarchitecture. The \"intelligence\" is provided by search and learning techniques that exploit the structure of the algorithm and implementation space to guide the exploration and optimization. SPIRAL generates high-performance code for a broad set of DSP transforms, including the discrete Fourier transform, other trigonometric transforms, filter transforms, and discrete wavelet transforms. Experimental results show that the code generated by SPIRAL competes with, and sometimes outperforms, the best available human tuned transform library code.",
"title": ""
},
{
"docid": "0d2e5667545ebc9380416f9f625dd836",
"text": "New developments in assistive technology are likely to make an important contribution to the care of elderly people in institutions and at home. Video-monitoring, remote health monitoring, electronic sensors and equipment such as fall detectors, door monitors, bed alerts, pressure mats and smoke and heat alarms can improve older people's safety, security and ability to cope at home. Care at home is often preferable to patients and is usually less expensive for care providers than institutional alternatives.",
"title": ""
},
{
"docid": "e8f15d3689f1047cd05676ebd72cc0fc",
"text": "We argue that in fully-connected networks a phase transition delimits the overand under-parametrized regimes where fitting can or cannot be achieved. Under some general conditions, we show that this transition is sharp for the hinge loss. In the whole over-parametrized regime, poor minima of the loss are not encountered during training since the number of constraints to satisfy is too small to hamper minimization. Our findings support a link between this transition and the generalization properties of the network: as we increase the number of parameters of a given model, starting from an under-parametrized network, we observe that the generalization error displays three phases: (i) initial decay, (ii) increase until the transition point — where it displays a cusp — and (iii) slow decay toward a constant for the rest of the over-parametrized regime. Thereby we identify the region where the classical phenomenon of over-fitting takes place, and the region where the model keeps improving, in line with previous empirical observations for modern neural networks.",
"title": ""
},
{
"docid": "574259df6c01fd0c46160b3f8548e4e7",
"text": "Hashtag has emerged as a widely used concept of popular culture and campaigns, but its implications on people’s privacy have not been investigated so far. In this paper, we present the first systematic analysis of privacy issues induced by hashtags. We concentrate in particular on location, which is recognized as one of the key privacy concerns in the Internet era. By relying on a random forest model, we show that we can infer a user’s precise location from hashtags with accuracy of 70% to 76%, depending on the city. To remedy this situation, we introduce a system called Tagvisor that systematically suggests alternative hashtags if the user-selected ones constitute a threat to location privacy. Tagvisor realizes this by means of three conceptually different obfuscation techniques and a semantics-based metric for measuring the consequent utility loss. Our findings show that obfuscating as little as two hashtags already provides a near-optimal trade-off between privacy and utility in our dataset. This in particular renders Tagvisor highly time-efficient, and thus, practical in real-world settings.",
"title": ""
},
{
"docid": "1a5b28583eaf7cab8cc724966d700674",
"text": "Advertising (ad) revenue plays a vital role in supporting free websites. When the revenue dips or increases sharply, ad system operators must find and fix the rootcause if actionable, for example, by optimizing infrastructure performance. Such revenue debugging is analogous to diagnosis and root-cause analysis in the systems literature but is more general. Failure of infrastructure elements is only one potential cause; a host of other dimensions (e.g., advertiser, device type) can be sources of potential causes. Further, the problem is complicated by derived measures such as costs-per-click that are also tracked along with revenue. Our paper takes the first systematic look at revenue debugging. Using the concepts of explanatory power, succinctness, and surprise, we propose a new multidimensional root-cause algorithm for fundamental and derived measures of ad systems to identify the dimension mostly likely to blame. Further, we implement the attribution algorithm and a visualization interface in a tool called the Adtributor to help troubleshooters quickly identify potential causes. Based on several case studies on a very large ad system and extensive evaluation, we show that the Adtributor has an accuracy of over 95% and helps cut down troubleshooting time by an order of magnitude.",
"title": ""
},
{
"docid": "30279db171fffe6fac561541a5d175ca",
"text": "Deformable displays can provide two major benefits compared to rigid displays: Objects of different shapes and deformabilities, situated in our physical environment, can be equipped with deformable displays, and users can benefit from their pre-existing knowledge about the interaction with physical objects when interacting with deformable displays. In this article we present InformationSense, a large, highly deformable cloth display. The article contributes to two research areas in the context of deformable displays: It presents an approach for the tracking of large, highly deformable surfaces, and it presents one of the first UX analyses of cloth displays that will help with the design of future interaction techniques for this kind of display. The comparison of InformationSense with a rigid display interface unveiled the trade-off that while users are able to interact with InformationSense more naturally and significantly preferred InformationSense in terms of joy of use, they preferred the rigid display interfaces in terms of efficiency. This suggests that deformable displays are already suitable if high hedonic qualities are important but need to be enhanced with additional digital power if high pragmatic qualities are required.",
"title": ""
},
{
"docid": "18e5b72779f6860e2a0f2ec7251b0718",
"text": "This paper presents a novel dielectric resonator filter exploiting dual TM11 degenerate modes. The dielectric rod resonators are short circuited on the top and bottom surfaces to the metallic cavity. The dual-mode cavities can be conveniently arranged in many practical coupling configurations. Through-holes in height direction are made in each of the dielectric rods for the frequency tuning and coupling screws. All the coupling elements, including inter-cavity coupling elements, are accessible from the top of the filter cavity. This planar coupling configuration is very attractive for composing a diplexer or a parallel multifilter assembly using the proposed filter structure. To demonstrate the new filter technology, two eight-pole filters with cross-couplings for UMTS band are prototyped and tested. It has been experimentally shown that as compared to a coaxial combline filter with a similar unloaded Q, the proposed dual-mode filter can save filter volume by more than 50%. Moreover, a simple method that can effectively suppress the lower band spurious mode is also presented.",
"title": ""
}
] |
scidocsrr
|
95dae1c267cfb5f8cd2d5206f0d66194
|
Fit4life: the design of a persuasive technology promoting healthy behavior and ideal weight
|
[
{
"docid": "8d292592202c948c439f055ca5df9d56",
"text": "This paper provides an overview of the current state of the art in persuasive systems design. All peer-reviewed full papers published at the first three International Conferences on Persuasive Technology were analyzed employing a literature review framework. Results from this analysis are discussed and directions for future research are suggested. Most research papers so far have been experimental. Five out of six of these papers (84.4%) have addressed behavioral change rather than an attitude change. Tailoring, tunneling, reduction and social comparison have been the most studied methods for persuasion. Quite, surprisingly ethical considerations have remained largely unaddressed in these papers. In general, many of the research papers seem to describe the investigated persuasive systems in a relatively vague manner leaving room for some improvement.",
"title": ""
}
] |
[
{
"docid": "0fb7fa7907e33b3192946407607b54f2",
"text": "We present commensal cuckoo,* a secure group partitioning scheme for large-scale systems that maintains the correctness of many small groups, despite a Byzantine adversary that controls a constant (global) fraction of all nodes. In particular, the adversary is allowed to repeatedly rejoin faulty nodes to the system in an arbitrary adaptive manner, e.g., to collocate them in the same group. Commensal cuckoo addresses serious practical limitations of the state-ofthe- art scheme, the cuckoo rule of Awerbuch and Scheideler, tolerating 32x--41x more faulty nodes with groups as small as 64 nodes (as compared to the hundreds required by the cuckoo rule). Secure group partitioning is a key component of highly-scalable, reliable systems such as Byzantine faulttolerant distributed hash tables (DHTs).",
"title": ""
},
{
"docid": "ca26daaa9961f7ba2343ae84245c1181",
"text": "In a recently held WHO workshop it has been recommended to abandon the distinction between potentially malignant lesions and potentially malignant conditions and to use the term potentially malignant disorders instead. Of these disorders, leukoplakia and erythroplakia are the most common ones. These diagnoses are still defined by exclusion of other known white or red lesions. In spite of tremendous progress in the field of molecular biology there is yet no single marker that reliably enables to predict malignant transformation in an individual patient. The general advice is to excise or laser any oral of oropharyngeal leukoplakia/erythroplakia, if feasible, irrespective of the presence or absence of dysplasia. Nevertheless, it is actually unknown whether such removal truly prevents the possible development of a squamous cell carcinoma. At present, oral lichen planus seems to be accepted in the literature as being a potentially malignant disorder, although the risk of malignant transformation is lower than in leukoplakia. There are no means to prevent such event. The efficacy of follow-up of oral lichen planus is questionable. Finally, brief attention has been paid to oral submucous fibrosis, actinic cheilitis, some inherited cancer syndromes and immunodeficiency in relation to cancer predisposition.",
"title": ""
},
{
"docid": "5e15abdf0268acf2495a06a49a49eee7",
"text": "Analysis of large scale geonomics data, notably gene expres sion, has initially focused on clustering methods. Recently, biclustering techniques we re proposed for revealing submatrices showing unique patterns. We review some of the algorithmic a pproaches to biclustering and discuss their properties.",
"title": ""
},
{
"docid": "4718e64540f5b8d7399852fb0e16944a",
"text": "In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria on the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm in order to be able to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.",
"title": ""
},
{
"docid": "f86e3894a6c61c3734e1aabda3500ef0",
"text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.",
"title": ""
},
{
"docid": "002c83aada3dbbc19a1da7561c53fc4b",
"text": "The Swedish preschool is an important socializing agent because the great majority of children aged, from 1 to 5 years, are enrolled in an early childhood education program. This paper explores how preschool teachers and children, in an ethnically diverse preschool, negotiate the meaning of cultural traditions celebrated in Swedish preschools. Particular focus is given to narrative representations of cultural traditions as they are co-constructed and negotiated in preschool practice between teachers and children. Cultural traditions are seen as shared events in the children’s preschool life, as well as symbolic resources which enable children and preschool teachers to conceive themselves as part of a larger whole. The data analyzed are three videotaped circle time events focused on why a particular tradition is celebrated. Methodologically the analysis builds on a narrative approach inspired by Bakhtin’s notion of addressivity and on Alexander’s ideas about dialogic teaching. The results of the analysis show that the teachers attempt to achieve a balance between transferring traditional cultural and religious values and realizing a child-centered pedagogy, emphasizing the child’s initiative. The analyses also show that narratives with a religious tonality generate some uncertainty on how to communicate with the children about the traditions that are being discussed. These research findings are important because, in everyday practice, preschool teachers enact whether religion is regarded as an essential part of cultural socialization, while acting both as keepers of traditions and agents of change.",
"title": ""
},
{
"docid": "5351eb646699758a4c1dd1d4e9c35b26",
"text": "Interpersonal trust is one of the key components of efficient teamwork. Research suggests two main approaches for trust formation: personal information exchange (e.g., social icebreakers), and creating a context of risk and interdependence (e.g., trust falls). However, because these strategies are difficult to implement in an online setting, trust is more difficult to achieve and preserve in distributed teams. In this paper, we argue that games are an optimal environment for trust formation because they can simulate both risk and interdependence. Results of our online experiment show that a social game can be more effective than a social task at fostering interpersonal trust. Furthermore, trust formation through the game is reliable, but trust depends on several contingencies in the social task. Our work suggests that gameplay interactions do not merely promote impoverished versions of the rich ties formed through conversation; but rather engender genuine social bonds. \\",
"title": ""
},
{
"docid": "4411ff57ab4fbfdff76501fe2e3f6f4a",
"text": "Incorporating wireless transceivers with numerous antennas (such as Massive-MIMO) is a prospective way to increase the link capacity or enhance the energy efficiency of future communication systems. However, the benefits of such approach can be realized only when proper channel information is available at the transmitter. Since the amount of the channel information required by the transmitter is large with so many antennas, the feedback is arduous in practice, especially for frequency division duplexing (FDD) systems. This paper proposes channel feedback reduction techniques based on the theory of compressive sensing, which permits the transmitter to obtain channel information with acceptable accuracy under substantially reduced feedback load. Furthermore, by leveraging properties of compressive sensing, we present two adaptive feedback protocols, in which the feedback content can be dynamically configured based on channel conditions to improve the efficiency.",
"title": ""
},
{
"docid": "a774567d957ed0ea209b470b8eced563",
"text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.",
"title": ""
},
{
"docid": "7071a178d42011a39145066da2d08895",
"text": "This paper discusses the trend modeling for traffic time series. First, we recount two types of definitions for a long-term trend that appeared in previous studies and illustrate their intrinsic differences. We show that, by assuming an implicit temporal connection among the time series observed at different days/locations, the PCA trend brings several advantages to traffic time series analysis. We also describe and define the so-called short-term trend that cannot be characterized by existing definitions. Second, we sequentially review the role that trend modeling plays in four major problems in traffic time series analysis: abnormal data detection, data compression, missing data imputation, and traffic prediction. The relations between these problems are revealed, and the benefit of detrending is explained. For the first three problems, we summarize our findings in the last ten years and try to provide an integrated framework for future study. For traffic prediction problem, we present a new explanation on why prediction accuracy can be improved at data points representing the short-term trends if the traffic information from multiple sensors can be appropriately used. This finding indicates that the trend modeling is not only a technique to specify the temporal pattern but is also related to the spatial relation of traffic time series.",
"title": ""
},
{
"docid": "1e3d8e4d78052cfccc2f23dadcfa841b",
"text": "OBJECTIVE\nAlthough the underlying cause of Huntington's disease (HD) is well established, the actual pathophysiological processes involved remain to be fully elucidated. In other proteinopathies such as Alzheimer's and Parkinson's diseases, there is evidence for impairments of the cerebral vasculature as well as the blood-brain barrier (BBB), which have been suggested to contribute to their pathophysiology. We investigated whether similar changes are also present in HD.\n\n\nMETHODS\nWe used 3- and 7-Tesla magnetic resonance imaging as well as postmortem tissue analyses to assess blood vessel impairments in HD patients. Our findings were further investigated in the R6/2 mouse model using in situ cerebral perfusion, histological analysis, Western blotting, as well as transmission and scanning electron microscopy.\n\n\nRESULTS\nWe found mutant huntingtin protein (mHtt) aggregates to be present in all major components of the neurovascular unit of both R6/2 mice and HD patients. This was accompanied by an increase in blood vessel density, a reduction in blood vessel diameter, as well as BBB leakage in the striatum of R6/2 mice, which correlated with a reduced expression of tight junction-associated proteins and increased numbers of transcytotic vesicles, which occasionally contained mHtt aggregates. We confirmed the existence of similar vascular and BBB changes in HD patients.\n\n\nINTERPRETATION\nTaken together, our results provide evidence for alterations in the cerebral vasculature in HD leading to BBB leakage, both in the R6/2 mouse model and in HD patients, a phenomenon that may, in turn, have important pathophysiological implications.",
"title": ""
},
{
"docid": "85ccad436c7e7eed128825e3946ae0ef",
"text": "Recent research has made great strides in the field of detecting botnets. However, botnets of all kinds continue to plague the Internet, as many ISPs and organizations do not deploy these techniques. We aim to mitigate this state by creating a very low-cost method of detecting infected bot host. Our approach is to leverage the botnet detection work carried out by some organizations to easily locate collaborating bots elsewhere. We created BotMosaic as a countermeasure to IRC-based botnets. BotMosaic relies on captured bot instances controlled by a watermarker, who inserts a particular pattern into their network traffic. This pattern can then be detected at a very low cost by client organizations and the watermark can be tuned to provide acceptable false-positive rates. A novel feature of the watermark is that it is inserted collaboratively into the flows of multiple captured bots at once, in order to ensure the signal is strong enough to be detected. BotMosaic can also be used to detect stepping stones and to help trace back to the botmaster. It is content agnostic and can operate on encrypted traffic. We evaluate BotMosaic using simulations and a testbed deployment.",
"title": ""
},
{
"docid": "eabeed186d3ca4a372f5f83169d44e57",
"text": "In disciplines as diverse as social network analysis and neuroscience, many large graphs are believed to be composed of loosely connected smaller graph primitives, whose structure is more amenable to analysis We propose a robust, scalable, integrated methodology for community detection and community comparison in graphs. In our procedure, we first embed a graph into an appropriate Euclidean space to obtain a low-dimensional representation, and then cluster the vertices into communities. We next employ nonparametric graph inference techniques to identify structural similarity among these communities. These two steps are then applied recursively on the communities, allowing us to detect more fine-grained structure. We describe a hierarchical stochastic blockmodel—namely, a stochastic blockmodel with a natural hierarchical structure—and establish conditions under which our algorithm yields consistent estimates of model parameters and motifs, which we define to be stochastically similar groups of subgraphs. Finally, we demonstrate the effectiveness of our algorithm in both simulated and real data. Specifically, we address the problem of locating similar sub-communities in a partially reconstructed Drosophila connectome and in the social network Friendster.",
"title": ""
},
{
"docid": "b43cf46b0329172b6a9a6deadb6de8bc",
"text": "We present the approaches for the four video-tolanguage tasks of LSMDC 2016, including movie description, fill-in-the-blank, multiple-choice test, and movie retrieval. Our key idea is to adopt the semantic attention mechanism; we first build a set of attribute words that are consistently discovered on video frames, and then selectively fuse them with input words for more semantic representation and with output words for more accurate prediction. We show that our implementation of semantic attention indeed improves the performance of multiple video-tolanguage tasks. Specifically, the presented approaches participated in all the four tasks of the LSMDC 2016, and have won three of them, including fill-in-the-blank, multiplechoice test, and movie retrieval.",
"title": ""
},
{
"docid": "cc219b4f335c9e10f31db746b766b425",
"text": "Congenital tumors of the central nervous system (CNS) are often arbitrarily divided into “definitely congenital” (present or producing symptoms at birth), “probably congenital” (present or producing symptoms within the first week of life), and “possibly congenital” (present or producing symptoms within the first 6 months of life). They represent less than 2% of all childhood brain tumors. The clinical features of newborns include an enlarged head circumference, associated hydrocephalus, and asymmetric skull growth. At birth, a large head or a tense fontanel is the presenting sign in up to 85% of patients. Neurological symptoms as initial symptoms are comparatively rare. The prenatal diagnosis of congenital CNS tumors, while based on ultrasonography, has significantly benefited from the introduction of prenatal magnetic resonance imaging studies. Teratomas constitute about one third to one half of these tumors and are the most common neonatal brain tumor. They are often immature because of primitive neural elements and, rarely, a component of mixed malignant germ cell tumors. Other tumors include astrocytomas, choroid plexus papilloma, primitive neuroectodermal tumors, atypical teratoid/rhabdoid tumors, and medulloblastomas. Less common histologies include craniopharyngiomas and ependymomas. There is a strong predilection for supratentorial locations, different from tumors of infants and children. Differential diagnoses include spontaneous intracranial hemorrhage that can occur in the presence of coagulation factor deficiency or underlying vascular malformations, and congenital brain malformations, especially giant heterotopia. The prognosis for patients with congenital tumors is generally poor, usually because of the massive size of the tumor. However, tumors can be resected successfully if they are small and favorably located. The most favorable outcomes are achieved with choroid plexus tumors, where aggressive surgical treatment leads to disease-free survival.",
"title": ""
},
{
"docid": "3f5461231e7120be4fbddfd53c533a53",
"text": "OBJECTIVE\nTo develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common.\n\n\nSTUDY DESIGN\nRegression risk analysis estimates were compared with internal standards as well as with Mantel-Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR.\n\n\nDATA COLLECTION\nData sets produced using Monte Carlo simulations.\n\n\nPRINCIPAL FINDINGS\nRegression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases.\n\n\nCONCLUSIONS\nRegression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case-control studies, particularly when outcomes are common or effect size is large.",
"title": ""
},
{
"docid": "f16d93249254118060ce81b2f92faca5",
"text": "Radiologists are critically interested in promoting best practices in medical imaging, and to that end, they are actively developing tools that will optimize terminology and reporting practices in radiology. The RadLex® vocabulary, developed by the Radiological Society of North America (RSNA), is intended to create a unifying source for the terminology that is used to describe medical imaging. The RSNA Reporting Initiative has developed a library of reporting templates to integrate reusable knowledge, or meaning, into the clinical reporting process. This report presents the initial analysis of the intersection of these two major efforts. From 70 published radiology reporting templates, we extracted the names of 6,489 reporting elements. These terms were reviewed in conjunction with the RadLex vocabulary and classified as an exact match, a partial match, or unmatched. Of 2,509 unique terms, 1,017 terms (41%) matched exactly to RadLex terms, 660 (26%) were partial matches, and 832 reporting terms (33%) were unmatched to RadLex. There is significant overlap between the terms used in the structured reporting templates and RadLex. The unmatched terms were analyzed using the multidimensional scaling (MDS) visualization technique to reveal semantic relationships among them. The co-occurrence analysis with the MDS visualization technique provided a semantic overview of the investigated reporting terms and gave a metric to determine the strength of association among these terms.",
"title": ""
},
{
"docid": "bc781e8aa4fbc8ead4d996595ee49e72",
"text": "Recent studies of an increasing number of hominin fossils highlight regional and chronological diversities of archaic Homo in the Pleistocene of eastern Asia. However, such a realization is still based on limited geographical occurrences mainly from Indonesia, China and Russian Altai. Here we describe a newly discovered archaic Homo mandible from Taiwan (Penghu 1), which further increases the diversity of Pleistocene Asian hominins. Penghu 1 revealed an unexpectedly late survival (younger than 450 but most likely 190-10 thousand years ago) of robust, apparently primitive dentognathic morphology in the periphery of the continent, which is unknown among the penecontemporaneous fossil records from other regions of Asia except for the mid-Middle Pleistocene Homo from Hexian, Eastern China. Such patterns of geographic trait distribution cannot be simply explained by clinal geographic variation of Homo erectus between northern China and Java, and suggests survival of multiple evolutionary lineages among archaic hominins before the arrival of modern humans in the region.",
"title": ""
},
{
"docid": "e38cbee5c03319d15086e9c39f7f8520",
"text": "In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.",
"title": ""
},
{
"docid": "57c91bce931a23501f42772c103d15c1",
"text": "Faceted browsing is widely used in Web shops and product comparison sites. In these cases, a fixed ordered list of facets is often employed. This approach suffers from two main issues. First, one needs to invest a significant amount of time to devise an effective list. Second, with a fixed list of facets, it can happen that a facet becomes useless if all products that match the query are associated to that particular facet. In this work, we present a framework for dynamic facet ordering in e-commerce. Based on measures for specificity and dispersion of facet values, the fully automated algorithm ranks those properties and facets on top that lead to a quick drill-down for any possible target product. In contrast to existing solutions, the framework addresses e-commerce specific aspects, such as the possibility of multiple clicks, the grouping of facets by their corresponding properties, and the abundance of numeric facets. In a large-scale simulation and user study, our approach was, in general, favorably compared to a facet list created by domain experts, a greedy approach as baseline, and a state-of-the-art entropy-based solution.",
"title": ""
}
] |
scidocsrr
|
ef41e7316954743722eabdae8c8c7feb
|
Knowledge Management as an important tool in Organisational Management : A Review of Literature
|
[
{
"docid": "adcaa15fd8f1e7887a05d3cb1cd47183",
"text": "The dynamic capabilities framework analyzes the sources and methods of wealth creation and capture by private enterprise firms operating in environments of rapid technological change. The competitive advantage of firms is seen as resting on distinctive processes (ways of coordinating and combining), shaped by the firm's (specific) asset positions (such as the firm's portfolio of difftcult-to-trade knowledge assets and complementary assets), and the evolution path(s) it has aflopted or inherited. The importance of path dependencies is amplified where conditions of increasing retums exist. Whether and how a firm's competitive advantage is eroded depends on the stability of market demand, and the ease of replicability (expanding intemally) and imitatability (replication by competitors). If correct, the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing intemal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival's costs, and excludes new entrants. © 1997 by John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "907888b819c7f65fe34fb8eea6df9c93",
"text": "Most time-series datasets with multiple data streams have (many) missing measurements that need to be estimated. Most existing methods address this estimation problem either by interpolating within data streams or imputing across data streams; we develop a novel approach that does both. Our approach is based on a deep learning architecture that we call a Multidirectional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. To demonstrate the power of our approach we apply it to a familiar real-world medical dataset and demonstrate significantly improved performance.",
"title": ""
},
{
"docid": "5b8cb0c530daef4e267a8572349f1118",
"text": "I enjoy doing research in Computer Security and Software Engineering and specifically in mobile security and adversarial machine learning. A primary goal of my research is to build adversarial-resilient intelligent security systems. I have been developing such security systems for the mobile device ecosystem that serves billions of users, millions of apps, and hundreds of thousands of app developers. For an ecosystem of this magnitude, manual inspection or rule-based security systems are costly and error-prone. There is a strong need for intelligent security systems that can learn from experiences, solve problems, and use knowledge to adapt to new situations. However, achieving intelligence in security systems is challenging. In the cat-and-mouse game between security analysts and adversaries, the intelligence of adversaries also increases. In this never-ending game, the adversaries continuously evolve their attacks to be specifically adversarial to newly proposed intelligent security techniques. To address this challenge, I have been pursuing two lines of research: (1) enhancing intelligence of existing security systems to automate the security-decision making by techniques such as program analysis [11, 8, 10, 6, U6] , natural language processing (NLP) [9, 7, U7, 1] , and machine learning [8, 4, 3, 2] ; (2) guarding against emerging attacks specifically adversarial to these newly-proposed intelligent security techniques by developing corresponding defenses [13, U1, U2] and testing methodologies [12, 5] . Throughout these research efforts, my general research methodology is to extract insightful data for security systems (through program analysis and NLP techniques), to enable intelligent decision making in security systems (through machine learning techniques that learn from the extracted data), and to strengthen robustness of the security systems by generating adversarial-testing inputs to check these intelligent security techniques and building defense to prevent the adversarial attacks. With this methodology, my research has derived solutions that have high impact on real-world systems. For instance, my work on analysis and testing of mobile applications (apps) [11, 10] in collaboration with Tencent Ltd. has been deployed and adopted in daily testing of a mobile app named WeChat, a popular messenger app with over 900 million monthly active users. A number of tools grown out of my research have been adopted by companies such as Fujitsu [P1, P2, 13, 6] , Samsung [12, 5] , and IBM.",
"title": ""
},
{
"docid": "f86e3894a6c61c3734e1aabda3500ef0",
"text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.",
"title": ""
},
{
"docid": "d9a9339672121fb6c3baeb51f11bfcd8",
"text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of dierent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f9f1cf949093c41a84f3af854a2c4a8b",
"text": "Modern TCP implementations are capable of very high point-to-point bandwidths. Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.",
"title": ""
},
{
"docid": "9a86609ecefc5780a49ca638be4de64c",
"text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.",
"title": ""
},
{
"docid": "7082e7b9828c316b24f3113cb516a50d",
"text": "The analog voltage-controlled filter used in historical music synthesizers by Moog is modeled using a digital system, which is then compared in terms of audio measurements with the original analog filter. The analog model is mainly borrowed from D'Angelo's previous work. The digital implementation of the filter incorporates a recently proposed antialiasing method. This method enhances the clarity of output signals in the case of large-level input signals, which cause harmonic distortion. The combination of these two ideas leads to a novel digital model, which represents the state of the art in virtual analog musical filters. It is shown that without the antialiasing, the output signals in the nonlinear regime may be contaminated by undesirable spectral components, which are the consequence of aliasing, but that the antialiasing technique suppresses these components sufficiently. Comparison of measurements of the analog and digital filters show that the digital model is accurate within a few dB in the linear regime and has very similar behavior in the nonlinear regime in terms of distortion. The proposed digital filter model can be used as a building block in virtual analog music synthesizers.",
"title": ""
},
{
"docid": "7c17cb4da60caf8806027273c4c10708",
"text": "Recently, IEEE 802.11ax Task Group has adapted OFDMA as a new technique for enabling multi-user transmission. It has been also decided that the scheduling duration should be same for all the users in a multi-user OFDMA so that the transmission of the users should end at the same time. In order to realize that condition, the users with insufficient data should transmit null data (i.e. padding) to fill the duration. While this scheme offers strong features such as resilience to Overlapping Basic Service Set (OBSS) interference and ease of synchronization, it also poses major side issues of degraded throughput performance and waste of devices' energy. In this work, for OFDMA based 802.11 WLANs we first propose practical algorithm in which the scheduling duration is fixed and does not change from time to time. In the second algorithm the scheduling duration is dynamically determined in a resource allocation framework by taking into account the padding overhead, airtime fairness and energy consumption of the users. We analytically investigate our resource allocation problems through Lyapunov optimization techniques and show that our algorithms are arbitrarily close to the optimal performance at the price of reduced convergence rate. We also calculate the overhead of our algorithms in a realistic setup and propose solutions for the implementation issues.",
"title": ""
},
{
"docid": "8da9477e774902d4511d51a9ddb8b74b",
"text": "In modern system-on-chip architectures, specialized accelerators are increasingly used to improve performance and energy efficiency. The growing complexity of these systems requires the use of system-level design methodologies featuring high-level synthesis (HLS) for generating these components efficiently. Existing HLS tools, however, have limited support for the system-level optimization of memory elements, which typically occupy most of the accelerator area. We present a complete methodology for designing the private local memories (PLMs) of multiple accelerators. Based on the memory requirements of each accelerator, our methodology automatically determines an area-efficient architecture for the PLMs to guarantee performance and reduce the memory cost based on technology-related information. We implemented a prototype tool, called Mnemosyne, that embodies our methodology within a commercial HLS flow. We designed 13 complex accelerators for selected applications from two recently-released benchmark suites (Perfect and CortexSuite). With our approach we are able to reduce the memory cost of single accelerators by up to 45%. Moreover, when reusing memory IPs across accelerators, we achieve area savings that range between 17% and 55% compared to the case where the PLMs are designed separately.",
"title": ""
},
{
"docid": "52c0c6d1deacdca44df5000b2b437c78",
"text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.",
"title": ""
},
{
"docid": "883be979cd5e7d43ded67da1a40427ce",
"text": "This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution.",
"title": ""
},
{
"docid": "5588fd19a3d0d73598197ad465315fd6",
"text": "The growing need for Chinese natural language processing (NLP) is largely in a range of research and commercial applications. However, most of the currently Chinese NLP tools or components still have a wide range of issues need to be further improved and developed. FudanNLP is an open source toolkit for Chinese natural language processing (NLP), which uses statistics-based and rule-based methods to deal with Chinese NLP tasks, such as word segmentation, part-ofspeech tagging, named entity recognition, dependency parsing, time phrase recognition, anaphora resolution and so on.",
"title": ""
},
{
"docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9",
"text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.",
"title": ""
},
{
"docid": "b1d00c44127956ab703204490de0acd7",
"text": "The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.",
"title": ""
},
{
"docid": "4a81bfdcd2c3d543d2cb182fef28da6c",
"text": "A novel printed compact wide-band planar antenna for mobile handsets is proposed and analyzed in this paper. The radiating patch of the proposed antenna is designed jointly with the shape of the ground plane. A prototype of the proposed antenna with 30 mm in height and 50 mm in width has been fabricated and tested. Its operating bandwidth with voltage standing wave ratio (VSWR) lower than 3:1 is 870-2450 MHz, which covers the global system for mobile communication (GSM, 890-960 MHz), the global positioning system (GPS, 1575.42 MHz), digital communication system (DCS, 1710-1880 MHz), personal communication system (PCS, 1850-1990 MHz), universal mobile telecommunication system (UMTS, 1920-2170 MHz), and wireless local area network (WLAN, 2400-2484 MHz) bands. Therefore, it could be applicable for the existing and future mobile communication systems. Design details and experimental results are also presented and discussed.",
"title": ""
},
{
"docid": "6625c08d03f755550f2a34086b4ae600",
"text": "The general requirement in the automotive radar application is to measure the target range R and radial velocity vr simultaneously and unambiguously with high accuracy and resolution even in multitarget situations, which is a matter of the appropriate waveform design. Based on a single continuous wave chirp transmit signal, target range R and radial velocity vr cannot be measured in an unambiguous way. Therefore a so-called multiple frequency shift keying (MFSK) transmit signal was developed, which is applied to measure target range and radial velocity separately and simultaneously. In this case the radar measurement is based on a frequency and additionally on a phase measurement, which suffers from a lower estimation accuracy compared with a pure frequency measurement. This MFSK waveform can therefore be improved and outperformed by a chirp sequences waveform. Each chirp signal has in this case very short time duration Tchirp. Therefore the measured beat frequency fB is dominated by target range R and is less influenced by the radial velocity vr. The range and radial velocity estimation is based on two separate frequency measurements with high accuracy in both cases. Classical chirp sequence waveforms suffer from possible ambiguities in the velocity measurement. It is the objective of this paper to modify the classical chirp sequence to get an unambiguous velocity measurement even in multitarget situations.",
"title": ""
},
{
"docid": "a58d2058fd310ca553aee16a84006f96",
"text": "This systematic literature review describes the epidemiology of dengue disease in Mexico (2000-2011). The annual number of uncomplicated dengue cases reported increased from 1,714 in 2000 to 15,424 in 2011 (incidence rates of 1.72 and 14.12 per 100,000 population, respectively). Peaks were observed in 2002, 2007, and 2009. Coastal states were most affected by dengue disease. The age distribution pattern showed an increasing number of cases during childhood, a peak at 10-20 years, and a gradual decline during adulthood. All four dengue virus serotypes were detected. Although national surveillance is in place, there are knowledge gaps relating to asymptomatic cases, primary/secondary infections, and seroprevalence rates of infection in all age strata. Under-reporting of the clinical spectrum of the disease is also problematic. Dengue disease remains a serious public health problem in Mexico.",
"title": ""
},
{
"docid": "e9353d465c5dfd8af684d4e09407ea28",
"text": "An overview of the main contributions that introduced the use of nonresonating modes for the realization of pseudoelliptic narrowband waveguide filters is presented. The following are also highlighted: early work using asymmetric irises; oversized H-plane cavity; transverse magnetic cavity; TM dual-mode cavity; and multiple cavity filters.",
"title": ""
},
{
"docid": "48411ae0253630f6ac97be4b478a669f",
"text": "Recently, there has been increasing interest in low-cost, non-contact and pervasive methods for monitoring physiological information for the drivers. For the intelligent driver monitoring system there has been so many approaches like facial expression based method, driving behavior based method and physiological parameters based method. Physiological parameters such as, heart rate (HR), heart rate variability (HRV), respiration rate (RR) etc. are mainly used to monitor physical and mental state. Also, in recent decades, there has been increasing interest in low-cost, non-contact and pervasive methods for measuring physiological information. Monitoring physiological parameters based on camera images is such kind of expected methods that could offer a new paradigm for driver's health monitoring. In this paper, we review the latest developments in using camera images for non-contact physiological parameters that provides a resource for researchers and developers working in the area.",
"title": ""
}
] |
scidocsrr
|
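As a minimal sketch of the chirp-sequence processing described in the radar passage above (docid 6625c08d03f755550f2a34086b4ae600): the Python/NumPy code below simulates one point target and recovers range from a fast-time FFT and radial velocity from a slow-time FFT across chirps, i.e. the two separate frequency measurements the passage refers to. All numerical parameters (77 GHz carrier, 150 MHz bandwidth, chirp timing, target state) are invented for illustration and do not come from the passage.

import numpy as np

# Illustrative chirp-sequence (FMCW) processing for a single simulated point target.
c = 3e8                 # speed of light [m/s]
fc = 77e9               # carrier frequency [Hz] (assumed)
B = 150e6               # chirp bandwidth [Hz] (assumed)
T_chirp = 50e-6         # chirp duration [s] (assumed)
n_chirps = 128          # chirps per sequence
n_samples = 256         # fast-time samples per chirp
fs = n_samples / T_chirp

R_true, v_true = 30.0, 10.0   # true range [m] and radial velocity [m/s]

t = np.arange(n_samples) / fs
beat = np.zeros((n_chirps, n_samples), dtype=complex)
for k in range(n_chirps):
    tau = 2 * (R_true + v_true * k * T_chirp) / c   # round-trip delay for chirp k
    f_beat = B / T_chirp * tau                      # range-dominated beat frequency
    phi = 2 * np.pi * fc * tau                      # Doppler appears chirp to chirp
    beat[k] = np.exp(1j * (2 * np.pi * f_beat * t + phi))

# Fast-time FFT gives range, slow-time FFT across chirps gives velocity.
rng_fft = np.fft.fft(beat, axis=1)
dop_fft = np.fft.fftshift(np.fft.fft(rng_fft, axis=0), axes=0)
k_dop, k_rng = np.unravel_index(np.argmax(np.abs(dop_fft)), dop_fft.shape)

R_est = k_rng * fs / n_samples * c * T_chirp / (2 * B)
v_est = (k_dop - n_chirps / 2) / (n_chirps * T_chirp) * c / (2 * fc)
print(R_est, v_est)

With these toy values the estimates land within one range bin (1 m) and one Doppler bin (roughly 0.3 m/s) of the true target, which is the kind of separation of range and velocity the chirp-sequence waveform is intended to provide.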
b7fc4ae2c25e5b8abd031a4980887c91
|
Factors Influencing Customer Loyalty Toward Online Shopping
|
[
{
"docid": "5cdc962d9ce66938ad15829f8d0331ed",
"text": "This study aims to provide a picture of how relationship quality can influence customer loyalty or loyalty in the business-to-business context. Building on prior research, we propose relationship quality as a higher construct comprising trust, commitment, satisfaction and service quality. These dimensions of relationship quality can reasonably explain the influence of relationship quality on customer loyalty. This study follows the composite loyalty approach providing both behavioural aspects (purchase intentions) and attitudinal loyalty in order to fully explain the concept of customer loyalty. A literature search is undertaken in the areas of customer loyalty, relationship quality, perceived service quality, trust, commitment and satisfaction. This study then seeks to address the following research issues: Does relationship quality influence both aspects of customer loyalty? Which relationship quality dimensions influence each of the components of customer loyalty? This study was conducted in a business-to-business setting of the courier and freight delivery service industry in Australia. The survey was targeted to Australian Small to Medium Enterprises (SMEs). Two methods were chosen for data collection: mail survey and online survey. The total number of usable respondents who completed both survey was 306. In this study, a two step approach (Anderson and Gerbing 1988) was selected for measurement model and structural model. The results also show that all measurement models of relationship dimensions achieved a satisfactory level of fit to the data. The hypothesized relationships were estimated using structural equation modeling. The overall goodness of fit statistics shows that the structural model fits the data well. As the results show, to maintain customer loyalty to the supplier, a supplier may enhance all four aspects of relationship quality which are trust, commitment, satisfaction and service quality. Specifically, in order to enhance customer’s trust, a supplier should promote the customer’s trust in the supplier. In efforts to emphasize commitment, a supplier should focus on building affective aspects of commitment rather than calculative aspects. Satisfaction appears to be a crucial factor in maintaining purchase intentions whereas service quality will strongly enhance both purchase intentions and attitudinal loyalty.",
"title": ""
},
{
"docid": "a6e35b743c2cfd2cd764e5ad83decaa7",
"text": "An e-vendor’s website inseparably embodies an interaction with the vendor and an interaction with the IT website interface. Accordingly, research has shown two sets of unrelated usage antecedents by customers: 1) customer trust in the e-vendor and 2) customer assessments of the IT itself, specifically the perceived usefulness and perceived ease-of-use of the website as depicted in the technology acceptance model (TAM). Research suggests, however, that the degree and impact of trust, perceived usefulness, and perceived ease of use change with experience. Using existing, validated scales, this study describes a free-simulation experiment that compares the degree and relative importance of customer trust in an e-vendor vis-à-vis TAM constructs of the website, between potential (i.e., new) customers and repeat (i.e., experienced) ones. The study found that repeat customers trusted the e-vendor more, perceived the website to be more useful and easier to use, and were more inclined to purchase from it. The data also show that while repeat customers’ purchase intentions were influenced by both their trust in the e-vendor and their perception that the website was useful, potential customers were not influenced by perceived usefulness, but only by their trust in the e-vendor. Implications of this apparent trust-barrier and guidelines for practice are discussed.",
"title": ""
}
] |
[
{
"docid": "8508162ac44f56aaaa9c521e6628b7b2",
"text": "Pervasive or ubiquitous computing was developed thanks to the technological evolution of embedded systems and computer communication means. Ubiquitous computing has given birth to the concept of smart spaces that facilitate our daily life and increase our comfort where devices provide proactively adpated services. In spite of the significant previous works done in this domain, there still a lot of work and enhancement to do in particular the taking into account of current user's context when providing adaptable services. In this paper we propose an approach for context-aware services adaptation for a smart living room using two machine learning methods.",
"title": ""
},
{
"docid": "096ee0adebc8f8d7284ad55dd9cc9eca",
"text": "Automatically assigning the correct anatomical labels to coronary arteries is an important task that would speed up work flow times of radiographers, radiologists and cardiologists, and also aid the standard assessment of coronary artery disease. However, automatic labelling faces challenges resulting from structures as complex and widely varied as coronary anatomy. A system has been developed which addresses this requirement and is capable of automatically assigning correct anatomical labels to pre-segmented coronary artery centrelines in Cardiac Computed-Tomography Angiographic (CCTA) images with 84% accuracy. The system consists of two major phases: 1) training a multivariate gaussian classifier with labelled anatomies to estimate mean-vectors for each anatomical class and a covariance matrix pooled over all classes, based on a set of features; 2) generating all plausible label combinations per test anatomy based on a set of topological and geometric rules, and returning the most likely based on the parameters generated in 1).",
"title": ""
},
{
"docid": "f7e779114a0eb67fd9e3dfbacf5110c9",
"text": "Online game is an increasingly popular source of entertainment for all ages, with relatively prevalent negative consequences. Addiction is a problem that has received much attention. This research aims to develop a measure of online game addiction for Indonesian children and adolescents. The Indonesian Online Game Addiction Questionnaire draws from earlier theories and research on the internet and game addiction. Its construction is further enriched by including findings from qualitative interviews and field observation to ensure appropriate expression of the items. The measure consists of 7 items with a 5-point Likert Scale. It is validated by testing 1,477 Indonesian junior and senior high school students from several schools in Manado, Medan, Pontianak, and Yogyakarta. The validation evidence is shown by item-total correlation and criterion validity. The Indonesian Online Game Addiction Questionnaire has good item-total correlation (ranging from 0.29 to 0.55) and acceptable reliability (α = 0.73). It is also moderately correlated with the participant's longest time record to play online games (r = 0.39; p<0.01), average days per week in playing online games (ρ = 0.43; p<0.01), average hours per days in playing online games (ρ = 0.41; p<0.01), and monthly expenditure for online games (ρ = 0.30; p<0.01). Furthermore, we created a clinical cut-off estimate by combining criteria and population norm. The clinical cut-off estimate showed that the score of 14 to 21 may indicate mild online game addiction, and the score of 22 and above may indicate online game addiction. Overall, the result shows that Indonesian Online Game Addiction Questionnaire has sufficient psychometric property for research use, as well as limited clinical application.",
"title": ""
},
{
"docid": "8db41c68c77a5e9075a2404e382c0634",
"text": "We propose, WarpGAN, a fully automatic network that can generate caricatures given an input face photo. Besides transferring rich texture styles, WarpGAN learns to automatically predict a set of control points that can warp the photo into a caricature, while preserving identity. We introduce an identity-preserving adversarial loss that aids the discriminator to distinguish between different subjects. Moreover, WarpGAN allows customization of the generated caricatures by controlling the exaggeration extent and the visual styles. Experimental results on a public domain dataset, WebCaricature, show that WarpGAN is capable of generating a diverse set of caricatures while preserving the identities. Five caricature experts suggest that caricatures generated by WarpGAN are visually similar to hand-drawn ones and only prominent facial features are exaggerated. ∗ indicates equal contribution",
"title": ""
},
{
"docid": "d0690dcac9bf28f1fe6e2153035f898c",
"text": "The estimation of the homography between two views is a key step in many applications involving multiple view geometry. The homography exists between two views between projections of points on a 3D plane. A homography exists also between projections of all points if the cameras have purely rotational motion. A number of algorithms have been proposed for the estimation of the homography relation between two images of a planar scene. They use features or primitives ranging from simple points to a complex ones like non-parametric curves. Different algorithms make different assumptions on the imaging setup and what is known about them. This article surveys several homography estimation techniques from the literature. The essential theory behind each method is presented briefly and compared with the others. Experiments aimed at providing a representative analysis and comparison of the methods discussed are also presented in the paper.",
"title": ""
},
{
"docid": "cfcad9de10e7bc3cd0aa2a02f42e371d",
"text": "Ridesharing is a challenging topic in the urban computing paradigm, which utilizes urban sensors to generate a wealth of benefits and thus is an important branch in ubiquitous computing. Traditionally, ridesharing is achieved by mainly considering the received user ridesharing requests and then returns solutions to users. However, there lack research efforts of examining user acceptance to the proposed solutions. To our knowledge, user decisions in accepting/rejecting a rideshare is one of the crucial, yet not well studied, factors in the context of dynamic ridesharing. Moreover, existing research attention is mainly paid to find the nearest taxi, whilst in reality the nearest taxi may not be the optimal answer. In this paper, we tackle the above un-addressed issues while preserving the scalability of the system. We present a scalable framework, namely TRIPS, which supports the probability of accepting each request by the companion passengers and minimizes users’ efforts. In TRIPS, we propose three search techniques to increase the efficiency of the proposed ridesharing service. We also reformulate the criteria for searching and ranking ridesharing alternatives and propose indexing techniques to optimize the process. Our approach is validated using a real, large-scale dataset of 10,357 GPS-equipped taxis in the city of Beijing, China and showcases its effectiveness on the ridesharing task.",
"title": ""
},
{
"docid": "68a826dad7fd3da0afc234bb04505d8a",
"text": "The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction. Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for persons without an extensive NLP background. In this system demonstration, we present a tool and a workflow designed to enable initiate users to interactively explore the effect and expressivity of creating Information Extraction rules over dependency trees. We introduce the proposed five step workflow for creating information extractors, the graph query based rule language, as well as the core features of the PROPMINER tool.",
"title": ""
},
{
"docid": "eb8ad65b29e83dff8f1d588f231ee1d4",
"text": "Rheumatic heart disease (RHD) is an important cause of cardiac morbidity and mortality globally, particularly in the Pacific region. Susceptibility to RHD is thought to be due to genetic factors that are influenced by environmental factors, such as crowding and poverty. However, there are few data relating to these environmental factors in the Pacific region. We conducted a case-control study of 80 cases of RHD with age- and sex-matched controls in Fiji using a questionnaire to investigate associations of RHD with a number of environmental factors. There was a trend toward increased risk of RHD in association with poor-quality housing and lower socioeconomic status, but only one factor, maternal unemployment, reached statistical significance (OR 2.6, 95% confidence interval 1.2–5.8). Regarding crowding, little difference was observed between the two groups. Although our data do not allow firm conclusions, they do suggest that further studies of socioeconomic factors and RHD in the Pacific are warranted. They also suggest that genetic studies would provide an insight into susceptibility to RHD in this population.",
"title": ""
},
{
"docid": "c9e5a1b9c18718cc20344837e10b08f7",
"text": "Reconnaissance is the initial and essential phase of a successful advanced persistent threat (APT). In many cases, attackers collect information from social media, such as professional social networks. This information is used to select members that can be exploited to penetrate the organization. Detecting such reconnaissance activity is extremely hard because it is performed outside the organization premises. In this paper, we propose a framework for management of social network honeypots to aid in detection of APTs at the reconnaissance phase. We discuss the challenges that such a framework faces, describe its main components, and present a case study based on the results of a field trial conducted with the cooperation of a large European organization. In the case study, we analyze the deployment process of the social network honeypots and their maintenance in real social networks. The honeypot profiles were successfully assimilated into the organizational social network and received suspicious friend requests and mail messages that revealed basic indications of a potential forthcoming attack. In addition, we explore the behavior of employees in professional social networks, and their resilience and vulnerability toward social network infiltration.",
"title": ""
},
{
"docid": "443652d4a9d96eedd832c5dbb3b41f0a",
"text": "This paper presents a rigorous analytical model for analyzing the effects of local oscillator output imperfections such as phase/amplitude imbalances and phase noise on M -ary quadrature amplitude modulation (M-QAM) transceiver performance. A closed-form expression of the error vector magnitude (EVM) and an analytic expression of the symbol error rate (SER) are derived considering a single-carrier linear transceiver link with additive white Gaussian noise channel. The proposed analytical model achieves a good agreement with the simulation results based on the Monte Carlo method. The proposed QAM imperfection analysis model provides an efficient means for system and circuit designers to analyze the wireless transceiver performance and specify the transceiver block specifications.",
"title": ""
},
{
"docid": "f1889dbb14d6819426eba1695014ec2d",
"text": "Monoclonal antibodies (MAb) were produced to hexanal-bovine serum albumin conjugates. An indirect competitive ELISA was developed with a detection range of 1-50 ng of hexanal/mL. Hexanal conjugated to three different proteins was recognized, whereas free hexanal and the native proteins were not detected. The antibody cross-reacted with pentanal, heptanal, and 2-trans-hexenal conjugated to chicken serum albumin (CSA) with cross-reactivities of 37.9, 76.6, and 45.0%, respectively. There was no cross-reactivity with propanal, butanal, octanal, and nonanal conjugated to CSA. The hexanal content of a meat model system was determined using MAb and polyclonal antibody-based ELISAs and compared with analysis by a dynamic headspace gas chromatographic (HS-GC) method and a thiobarbituric acid reactive substances (TBARS) assay. Both ELISAs showed strong correlations with the HS-GC and TBARS methods. ELISAs may be a fast and simple alternative to GC for monitoring lipid oxidation in meat.",
"title": ""
},
{
"docid": "70b2bf304c161cd0a5408a813e5d9fc5",
"text": "[1] TheMoscoviense Basin, on the northern portion of the lunar farside, displays topography with a partial peak ring, in addition to rings that are offset to the southeast. These rings do not follow the typical concentric ring spacing that is recognized with other basins, suggesting that they may have formed as a result of an oblique impact or perhaps multiple impacts. In addition to the unusual ring spacing present, the Moscoviense Basin contains diverse mare basalt units covering the basin floor and a few highland mafic exposures within its rings. New analysis of previously mapped mare units suggests that the oldest mare unit is the remnant of the impact melt sheet. The Moscoviense Basin provides a glimpse into the lunar highlands terrain and an opportunity to explore the geologic context of initial lunar crustal development and modification.",
"title": ""
},
{
"docid": "b7b664d1749b61f2f423d7080a240a60",
"text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.",
"title": ""
},
{
"docid": "08260ba76f242725b8a08cbd8e4ec507",
"text": "Vocal singing (singing with lyrics) shares features common to music and language but it is not clear to what extent they use the same brain systems, particularly at the higher cortical level, and how this varies with expertise. Twenty-six participants of varying singing ability performed two functional imaging tasks. The first examined covert generative language using orthographic lexical retrieval while the second required covert vocal singing of a well-known song. The neural networks subserving covert vocal singing and language were found to be proximally located, and their extent of cortical overlap varied with singing expertise. Nonexpert singers showed greater engagement of their language network during vocal singing, likely accounting for their less tuneful performance. In contrast, expert singers showed a more unilateral pattern of activation associated with reduced engagement of the right frontal lobe. The findings indicate that singing expertise promotes independence from the language network with decoupling producing more tuneful performance. This means that the age-old singing practice of 'finding your singing voice' may be neurologically mediated by changing how strongly singing is coupled to the language system.",
"title": ""
},
{
"docid": "e6df946c5b56b38f35a3e9798cc819bf",
"text": "Most of the microwave communication systems have requirement of power dividers that are essential for power splitting and combining operations. This paper presents a structure and methodology for designing a rectangular waveguide Folded E plane Tee. The structure proposed has the advantage of less area consumption as compared to a conventional waveguide Tee. The paper also presents design equations using which one can design a Folded E plane Tee at any desired frequency. The designs thus obtained at some random frequencies from the equations have been simulated in COMSOL Multiphysics and the scattering parameters obtained have been presented.",
"title": ""
},
{
"docid": "7fc65ecddd4568283c0c21cd63804f07",
"text": "We present a system that detects floor plan automatically and realistically populated by a variety of objects of walls and windows. Given examples of floor plan, our system extracts, in advance, bearing wall, setting others objects which are not bearing wall into a non-bearing walls set. And then, to find contours in the non-bearing walls set. It recognize windows from these contours. The left objects of the set will to be identified walls including with the original bearing walls. The last step is to disintegrate wall into independent rectangular one by one. We demonstrate that our system can handle multiple realistic floor plan and, through decomposing and rebuilding, recognize walls, windows of a floor plan image. Based on high resolution images downloaded from Baidu, the experimental result shows that the average recognition rate of the proposed method is 90.21%, which proves the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "fee78b996d88584499f342f7da89addf",
"text": "It has become standard for search engines to augment result lists with document summaries. Each document summary consists of a title, abstract, and a URL. In this work, we focus on the task of selecting relevant sentences for inclusion in the abstract. In particular, we investigate how machine learning-based approaches can effectively be applied to the problem. We analyze and evaluate several learning to rank approaches, such as ranking support vector machines (SVMs), support vector regression (SVR), and gradient boosted decision trees (GBDTs). Our work is the first to evaluate SVR and GBDTs for the sentence selection task. Using standard TREC test collections, we rigorously evaluate various aspects of the sentence selection problem. Our results show that the effectiveness of the machine learning approaches varies across collections with different characteristics. Furthermore, the results show that GBDTs provide a robust and powerful framework for the sentence selection task and significantly outperform SVR and ranking SVMs on several data sets.",
"title": ""
},
{
"docid": "03daea46a533bcc91cc07071f7c2ca2a",
"text": "This article describes the RMediation package,which offers various methods for building confidence intervals (CIs) for mediated effects. The mediated effect is the product of two regression coefficients. The distribution-of-the-product method has the best statistical performance of existing methods for building CIs for the mediated effect. RMediation produces CIs using methods based on the distribution of product, Monte Carlo simulations, and an asymptotic normal distribution. Furthermore, RMediation generates percentiles, quantiles, and the plot of the distribution and CI for the mediated effect. An existing program, called PRODCLIN, published in Behavior Research Methods, has been widely cited and used by researchers to build accurate CIs. PRODCLIN has several limitations: The program is somewhat cumbersome to access and yields no result for several cases. RMediation described herein is based on the widely available R software, includes several capabilities not available in PRODCLIN, and provides accurate results that PRODCLIN could not.",
"title": ""
},
{
"docid": "d6681899902b990f82b775927cde9277",
"text": "Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression recognition has recently become a promising research area. Its applications include human-computer interfaces, human emotion analysis, and medical care and cure. In this paper, we investigate various feature representation and expression classification schemes to recognize seven different facial expressions, such as happy, neutral, angry, disgust, sad, fear and surprise, in the JAFFE database. Experimental results show that the method of combining 2D-LDA (Linear Discriminant Analysis) and SVM (Support Vector Machine) outperforms others. The recognition rate of this method is 95.71% by using leave-one-out strategy and 94.13% by using cross-validation strategy. It takes only 0.0357 second to process one image of size 256 × 256.",
"title": ""
},
{
"docid": "46fba65ad6ad888bb3908d75f0bcc029",
"text": "Deep neural network (DNN) obtains significant accuracy improvements on many speech recognition tasks and its power comes from the deep and wide network structure with a very large number of parameters. It becomes challenging when we deploy DNN on devices which have limited computational and storage resources. The common practice is to train a DNN with a small number of hidden nodes and a small senone set using the standard training process, leading to significant accuracy loss. In this study, we propose to better address these issues by utilizing the DNN output distribution. To learn a DNN with small number of hidden nodes, we minimize the Kullback–Leibler divergence between the output distributions of the small-size DNN and a standard large-size DNN by utilizing a large number of un-transcribed data. For better senone set generation, we cluster the senones in the large set into a small one by directly relating the clustering process to DNN parameters, as opposed to decoupling the senone generation and DNN training process in the standard training. Evaluated on a short message dictation task, the proposed two methods get 5.08% and 1.33% relative word error rate reduction from the standard training method, respectively.",
"title": ""
}
] |
scidocsrr
|
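As a minimal sketch of the distribution-of-the-product idea behind the RMediation passage above (docid 03daea46a533bcc91cc07071f7c2ca2a): the code below builds a Monte Carlo confidence interval for a mediated effect a*b, assuming independent normal sampling distributions for the two path coefficients. The coefficient estimates and standard errors are made up for the example; this is a sketch of the Monte Carlo option the passage mentions, not the RMediation package itself.

import numpy as np

# Monte Carlo CI for a mediated effect a*b (all values below are invented).
rng = np.random.default_rng(0)
a, se_a = 0.40, 0.10      # X -> M path estimate and standard error (assumed)
b, se_b = 0.35, 0.12      # M -> Y path estimate and standard error (assumed)

# Sample the two coefficients independently and form the product distribution.
draws = rng.normal(a, se_a, 1_000_000) * rng.normal(b, se_b, 1_000_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print("mediated effect:", round(a * b, 3), "95% MC CI:", (round(lo, 3), round(hi, 3)))

The resulting interval is typically asymmetric around a*b, which is exactly why product-of-coefficients methods are preferred over a naive normal approximation for the mediated effect.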
f01b3bcc1e3f6ba62a91414f97d33d8d
|
Marketplace or Reseller?
|
[
{
"docid": "c7d629a83de44e17a134a785795e26d8",
"text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.",
"title": ""
},
{
"docid": "4a87e61106125ffdd49c42517ce78b87",
"text": "Due to network effects and switching costs, platform providers often become entrenched. To dislodge them, entrants generally must offer revolutionary products. We explore a second path to platform leadership change that does not rely on Schumpeterian creative destruction: platform envelopment. By leveraging common components and shared user relationships, one platform provider can move into another’s market, combining its own functionality with the target’s in a multi-platform bundle. Dominant firms otherwise sheltered from entry by standalone rivals may be vulnerable to an adjacent platform provider’s envelopment attack. We analyze conditions under which envelopment strategies are likely to succeed.",
"title": ""
},
{
"docid": "58c2f9f5f043f87bc51d043f70565710",
"text": "T strategic use of first-party content by two-sided platforms is driven by two key factors: the nature of buyer and seller expectations (favorable versus unfavorable) and the nature of the relationship between first-party content and third-party content (complements or substitutes). Platforms facing unfavorable expectations face an additional constraint: their prices and first-party content investment need to be such that low (zero) participation equilibria are eliminated. This additional constraint typically leads them to invest more (less) in first-party content relative to platforms facing favorable expectations when firstand third-party content are substitutes (complements). These results hold with both simultaneous and sequential entry of the two sides. With two competing platforms—incumbent facing favorable expectations and entrant facing unfavorable expectations— and multi-homing on one side of the market, the incumbent always invests (weakly) more in first-party content relative to the case in which it is a monopolist.",
"title": ""
}
] |
[
{
"docid": "14e5e95ae4422120f5f1bb8cccb2b186",
"text": "We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.",
"title": ""
},
{
"docid": "8bcda11934a1eaff4b41cbe695bbfc4f",
"text": "Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, e.g., consider the extreme case of non-linearity where the relation between parameters and cost is actually discrete. Inspired by the biological implausibility of back-propagation, a few approaches have been proposed in the past that could play a similar credit assignment role as backprop. In this spirit, we explore a novel approach to credit assignment in deep networks that we call target propagation. The main idea is to compute targets rather than gradients, at each layer. Like gradients, they are propagated backwards. In a way that is related but different from previously proposed proxies for back-propagation which rely on a backwards network with symmetric weights, target propagation relies on auto-encoders at each layer. Unlike back-propagation, it can be applied even when units exchange stochastic bits rather than real numbers. We show that a linear correction for the imperfectness of the auto-encoders is very effective to make target propagation actually work, along with adaptive learning rates.",
"title": ""
},
{
"docid": "a9e27b52ed31b47c23b1281c28556487",
"text": "Nuclear receptors are integrators of hormonal and nutritional signals, mediating changes to metabolic pathways within the body. Given that modulation of lipid and glucose metabolism has been linked to diseases including type 2 diabetes, obesity and atherosclerosis, a greater understanding of pathways that regulate metabolism in physiology and disease is crucial. The liver X receptors (LXRs) and the farnesoid X receptors (FXRs) are activated by oxysterols and bile acids, respectively. Mounting evidence indicates that these nuclear receptors have essential roles, not only in the regulation of cholesterol and bile acid metabolism but also in the integration of sterol, fatty acid and glucose metabolism.",
"title": ""
},
{
"docid": "77b1e7b6f91cf5e2d4380a9d117ae7d9",
"text": "This paper theoretically introduces and develops a new operation diagram (OPD) and parameter estimator for the synchronous reluctance machine (SynRM). The OPD demonstrates the behavior of the machine's main performance parameters, such as torque, current, voltage, frequency, flux, power factor (PF), and current angle, all in one graph. This diagram can easily be used to describe different control strategies, possible operating conditions, both below- and above-rated speeds, etc. The saturation effect is also discussed with this diagram by finite-element-method calculations. A prototype high-performance SynRM is designed for experimental studies, and then, both machines' [corresponding induction machine (IM)] performances at similar loading and operation conditions are tested, measured, and compared to demonstrate the potential of SynRM. The laboratory measurements (on a standard 15-kW Eff1 IM and its counterpart SynRM) show that SynRM has higher efficiency, torque density, and inverter rating and lower rotor temperature and PF in comparison to IM at the same winding-temperature-rise condition. The measurements show that the torque capability of SynRM closely follows that of IM.",
"title": ""
},
{
"docid": "30740e33cdb2c274dbd4423e8f56405e",
"text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.",
"title": ""
},
{
"docid": "9adf653a332e07b8aa055b62449e1475",
"text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.",
"title": ""
},
{
"docid": "3e43ee5513a0bd8bea8b1ea5cf8cefec",
"text": "Hans-Juergen Boehm Computer Science Department, Rice University, Houston, TX 77251-1892, U.S.A. Mark Weiser Xerox Corporation, Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304, U.S.A. A later version of this paper appeared in Software Practice and Experience 18, 9, pp. 807-820. Copyright 1988 by John Wiley and Sons, Ld. The publishers rules appear to allow posting of preprints, but only on the author’s web site.",
"title": ""
},
{
"docid": "4107fe17e6834f96a954e13cbb920f78",
"text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.",
"title": ""
},
{
"docid": "4afbb5f877f3920dccdf60f6f4dfbf91",
"text": "Handling degenerate rotation-only camera motion is a challenge for keyframe-based simultaneous localization and mapping with six degrees of freedom. Existing systems usually filter corresponding keyframe candidates, resulting in mapping starvation and tracking failure. We propose to employ these otherwise discarded keyframes to build up local panorama maps registered in the 3D map. Thus, the system is able to maintain tracking during rotational camera motions. Additionally, we seek to actively associate panoramic and 3D map data for improved 3D mapping through the triangulation of more new 3D map features. We demonstrate the efficacy of our approach in several evaluations that show how the combined system handles rotation only camera motion while creating larger and denser maps compared to a standard SLAM system.",
"title": ""
},
{
"docid": "8a6b9930a9dccb0555980140dd6c4ae4",
"text": "The mass shooting at Sandy Hook elementary school on December 14, 2012 catalyzed a year of active debate and legislation on gun control in the United States. Social media hosted an active public discussion where people expressed their support and opposition to a variety of issues surrounding gun legislation. In this paper, we show how a contentbased analysis of Twitter data can provide insights and understanding into this debate. We estimate the relative support and opposition to gun control measures, along with a topic analysis of each camp by analyzing over 70 million gun-related tweets from 2013. We focus on spikes in conversation surrounding major events related to guns throughout the year. Our general approach can be applied to other important public health and political issues to analyze the prevalence and nature of public opinion.",
"title": ""
},
{
"docid": "725e92f13cc7c03b890b5d2e7380b321",
"text": "Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as “the curse of dimensionality”. This paper presents a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated as a control theory problem and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and speed. This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.",
"title": ""
},
{
"docid": "8b158bfaf805974c1f8478c7ac051426",
"text": "BACKGROUND AND AIMS\nThe analysis of large-scale genetic data from thousands of individuals has revealed the fact that subtle population genetic structure can be detected at levels that were previously unimaginable. Using the Human Genome Diversity Panel as reference (51 populations - 650,000 SNPs), this works describes a systematic evaluation of the resolution that can be achieved for the inference of genetic ancestry, even when small panels of genetic markers are used.\n\n\nMETHODS AND RESULTS\nA comprehensive investigation of human population structure around the world is undertaken by leveraging the power of Principal Components Analysis (PCA). The problem is dissected into hierarchical steps and a decision tree for the prediction of individual ancestry is proposed. A complete leave-one-out validation experiment demonstrates that, using all available SNPs, assignment of individuals to their self-reported populations of origin is essentially perfect. Ancestry informative genetic markers are selected using two different metrics (In and correlation with PCA scores). A thorough cross-validation experiment indicates that, in most cases here, the number of SNPs needed for ancestry inference can be successfully reduced to less than 0.1% of the original 650,000 while retaining close to 100% accuracy. This reduction can be achieved using a novel clustering-based redundancy removal algorithm that is also introduced here. Finally, the applicability of our suggested SNP panels is tested on HapMap Phase 3 populations.\n\n\nCONCLUSION\nThe proposed methods and ancestry informative marker panels, in combination with the increasingly more comprehensive databases of human genetic variation, open new horizons in a variety of fields, ranging from the study of human evolution and population history, to medical genetics and forensics.",
"title": ""
},
{
"docid": "2052d056e4f4831ebd9992882e8e4015",
"text": "Soccer video semantic analysis has attracted a lot of researchers in the last few years. Many methods of machine learning have been applied to this task and have achieved some positive results, but the neural network method has not yet been used to this task from now. Taking into account the advantages of Convolution Neural Network(CNN) in fully exploiting features and the ability of Recurrent Neural Network(RNN) in dealing with the temporal relation, we construct a deep neural network to detect soccer video event in this paper. First we determine the soccer video event boundary which we used Play-Break(PB) segment by the traditional method. Then we extract the semantic features of key frames from PB segment by pre-trained CNN, and at last use RNN to map the semantic features of PB to soccer event types, including goal, goal attempt, card and corner. Because there is no suitable and effective dataset, we classify soccer frame images into nine categories according to their different semantic views and then construct a dataset called Soccer Semantic Image Dataset(SSID) for training CNN. The sufficient experiments evaluated on 30 soccer match videos demonstrate the effectiveness of our method than state-of-art methods.",
"title": ""
},
{
"docid": "7a6181a65121ce577bc77711ce7a095c",
"text": "We present a new, general, and real-time technique for soft global illumination in low-frequency environmental lighting. It accumulates over relatively few spherical proxies which approximate the light blocking and re-radiating effect of dynamic geometry. Soft shadows are computed by accumulating log visibility vectors for each sphere proxy as seen by each receiver point. Inter-reflections are computed by accumulating vectors representing the proxy's unshadowed radiance when illuminated by the environment. Both vectors capture low-frequency directional dependence using the spherical harmonic basis. We also present a new proxy accumulation strategy that splats each proxy to receiver pixels in image space to collect its shadowing and indirect lighting contribution. Our soft GI rendering pipeline unifies direct and indirect soft effects with a simple accumulation strategy that maps entirely to the GPU and outperforms previous vertex-based methods.",
"title": ""
},
{
"docid": "2d98a90332278049d61a6eb431317216",
"text": "Feature extraction is a method of capturing visual content of an image. The feature extraction is the process to represent raw image in its reduced form to facilitate decision making such as pattern classification. We have tried to address the problem of classification MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. The objective of this paper is to present a novel method of feature selection and extraction. This approach combines the Intensity, Texture, shape based features and classifies the tumor as white matter, Gray matter, CSF, abnormal and normal area. The experiment is performed on 140 tumor contained brain MR images from the Internet Brain Segmentation Repository. The proposed technique has been carried out over a larger database as compare to any previous work and is more robust and effective. PCA and Linear Discriminant Analysis (LDA) were applied on the training sets. The Support Vector Machine (SVM) classifier served as a comparison of nonlinear techniques Vs linear ones. PCA and LDA methods are used to reduce the number of features used. The feature selection using the proposed technique is more beneficial as it analyses the data according to grouping class variable and gives reduced feature set with high classification accuracy.",
"title": ""
},
{
"docid": "b4a2c3679fe2490a29617c6a158b9dbc",
"text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"title": ""
},
{
"docid": "61e460c93d82acf80983f5947154b139",
"text": "The Internet has many benefits, some of them are to gain knowledge and gain the latest information. The internet can be used by anyone and can contain any information, including negative content such as pornographic content, radicalism, racial intolerance, violence, fraud, gambling, security and drugs. Those contents cause the number of children victims of pornography on social media increasing every year. Based on that, it needs a system that detects pornographic content on social media. This study aims to determine the best model to detect the pornographic content. Model selection is determined based on unigram and bigram features, classification algorithm, k-fold cross validation. The classification algorithm used is Support Vector Machine and Naive Bayes. The highest F1-score is yielded by the model with combination of Support Vector Machine, most common words, and combination of unigram and bigram, which returns F1-Score value of 91.14%.",
"title": ""
},
{
"docid": "85c32427a1a6c04e3024d22b03b26725",
"text": "Monte Carlo tree search (MCTS) is extremely popular in computer Go which determines each action by enormous simulations in a broad and deep search tree. However, human experts select most actions by pattern analysis and careful evaluation rather than brute search of millions of future interactions. In this paper, we propose a computer Go system that follows experts way of thinking and playing. Our system consists of two parts. The first part is a novel deep alternative neural network (DANN) used to generate candidates of next move. Compared with existing deep convolutional neural network (DCNN), DANN inserts recurrent layer after each convolutional layer and stacks them in an alternative manner. We show such setting can preserve more contexts of local features and its evolutions which are beneficial for move prediction. The second part is a long-term evaluation (LTE) module used to provide a reliable evaluation of candidates rather than a single probability from move predictor. This is consistent with human experts nature of playing since they can foresee tens of steps to give an accurate estimation of candidates. In our system, for each candidate, LTE calculates a cumulative reward after several future interactions when local variations are settled. Combining criteria from the two parts, our system determines the optimal choice of next move. For more comprehensive experiments, we introduce a new professional Go dataset (PGD), consisting of 253, 233 professional records. Experiments on GoGoD and PGD datasets show the DANN can substantially improve performance of move prediction over pure DCNN. When combining LTE, our system outperforms most relevant approaches and open engines based on",
"title": ""
},
{
"docid": "b3556499bf5d788de7c4d46100ac3a9f",
"text": "Reuse has been proposed as a microarchitecture-level mechanism to reduce the amount of executed instructions, collapsing dependencies and freeing resources for other instructions. Previous works have used reuse domains such as memory accesses, integer or not floating point, based on the reusability rate. However, these works have not studied the specific contribution of reusing different subsets of instructions for performance. In this work, we analysed the sensitivity of trace reuse to instruction subsets, comparing their efficiency to their complementary subsets. We also studied the amount of reuse that can be extracted from loops. Our experiments show that disabling trace reuse outside loops does not harm performance but reduces in 12% the number of accesses to the reuse table. Our experiments with reuse subsets show that most of the speedup can be retained even when not reusing all types of instructions previously found in the reuse domain. 1 ar X iv :1 71 1. 06 67 2v 1 [ cs .A R ] 1 7 N ov 2 01 7",
"title": ""
},
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
}
] |
scidocsrr
|
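As a minimal sketch of the model-selection setup described in the pornographic-content detection passage above (docid 61e460c93d82acf80983f5947154b139): the code below compares a Support Vector Machine and Naive Bayes over combined unigram and bigram count features with k-fold cross-validated F1, using scikit-learn. The toy corpus and labels are invented for illustration; the sketch mirrors the comparison the passage describes but is not the authors' system or data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy corpus standing in for labelled social-media posts (1 = negative content).
texts = ["buy cheap pills now", "family picnic photos", "win money gambling site",
         "school science project", "adult content click here", "weekend football match"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

for name, clf in [("SVM", LinearSVC()), ("NaiveBayes", MultinomialNB())]:
    # Unigram + bigram count features, as in the passage's feature setup.
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), clf)
    f1 = cross_val_score(model, texts, labels, cv=5, scoring="f1").mean()
    print(name, round(f1, 3))

Swapping CountVectorizer for TfidfVectorizer, or restricting the vocabulary to the most common words via max_features, are the kinds of feature variants such a model-selection sweep could also compare.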
cec3f15a0ef158a6c2aa4ab26edba8bf
|
Index modulation techniques for 5G wireless networks
|
[
{
"docid": "aa40633b4f06b6bb882c77d7d9241949",
"text": "This paper proposes a new multiple-input-multiple-output (MIMO) technique called quadrature spatial modulation (QSM). QSM enhances the overall throughput of conventional SM systems by using an extra modulation spatial dimension. The current SM technique uses only the real part of the SM constellation, and the proposed method in this paper extends this to in-phase and quadrature dimensions. It is shown that significant performance enhancements can be achieved at the expense of synchronizing the transmit antennas. Additionally, a closed-form expression for the pairwise error probability (PEP) of generic QSM system is derived and used to calculate a tight upper bound of the average bit error probability (ABEP) over Rayleigh fading channels. Moreover, a simple and general asymptotic expression is derived and analyzed. Obtained Monte Carlo simulation results corroborate the accuracy of the conducted analysis and show the significant enhancements of the proposed QSM scheme.",
"title": ""
},
{
"docid": "3b6e50d7f6389f109da2b1ba125cc64b",
"text": "A new class of low-complexity, yet energy-efficient Multiple-Input Multiple-Output (MIMO) transmission techniques, namely, the family of Spatial Modulation (SM) aided MIMOs (SM-MIMO), has emerged. These systems are capable of exploiting the spatial dimensions (i.e., the antenna indices) as an additional dimension invoked for transmitting information, apart from the traditional Amplitude and Phase Modulation (APM). SM is capable of efficiently operating in diverse MIMO configurations in the context of future communication systems. It constitutes a promising transmission candidate for large-scale MIMO design and for the indoor optical wireless communication while relying on a single-Radio Frequency (RF) chain. Moreover, SM may be also viewed as an entirely new hybrid modulation scheme, which is still in its infancy. This paper aims for providing a general survey of the SM design framework as well as of its intrinsic limits. In particular, we focus our attention on the associated transceiver design, on spatial constellation optimization, on link adaptation techniques, on distributed/cooperative protocol design issues, and on their meritorious variants.",
"title": ""
}
] |
[
{
"docid": "a280c56578d96797b1b7dc2e934b0c3e",
"text": "The Perspective-n-Point (PnP) problem seeks to estimate the pose of a calibrated camera from n 3D-to-2D point correspondences. There are situations, though, where PnP solutions are prone to fail because feature point correspondences cannot be reliably estimated (e.g. scenes with repetitive patterns or with low texture). In such scenarios, one can still exploit alternative geometric entities, such as lines, yielding the so-called Perspective-n-Line (PnL) algorithms. Unfortunately, existing PnL solutions are not as accurate and efficient as their point-based counterparts. In this paper we propose a novel approach to introduce 3D-to-2D line correspondences into a PnP formulation, allowing to simultaneously process points and lines. For this purpose we introduce an algebraic line error that can be formulated as linear constraints on the line endpoints, even when these are not directly observable. These constraints can then be naturally integrated within the linear formulations of two state-of-the-art point-based algorithms, the OPnP [45] and the EPnP [24], allowing them to indistinctly handle points, lines, or a combination of them. Exhaustive experiments show that the proposed formulation brings remarkable boost in performance compared to only point or only line based solutions, with a negligible computational overhead compared to the original OPnP and EPnP.",
"title": ""
},
{
"docid": "55928e118303b080d49a399da1f9dba3",
"text": "This paper describes a customized database and a comprehensive set of queries that can be used for systematic benchmarking of relational database systems. Designing this database and a set of carefully tuned benchmarks represents a first attempt in developing a scientific methodology for performance evaluation of database management systems. We have used this database to perform a comparative evaluation of the database machine DIRECT, the \"university\" and \"commercial\" versions of the INGRES database system, the relational database system ORACLE, and the IDM 500 database machine. We present a subset of our measurements (for the single user case only), that constitute a preliminary performance evaluation of these systems.",
"title": ""
},
{
"docid": "a01333e16abb503cf6d26c54ac24d473",
"text": "Topic models could have a huge impact on improving the ways users find and discover content in digital libraries and search interfaces through their ability to automatically learn and apply subject tags to each and every item in a collection, and their ability to dynamically create virtual collections on the fly. However, much remains to be done to tap this potential, and empirically evaluate the true value of a given topic model to humans. In this work, we sketch out some sub-tasks that we suggest pave the way towards this goal, and present methods for assessing the coherence and interpretability of topics learned by topic models. Our large-scale user study includes over 70 human subjects evaluating and scoring almost 500 topics learned from collections from a wide range of genres and domains. We show how scoring model -- based on pointwise mutual information of word-pair using Wikipedia, Google and MEDLINE as external data sources - performs well at predicting human scores. This automated scoring of topics is an important first step to integrating topic modeling into digital libraries",
"title": ""
},
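A minimal sketch of the PMI-based coherence scoring idea mentioned in the topic-model abstract above, assuming a toy in-memory corpus in place of external sources such as Wikipedia or MEDLINE; the function name and the example documents are hypothetical.

```python
# Average pairwise PMI of a topic's top words, estimated from document co-occurrence.
import math
from itertools import combinations

def pmi_coherence(top_words, documents, eps=1e-12):
    """documents is a list of token sets; returns the mean PMI over all word pairs."""
    n_docs = len(documents)
    df = {w: sum(1 for d in documents if w in d) for w in top_words}
    scores = []
    for w1, w2 in combinations(top_words, 2):
        co = sum(1 for d in documents if w1 in d and w2 in d)
        p1, p2, p12 = df[w1] / n_docs, df[w2] / n_docs, co / n_docs
        scores.append(math.log((p12 + eps) / (p1 * p2 + eps)))
    return sum(scores) / len(scores)

docs = [{"library", "digital", "search"}, {"digital", "topic", "model"},
        {"library", "digital", "topic"}, {"music", "audio"}]
print(pmi_coherence(["digital", "library", "topic"], docs))
```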
{
"docid": "01c53962a4aebd75eb68860ee28447bd",
"text": "A power-scalable 2 Byte I/O operating at 12 Gb/s per lane is reported. The source-synchronous I/O includes controllable TX driver amplitude, flexible RX equalization, and multiple deskew modes. This allows power reduction when operating over low-loss, low-skew interconnects, while at the same time supporting higher-loss channels without loss of bandwidth. Transceiver circuit innovations are described including a low-skew transmission-line clock distribution, a 4:1 serializer with quadrature quarter-rate clocks, and a phase rotator based on current-integrating phase interpolators. Measurements of a test chip fabricated in 32 nm SOI CMOS technology demonstrate 1.4 pJ/b efficiency over 0.75” Megtron-6 PCB traces, and 1.9 pJ/b efficiency over 20” Megtron-6 PCB traces.",
"title": ""
},
{
"docid": "86e4b8f3f1608292437968b1165ccac5",
"text": "Activation of oncogenes and loss of tumour suppressors promote metabolic reprogramming in cancer, resulting in enhanced nutrient uptake to supply energetic and biosynthetic pathways. However, nutrient limitations within solid tumours may require that malignant cells exhibit metabolic flexibility to sustain growth and survival. Here, we highlight these adaptive mechanisms and also discuss emerging approaches to probe tumour metabolism in vivo and their potential to expand the metabolic repertoire of malignant cells even further.",
"title": ""
},
{
"docid": "ee617dacdb47fd02a797f2968aaa784f",
"text": "The Internet of Things (IoT) is defined as a paradigm in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in this new emerging area. This survey paper proposes a novel taxonomy for IoT technologies, highlights some of the most important technologies, and profiles some applications that have the potential to make a striking difference in human life, especially for the differently abled and the elderly. As compared to similar survey papers in the area, this paper is far more comprehensive in its coverage and exhaustively covers most major technologies spanning from sensors to applications.",
"title": ""
},
{
"docid": "df1ea45a4b20042abd99418ff6d1f44e",
"text": "This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method does not require the construction of templates, or the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that falsely detected spikes corresponding to our method resemble actual spikes more than the false positives of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution.",
"title": ""
},
{
"docid": "9fc869c7e7d901e418b1b69d636cbd33",
"text": "Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published which parameters and design choices should be evaluated or selected making the correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50.000 different setups and found, that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well among different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available.1 This publication explains in detail the experimental setup and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).2",
"title": ""
},
{
"docid": "9d6a0b31bf2b64f1ec624222a2222e2a",
"text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24 17 I problemi del mondo d’oggi non possono essere risolti facendo ricorso allo stesso tipo di pensiero che li ha creati",
"title": ""
},
{
"docid": "41b305c49b74063f16e5eb07bcb905d9",
"text": "Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 2 of M (one output unity, all others zero) and a squarederror or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and u priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, simplifies creation of rejection thresholds, makes it possible to compensate for differences between pattern class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.",
"title": ""
},
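A small sketch illustrating the property discussed in the abstract above: a classifier trained with 1-of-M targets and a cross-entropy cost produces outputs that sum to one and approximate the Bayesian posterior probabilities. The two-Gaussian data, learning rate, and iteration count are arbitrary choices for the example, not values from the paper.

```python
# Softmax regression with one-hot targets approximating P(class | x) for known Gaussians.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
labels = rng.integers(0, 2, n)                               # equal class priors
x = rng.normal(loc=np.where(labels == 0, -1.0, 1.0), scale=1.0)[:, None]
targets = np.eye(2)[labels]                                  # 1-of-M coding

w = np.zeros((1, 2)); b = np.zeros(2)
for _ in range(2000):                                        # plain gradient descent
    logits = x @ w + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - targets) / n                                 # d(cross-entropy)/d(logits)
    w -= 0.5 * (x.T @ grad); b -= 0.5 * grad.sum(axis=0)

def true_posterior(x0):                                      # Bayes posterior of class 1
    l0 = np.exp(-0.5 * (x0 + 1.0) ** 2); l1 = np.exp(-0.5 * (x0 - 1.0) ** 2)
    return l1 / (l0 + l1)

x0 = 0.3
logit = x0 * w[0] + b
net = np.exp(logit) / np.exp(logit).sum()
print("network P(class 1 | x):", round(net[1], 3), " Bayes:", round(true_posterior(x0), 3))
```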
{
"docid": "e099186ceed71e03276ab168ecf79de7",
"text": "Twelve patients with deafferentation pain secondary to central nervous system lesions were subjected to chronic motor cortex stimulation. The motor cortex was mapped as carefully as possible and the electrode was placed in the region where muscle twitch of painful area can be observed with the lowest threshold. 5 of the 12 patients reported complete absence of previous pain with intermittent stimulation at 1 year following the initiation of this therapy. Improvements in hemiparesis was also observed in most of these patients. The pain of these patients was typically barbiturate-sensitive and morphine-resistant. Another 3 patients had some degree of residual pain but considerable reduction of pain was still obtained by stimulation. Thus, 8 of the 12 patients (67%) had continued effect of this therapy after 1 year. In 3 patients, revisions of the electrode placement were needed because stimulation became incapable of inducing muscle twitch even with higher stimulation intensity. The effect of stimulation on pain and capability of producing muscle twitch disappeared simultaneously in these cases and the effect reappeared after the revisions, indicating that appropriate stimulation of the motor cortex is definitely necessary for obtaining satisfactory pain control in these patients. None of the patients subjected to this therapy developed neither observable nor electroencephalographic seizure activity.",
"title": ""
},
{
"docid": "73e6082c387eab6847b8ca853f38c6f3",
"text": "OBJECTIVES\nThis study explored the effectiveness of group music intervention against agitated behavior in elderly persons with dementia.\n\n\nMETHODS\nThis was an experimental study using repeated measurements. Subjects were elderly persons who suffered from dementia and resided in nursing facilities. In total, 104 participants were recruited by permuted block randomization and of the 100 subjects who completed this study, 49 were in the experimental group and 51 were in the control group. The experimental group received a total of twelve 30-min group music intervention sessions, conducted twice a week for six consecutive weeks, while the control group participated in normal daily activities. In order to measure the effectiveness of the therapeutic sessions, assessments were conducted before the intervention, at the 6th and 12th group sessions, and at 1 month after cessation of the intervention. Longitudinal effects were analyzed by means of generalized estimating equations (GEEs).\n\n\nRESULTS\nAfter the group music therapy intervention, the experimental group showed better performance at the 6th and 12th sessions, and at 1 month after cessation of the intervention based on reductions in agitated behavior in general, physically non-aggressive behavior, verbally non-aggressive behavior, and physically aggressive behavior, while a reduction in verbally aggressive behavior was shown only at the 6th session.\n\n\nCONCLUSIONS\nGroup music intervention alleviated agitated behavior in elderly persons with dementia. We suggest that nursing facilities for demented elderly persons incorporate group music intervention in routine activities in order to enhance emotional relaxation, create inter-personal interactions, and reduce future agitated behaviors.",
"title": ""
},
{
"docid": "f44fad35f68957ff27e9cfb97758cc2d",
"text": "Boosting combines weak classifiers to form highly accurate predictors. Although the case of binary classification is well understood, in the multiclass setting, the “correct” requirements on the weak classifier, or the notion of the most efficient boosting algorithms are missing. In this paper, we create a broad and general framework, within which we make precise and identify the optimal requirements on the weak-classifier, as well as design the most effective, in a certain sense, boosting algorithms that assume such requirements.",
"title": ""
},
{
"docid": "5a4c9b6626d2d740246433972ad60f16",
"text": "We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows:",
"title": ""
},
{
"docid": "95e1d5dc90f7fc6ece51f61585842f3d",
"text": "This paper investigates how the splitting cri teria and pruning methods of decision tree learning algorithms are in uenced by misclas si cation costs or changes to the class distri bution Splitting criteria that are relatively insensitive to costs class distributions are found to perform as well as or better than in terms of expected misclassi cation cost splitting criteria that are cost sensitive Con sequently there are two opposite ways of deal ing with imbalance One is to combine a cost insensitive splitting criterion with a cost in sensitive pruning method to produce a deci sion tree algorithm little a ected by cost or prior class distribution The other is to grow a cost independent tree which is then pruned in a cost sensitive manner",
"title": ""
},
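A toy sketch contrasting a cost-insensitive impurity (Gini) with the expected misclassification cost that a cost-sensitive pruning step would minimise, as discussed in the decision-tree abstract above; the class counts and the asymmetric cost matrix are made up for illustration.

```python
# Cost-insensitive impurity vs. expected misclassification cost of labeling one node.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def expected_cost(counts, cost):
    """cost[i][j] = cost of predicting class i when the true class is j."""
    n = sum(counts)
    per_label = [sum(cost[i][j] * counts[j] / n for j in range(len(counts)))
                 for i in range(len(counts))]
    return min(per_label)                    # cost of the best label for this node

node = [90, 10]                              # 90 negatives, 10 positives
costs = [[0, 10],                            # missing a positive is 10x worse
         [1, 0]]
print("gini:", round(gini(node), 3))
print("expected cost:", round(expected_cost(node, costs), 3))  # favours predicting the minority class
```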
{
"docid": "87276bf7802a209a9e8fae2a95ff93c2",
"text": "Traditional two wheels differential drive normally used on mobile robots have manoeuvrability limitations and take time to sort out. Most teams use two driving wheels (with one or two cast wheels), four driving wheels and even three driving wheels. A three wheel drive with omni-directional wheel has been tried with success, and was implemented on fast moving autonomous mobile robots. This paper deals with the mathematical kinematics description of such mobile platform, it describes the advantages and also the type of control used.",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
},
{
"docid": "1c11c14bcc1e83a3fba3ef5e4c52d69b",
"text": "Ontologies have become the de-facto modeling tool of choice, employed in many applications and prominently in the semantic web. Nevertheless, ontology construction remains a daunting task. Ontological bootstrapping, which aims at automatically generating concepts and their relations in a given domain, is a promising technique for ontology construction. Bootstrapping an ontology based on a set of predefined textual sources, such as web services, must address the problem of multiple, largely unrelated concepts. In this paper, we propose an ontology bootstrapping process for web services. We exploit the advantage that web services usually consist of both WSDL and free text descriptors. The WSDL descriptor is evaluated using two methods, namely Term Frequency/Inverse Document Frequency (TF/IDF) and web context generation. Our proposed ontology bootstrapping process integrates the results of both methods and applies a third method to validate the concepts using the service free text descriptor, thereby offering a more accurate definition of ontologies. We extensively validated our bootstrapping method using a large repository of real-world web services and verified the results against existing ontologies. The experimental results indicate high precision. Furthermore, the recall versus precision comparison of the results when each method is separately implemented presents the advantage of our integrated bootstrapping approach.",
"title": ""
},
{
"docid": "0580342f7efb379fc417d2e5e48c4b73",
"text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.",
"title": ""
}
] |
scidocsrr
|
14d11227c990c49308552e01212dc9c3
|
Humans prefer curved visual objects.
|
[
{
"docid": "5afe5504566e60cbbb50f83501eee06c",
"text": "This paper explores theoretical issues in ergonomics related to semantics and the emotional content of design. The aim is to find answers to the following questions: how to design products triggering \"happiness\" in one's mind; which product attributes help in the communication of positive emotions; and finally, how to evoke such emotions through a product. In other words, this is an investigation of the \"meaning\" that could be designed into a product in order to \"communicate\" with the user at an emotional level. A literature survey of recent design trends, based on selected examples of product designs and semantic applications to design, including the results of recent design awards, was carried out in order to determine the common attributes of their design language. A review of Good Design Award winning products that are said to convey and/or evoke emotions in the users has been done in order to define good design criteria. These criteria have been discussed in relation to user emotional responses and a selection of these has been given as examples.",
"title": ""
}
] |
[
{
"docid": "64e0a1345e5a181191c54f6f9524c96d",
"text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.",
"title": ""
},
{
"docid": "c2558388fb20454fa6f4653b1e4ab676",
"text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.",
"title": ""
},
{
"docid": "2a78461c1949b0cf6b119ae99c08847f",
"text": "Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the handdesigned extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github. io/large-scale-curiosity/.",
"title": ""
},
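A minimal sketch of the curiosity signal described above: the intrinsic reward is the forward-model prediction error in a feature space, here a fixed random projection (one of the feature choices the abstract evaluates). The shapes, names, and the untrained forward model are assumptions for illustration; in a real agent the forward model would be trained to minimise this same error.

```python
# Prediction-error intrinsic reward with random features (toy dimensions).
import numpy as np

rng = np.random.default_rng(3)
obs_dim, feat_dim, act_dim = 8, 4, 2
phi = rng.normal(size=(obs_dim, feat_dim))                 # fixed random feature embedding
W = rng.normal(size=(feat_dim + act_dim, feat_dim)) * 0.1  # (here untrained) forward model

def intrinsic_reward(obs, action, next_obs):
    f, f_next = obs @ phi, next_obs @ phi
    pred = np.concatenate([f, action]) @ W                 # predicted next-state features
    return 0.5 * float(np.sum((pred - f_next) ** 2))       # prediction error = curiosity

s, s_next = rng.normal(size=obs_dim), rng.normal(size=obs_dim)
a = rng.normal(size=act_dim)
print(round(intrinsic_reward(s, a, s_next), 3))
```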
{
"docid": "bf14fb39f07e01bd6dc01b3583a726b6",
"text": "To provide a general context for library implementations of open source software (OSS), the purpose of this paper is to assess and evaluate the awareness and adoption of OSS by the LIS professionals working in various engineering colleges of Odisha. The study is based on survey method and questionnaire technique was used for collection data from the respondents. The study finds that although the LIS professionals of engineering colleges of Odisha have knowledge on OSS, their uses in libraries are in budding stage. Suggests that for the widespread use of OSS in engineering college libraries of Odisha, a cooperative and participatory organisational system, positive attitude of authorities and LIS professionals, proper training provision for LIS professionals need to be developed.",
"title": ""
},
{
"docid": "14838947ee3b95c24daba5a293067730",
"text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.",
"title": ""
},
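A simplified sketch of the AdaRank-style boosting loop summarised above, assuming single features as weak rankers, precision@1 as the performance measure (a stand-in for MAP/NDCG), and synthetic query data; it follows the reweight-and-combine structure described in the abstract rather than reproducing the paper's exact algorithm.

```python
# Boosting over queries: pick the best weak ranker, weight it, reweight hard queries.
import numpy as np

def p_at_1(scores, labels):
    return float(labels[int(np.argmax(scores))])

def adarank(queries, n_rounds=10):
    """queries: list of (feature_matrix [n_docs, n_feat], relevance labels)."""
    n_q = len(queries)
    n_feat = queries[0][0].shape[1]
    w = np.full(n_q, 1.0 / n_q)                  # distribution over queries
    alphas, chosen = [], []
    for _ in range(n_rounds):
        # weak ranker = single feature maximising the weighted performance measure
        perf = np.array([[p_at_1(X[:, f], y) for (X, y) in queries]
                         for f in range(n_feat)])
        f_best = int(np.argmax(perf @ w))
        e = perf[f_best]
        alpha = 0.5 * np.log((w @ (1 + e) + 1e-12) / (w @ (1 - e) + 1e-12))
        alphas.append(alpha); chosen.append(f_best)
        # performance of the combined ranker so far, per query
        combined = np.array([p_at_1(sum(a * X[:, f] for a, f in zip(alphas, chosen)), y)
                             for (X, y) in queries])
        w = np.exp(-combined); w /= w.sum()      # emphasise poorly served queries
    return alphas, chosen

rng = np.random.default_rng(1)
toy = [(rng.random((5, 3)), (rng.random(5) > 0.6).astype(float)) for _ in range(8)]
print(adarank(toy, n_rounds=3))
```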
{
"docid": "f08c6829b353c45b6a9a6473b4f9a201",
"text": "In this paper, we study the Symmetric Regularized Long Wave (SRLW) equations by finite difference method. We design some numerical schemes which preserve the original conservative properties for the equations. The first scheme is two-level and nonlinear-implicit. Existence of its difference solutions are proved by Brouwer fixed point theorem. It is proved by the discrete energy method that the scheme is uniquely solvable, unconditionally stable and second-order convergent for U in L1 norm, and for N in L2 norm on the basis of the priori estimates. The second scheme is three-level and linear-implicit. Its stability and second-order convergence are proved. Both of the two schemes are conservative so can be used for long time computation. However, they are coupled in computing so need more CPU time. Thus we propose another three-level linear scheme which is not only conservative but also uncoupled in computation, and give the numerical analysis on it. Numerical experiments demonstrate that the schemes are accurate and efficient. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "af07a7f4ffe29dda52bca62a803272fe",
"text": "OBJECTIVE\nTo evaluate the effectiveness and tolerance of intraarticular injection (IAI) of triamcinolone hexacetonide (TH) for the treatment of osteoarthritis (OA) of hand interphalangeal (IP) joints.\n\n\nMETHODS\nSixty patients who underwent IAI at the most symptomatic IP joint were randomly assigned to receive TH/lidocaine (LD; n = 30) with TH 20 mg/ml and LD 2%, or just LD (n = 30). The injected joint was immobilized with a splint for 48 h in both groups. Patients were assessed at baseline and at 1, 4, 8, and 12 weeks by a blinded observer. The following variables were assessed: pain at rest [visual analog scale (VAS)r], pain at movement (VASm), swelling (physician VASs), goniometry, grip and pinch strength, hand function, treatment improvement, daily requirement of paracetamol, and local adverse effects. The proposed treatment (IAI with TH/LD) was successful if statistical improvement (p < 0.05) was achieved in at least 2 of 3 VAS. Repeated-measures ANOVA test was used to analyze intervention response.\n\n\nRESULTS\nFifty-eight patients (96.67%) were women, and the mean age was 60.7 years (± 8.2). The TH/LD group showed greater improvement than the LD group for VASm (p = 0.014) and physician VASs (p = 0.022) from the first week until the end of the study. In other variables, there was no statistical difference between groups. No significant adverse effects were observed.\n\n\nCONCLUSION\nThe IAI with TH/LD has been shown to be more effective than the IAI with LD for pain on movement and joint swelling in patients with OA of the IP joints. Regarding pain at rest, there was no difference between groups.\n\n\nTRIAL REGISTRATION NUMBER\nClinicalTrials.gov (NCT02102620).",
"title": ""
},
{
"docid": "e583cf382c9a58a6f09acfcb345a381f",
"text": "DXC Technology were asked to participate in a Cyber Vulnerability Investigation into organizations in the Defense sector in the UK. Part of this work was to examine the influence of socio-technical and/or human factors on cyber security – where possible linking factors to specific technical risks. Initial research into the area showed that (commercially, at least) most approaches to developing security culture in organisations focus on end users and deal solely with training and awareness regarding identifying and avoiding social engineering attacks and following security procedures. The only question asked and answered is how to ensure individuals conform to security policy and avoid such attacks. But experience of recent attacks (e.g., Wannacry, Sony hacks) show that responses to cyber security requirements are not just determined by the end users’ level of training and awareness, but grow out of the wider organizational culture – with failures at different levels of the organization. This is a known feature of socio-technical research. As a result, we have sought to develop and apply a different approach to measuring security culture, based on discovering the distribution of beliefs and values (and resulting patterns of behavior) throughout the organization. Based on our experience, we show a way we can investigate these patterns of behavior and use them to identify socio-technical vulnerabilities by comparing current and ‘ideal’ behaviors. In doing so, we also discuss how this approach can be further developed and successfully incorporated into commercial practice, while retaining scientific validity.",
"title": ""
},
{
"docid": "b50b43bcc69f840e4ba4e26529788cab",
"text": "Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks. In this paper, we analyze failure cases of state-ofthe-art detectors and observe that most hard false positives result from classification instead of localization. We conjecture that: (1) Shared feature representation is not optimal due to the mismatched goals of feature learning for classification and localization; (2) multi-task learning helps, yet optimization of the multi-task loss may result in sub-optimal for individual tasks; (3) large receptive field for different scales leads to redundant context information for small objects. We demonstrate the potential of detector classification power by a simple, effective, and widely-applicable Decoupled Classification Refinement (DCR) network. DCR samples hard false positives from the base classifier in Faster RCNN and trains a RCNN-styled strong classifier. Experiments show new stateof-the-art results on PASCAL VOC and COCO without any bells and whistles.",
"title": ""
},
{
"docid": "fb1f3f300bcd48d99f0a553a709fdc89",
"text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.",
"title": ""
},
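A generic sketch of one incremental-conductance MPPT update, the baseline technique the abstract above modifies with its D-sweep; the step size, duty-cycle limits, and the boost-type sign convention (raising the duty cycle loads the panel harder and lowers its voltage) are assumptions, and the paper's D-sweep modification is not shown.

```python
# One incremental-conductance step: at the MPP, dI/dV == -I/V.
def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005):
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                               # voltage unchanged: decide from current only
        if di > 0:
            duty -= step
        elif di < 0:
            duty += step
    else:
        g_inc, g = di / dv, i / v             # incremental vs instantaneous conductance
        if g_inc > -g:                        # left of the MPP -> raise panel voltage
            duty -= step
        elif g_inc < -g:                      # right of the MPP -> lower panel voltage
            duty += step
    return min(max(duty, 0.05), 0.95)         # keep the duty cycle in a safe range

print(inc_cond_step(v=17.8, i=5.1, v_prev=17.5, i_prev=5.2, duty=0.42))
```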
{
"docid": "c043e7a5d5120f5a06ef6decc06c184a",
"text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures",
"title": ""
},
{
"docid": "5a573ae9fad163c6dfe225f59b246b7f",
"text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.",
"title": ""
},
{
"docid": "c7862136579a8340f22db5d6f3ee5f12",
"text": "A novel lighting system was devised for 3D defect inspection in the wire bonding process. Gold wires of 20 microm in diameter were bonded to connect the integrated circuit (IC) chip with the substrate. Bonding wire defects can be classified as 2D type and 3D type. The 2D-type defects include missed, shifted, or shorted wires. These defects can be inspected from a 2D top-view image of the wire. The 3D-type bonding wire defects are sagging wires, and are difficult to inspect from a 2D top-view image. A structured lighting system was designed and developed to facilitate all 2D-type and 3D-type defect inspection. The devised lighting system can be programmed to turn the structured LEDs on or off independently. Experiments show that the devised illumination system is effective for wire bonding inspection and will be valuable for further applications.",
"title": ""
},
{
"docid": "1ef2e54d021f9d149600f0bc7bebb0cd",
"text": "The field of open-domain conversation generation using deep neural networks has attracted increasing attention from researchers for several years. However, traditional neural language models tend to generate safe, generic reply with poor logic and no emotion. In this paper, an emotional conversation generation orientated syntactically constrained bidirectional-asynchronous framework called E-SCBA is proposed to generate meaningful (logical and emotional) reply. In E-SCBA, pre-generated emotion keyword and topic keyword are asynchronously introduced into the reply during the generation, and the process of decoding is much different from the most existing methods that generates reply from the first word to the end. A newly designed bidirectional-asynchronous decoder with the multi-stage strategy is proposed to support this idea, which ensures the fluency and grammaticality of reply by making full use of syntactic constraint. Through the experiments, the results show that our framework not only improves the diversity of replies, but gains a boost on both logic and emotion compared with baselines as well.",
"title": ""
},
{
"docid": "64ddf475e5fcf7407e4dfd65f95a68a8",
"text": "Fuzzy PID controllers have been developed and applied to many fields for over a period of 30 years. However, there is no systematic method to design membership functions (MFs) for inputs and outputs of a fuzzy system. Then optimizing the MFs is considered as a system identification problem for a nonlinear dynamic system which makes control challenges. This paper presents a novel online method using a robust extended Kalman filter to optimize a Mamdani fuzzy PID controller. The robust extended Kalman filter (REKF) is used to adjust the controller parameters automatically during the operation process of any system applying the controller to minimize the control error. The fuzzy PID controller is tuned about the shape of MFs and rules to adapt with the working conditions and the control performance is improved significantly. The proposed method in this research is verified by its application to the force control problem of an electro-hydraulic actuator. Simulations and experimental results show that proposed method is effective for the online optimization of the fuzzy PID controller. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "74aaf19d143d86b52c09e726a70a2ac0",
"text": "This paper presents simulation and experimental investigation results of steerable integrated lens antennas (ILAs) operating in the 60 GHz frequency band. The feed array of the ILAs is comprised by four switched aperture coupled microstrip antenna (ACMA) elements that allows steering between four different antenna main beam directions in one plane. The dielectric lenses of the designed ILAs are extended hemispherical quartz (ε = 3.8) lenses with the radiuses of 7.5 and 12.5 mm. The extension lengths of the lenses are selected through the electromagnetic optimization in order to achieve the maximum ILAs directivities and also the minimum directivity degradations of the outer antenna elements in the feed array (± 3 mm displacement) relatively to the inner ones (± 1 mm displacement). Simulated maximum directivities of the boresight beam of the designed ILAs are 19.8 dBi and 23.8 dBi that are sufficient for the steerable antennas for the millimeter-wave WLAN/WPAN communication systems. The feed ACMA array together with the waveguide to microstrip transition dedicated for experimental investigations is fabricated on high frequency and low cost Rogers 4003C substrate. Single Pole Double Through (SPDT) switches from Hittite are used in order to steer the ILA prototypes main beam directions. The experimental results of the fabricated electronically steerable quartz ILA prototypes prove the simulation results and show ±35° and ±22° angle sector coverage for the lenses with the 7.5 and 12.5 mm radiuses respectively.",
"title": ""
},
{
"docid": "e9358f48172423a421ef5edf6fe909f9",
"text": "PURPOSE\nTo describe a modification of the computer self efficacy scale for use in clinical settings and to report on the modified scale's reliability and construct validity.\n\n\nMETHODS\nThe computer self efficacy scale was modified to make it applicable for clinical settings (for use with older people or people with disabilities using everyday technologies). The modified scale was piloted, then tested with patients in an Australian inpatient rehabilitation setting (n = 88) to determine the internal consistency using Cronbach's alpha coefficient. Construct validity was assessed by correlation of the scale with age and technology use. Factor analysis using principal components analysis was undertaken to identify important constructs within the scale.\n\n\nRESULTS\nThe modified computer self efficacy scale demonstrated high internal consistency with a standardised alpha coefficient of 0.94. Two constructs within the scale were apparent; using the technology alone, and using the technology with the support of others. Scores on the scale were correlated with age and frequency of use of some technologies thereby supporting construct validity.\n\n\nCONCLUSIONS\nThe modified computer self efficacy scale has demonstrated reliability and construct validity for measuring the self efficacy of older people or people with disabilities when using everyday technologies. This tool has the potential to assist clinicians in identifying older patients who may be more open to using new technologies to maintain independence.",
"title": ""
},
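For reference, a short sketch of how Cronbach's alpha, the internal-consistency statistic reported for the modified scale above, is computed; the response matrix is made-up example data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([[4, 5, 4, 5], [2, 2, 3, 2], [5, 5, 4, 4], [3, 3, 3, 4], [1, 2, 2, 1]])
print(round(cronbach_alpha(responses), 3))
```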
{
"docid": "b12bae586bc49a12cebf11cca49c0386",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
},
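A toy sketch of the Monte-Carlo dropout idea used above for Bayesian-style uncertainty estimates: dropout is left active at test time and the spread of predictions across stochastic forward passes serves as the uncertainty signal. The random, untrained weights and the network shape are placeholders; the density-estimation part of the method is not shown.

```python
# MC-dropout predictive mean and variance from repeated stochastic forward passes.
import numpy as np

rng = np.random.default_rng(4)
W1, W2 = rng.normal(size=(10, 32)) * 0.3, rng.normal(size=(32, 3)) * 0.3

def mc_dropout_predict(x, n_samples=50, p_drop=0.5):
    outs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)
        h = h * (rng.random(h.shape) > p_drop)     # dropout kept ON at test time
        logits = h @ W2
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        outs.append(probs)
    outs = np.array(outs)
    return outs.mean(axis=0), outs.var(axis=0)     # mean prediction, per-class uncertainty

mean, var = mc_dropout_predict(rng.normal(size=10))
print(mean.round(3), var.max().round(4))
```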
{
"docid": "2959b7da07ce8b0e6825819566bce9ab",
"text": "Social isolation among the elderly is a concern in developed countries. Using a randomized trial, this study examined the effect of a social isolation prevention program on loneliness, depression, and subjective well-being of the elderly in Japan. Among the elderly people who relocated to suburban Tokyo, 63 who responded to a pre-test were randomized and assessed 1 and 6 months after the program. Four sessions of a group-based program were designed to prevent social isolation by improving community knowledge and networking with other participants and community \"gatekeepers.\" The Life Satisfaction Index A (LSI-A), Geriatric Depression Scale (GDS), Ando-Osada-Kodama (AOK) loneliness scale, social support, and other variables were used as outcomes of this study. A linear mixed model was used to compare 20 of the 21 people in the intervention group to 40 of the 42 in the control group, and showed that the intervention program had a significant positive effect on LSI-A, social support, and familiarity with services scores and a significant negative effect on AOK over the study period. The program had no significant effect on depression. The findings of this study suggest that programs aimed at preventing social isolation are effective when they utilize existing community resources, are tailor-made based on the specific needs of the individual, and target people who can share similar experiences.",
"title": ""
},
{
"docid": "42979dd6ad989896111ef4de8d26b2fb",
"text": "Online dating services let users expand their dating pool beyond their social network and specify important characteristics of potential partners. To assess compatibility, users share personal information — e.g., identifying details or sensitive opinions about sexual preferences or worldviews — in profiles or in one-on-one communication. Thus, participating in online dating poses inherent privacy risks. How people reason about these privacy risks in modern online dating ecosystems has not been extensively studied. We present the results of a survey we designed to examine privacy-related risks, practices, and expectations of people who use or have used online dating, then delve deeper using semi-structured interviews. We additionally analyzed 400 Tinder profiles to explore how these issues manifest in practice. Our results reveal tensions between privacy and competing user values and goals, and we demonstrate how these results can inform future designs.",
"title": ""
}
] |
scidocsrr
|
a7ecc679e00a090a141312f80c738635
|
PowerSpy: Location Tracking using Mobile Device Power Analysis
|
[
{
"docid": "5e286453dfe55de305b045eaebd5f8fd",
"text": "Target tracking is an important element of surveillance, guidance or obstacle avoidance, whose role is to determine the number, position and movement of targets. The fundamental building block of a tracking system is a filter for recursive state estimation. The Kalman filter has been flogged to death as the work-horse of tracking systems since its formulation in the 60's. In this talk we look beyond the Kalman filter at sequential Monte Carlo methods, collectively referred to as particle filters. Particle filters have become a popular method for stochastic dynamic estimation problems. This popularity can be explained by a wave of optimism among practitioners that traditionally difficult nonlinear/non-Gaussian dynamic estimation problems can now be solved accurately and reliably using this methodology. The computational cost of particle filters have often been considered their main disadvantage, but with ever faster computers and more efficient particle filter algorithms, this argument is becoming less relevant. The talk is organized in two parts. First we review the historical development and current status of particle filtering and its relevance to target tracking. We then consider in detail several tracking applications where conventional (Kalman based) methods appear inappropriate (unreliable or inaccurate) and where we instead need the potential benefits of particle filters. 1 The paper was written together with David Salmond, QinetiQ, UK.",
"title": ""
},
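A minimal sketch of a bootstrap particle filter on a toy one-dimensional random-walk model, illustrating the predict/weight/resample cycle behind the sequential Monte Carlo methods discussed above; the noise levels and particle count are arbitrary choices for the example.

```python
# Bootstrap particle filter: propagate particles, weight by likelihood, resample.
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_pf(measurements, n_particles=500, q=0.5, r=1.0):
    """Random-walk state model with process noise q and Gaussian measurement noise r."""
    particles = rng.normal(0.0, 5.0, n_particles)               # diffuse initial prior
    estimates = []
    for z in measurements:
        particles = particles + rng.normal(0.0, q, n_particles)        # predict
        weights = np.exp(-0.5 * ((z - particles) / r) ** 2)             # update (likelihood)
        weights /= weights.sum()
        estimates.append(float(weights @ particles))                    # posterior mean
        idx = rng.choice(n_particles, n_particles, p=weights)           # resample
        particles = particles[idx]
    return estimates

true_track = np.cumsum(rng.normal(0.0, 0.5, 20))                # hidden random walk
obs = true_track + rng.normal(0.0, 1.0, 20)
print([round(e, 2) for e in bootstrap_pf(obs)][:5])
```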
{
"docid": "74227709f4832c3978a21abb9449203b",
"text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.",
"title": ""
}
] |
[
{
"docid": "64a730ce8aad5d4679409be43a291da7",
"text": "Background In the last years, it has been seen a shifting on society's consumption patterns, from mass consumption to second-hand culture. Moreover, consumer's perception towards second-hand stores, has been changing throughout the history of second-hand markets, according to the society's values prevailing in each time. Thus, the purchase intentions regarding second-hand clothes are influence by motivational and moderating factors according to the consumer's perception. Therefore, it was employed the theory of Guiot and Roux (2010) on motivational factors towards second-hand shopping and previous researches on moderating factors towards second-hand shopping. Purpose The purpose of this study is to explore consumer's perception and their purchase intentions towards second-hand clothing stores. Method For this, a qualitative and abductive approach was employed, combined with an exploratory design. Semi-structured face-to-face interviews were conducted utilizing a convenience sampling approach. Conclusion The findings show that consumers perception and their purchase intentions are influenced by their age and the environment where they live. However, the environment affect people in different ways. From this study, it could be found that elderly consumers are influenced by values and beliefs towards second-hand clothes. Young people are very influenced by the concept of fashion when it comes to second-hand clothes. For adults, it could be observed that price and the sense of uniqueness driver their decisions towards second-hand clothes consumption. The main motivational factor towards second-hand shopping was price. On the other hand, risk of contamination was pointed as the main moderating factor towards second-hand purchase. The study also revealed two new motivational factors towards second-hand clothing shopping, such charity and curiosity. Managers of second-hand clothing stores can make use of these findings to guide their decisions, especially related to improvements that could be done in order to make consumers overcoming the moderating factors towards second-hand shopping. The findings of this study are especially useful for second-hand clothing stores in Borås, since it was suggested couple of improvements for those stores based on the participant's opinions.",
"title": ""
},
{
"docid": "7ddc7a3fffc582f7eee1d0c29914ba1a",
"text": "Cyclic neutropenia is an uncommon hematologic disorder characterized by a marked decrease in the number of neutrophils in the peripheral blood occurring at regular intervals. The neutropenic phase is characteristically associated with clinical symptoms such as recurrent fever, malaise, headaches, anorexia, pharyngitis, ulcers of the oral mucous membrane, and gingival inflammation. This case report describes a Japanese girl who has this disease and suffers from periodontitis and oral ulceration. Her case has been followed up for the past 5 years from age 7 to 12. The importance of regular oral hygiene, careful removal of subgingival plaque and calculus, and periodic and thorough professional mechanical tooth cleaning was emphasized to arrest the progress of periodontal breakdown. Local antibiotic application with minocycline ointment in periodontal pockets was beneficial as an ancillary treatment, especially during neutropenic periods.",
"title": ""
},
{
"docid": "75060c7027db4e75bc42f3f3c84cad9b",
"text": "In this paper, we investigate whether superior performance on corporate social responsibility (CSR) strategies leads to better access to finance. We hypothesize that better access to finance can be attributed to a) reduced agency costs due to enhanced stakeholder engagement and b) reduced informational asymmetry due to increased transparency. Using a large cross-section of firms, we find that firms with better CSR performance face significantly lower capital constraints. Moreover, we provide evidence that both of the hypothesized mechanisms, better stakeholder engagement and transparency around CSR performance, are important in reducing capital constraints. The results are further confirmed using several alternative measures of capital constraints, a paired analysis based on a ratings shock to CSR performance, an instrumental variables and also a simultaneous equations approach. Finally, we show that the relation is driven by both the social and the environmental dimension of CSR.",
"title": ""
},
{
"docid": "66382b88e0faa573251d5039ccd65d6c",
"text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.",
"title": ""
},
{
"docid": "6766977de80074325165a82eeb08d671",
"text": "We synthesized the literature on gamification of education by conducting a review of the literature on gamification in the educational and learning context. Based on our review, we identified several game design elements that are used in education. These game design elements include points, levels/stages, badges, leaderboards, prizes, progress bars, storyline, and feedback. We provided examples from the literature to illustrate the application of gamification in the educational context.",
"title": ""
},
{
"docid": "f83a16d393c78d6ba0e65a4659446e7e",
"text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"title": ""
},
{
"docid": "b8def7be21f014693589ae99385412dd",
"text": "Automatic image captioning has received increasing attention in recent years. Although there are many English datasets developed for this problem, there is only one Turkish dataset and it is very small compared to its English counterparts. Creating a new dataset for image captioning is a very costly and time consuming task. This work is a first step towards transferring the available, large English datasets into Turkish. We translated English captioning datasets into Turkish by using an automated translation tool and we trained an image captioning model on the automatically obtained Turkish captions. Our experiments show that this model yields the best performance so far on Turkish captioning.",
"title": ""
},
{
"docid": "8dfdd829881074dc002247c9cd38eba8",
"text": "The limited battery lifetime of modern embedded systems and mobile devices necessitates frequent battery recharging or replacement. Solar energy and small-size photovoltaic (PV) systems are attractive solutions to increase the autonomy of embedded and personal devices attempting to achieve perpetual operation. We present a battery less solar-harvesting circuit that is tailored to the needs of low-power applications. The harvester performs maximum-power-point tracking of solar energy collection under nonstationary light conditions, with high efficiency and low energy cost exploiting miniaturized PV modules. We characterize the performance of the circuit by means of simulation and extensive testing under various charging and discharging conditions. Much attention has been given to identify the power losses of the different circuit components. Results show that our system can achieve low power consumption with increased efficiency and cheap implementation. We discuss how the scavenger improves upon state-of-the-art technology with a measured power consumption of less than 1 mW. We obtain increments of global efficiency up to 80%, diverging from ideality by less than 10%. Moreover, we analyze the behavior of super capacitors. We find that the voltage across the supercapacitor may be an unreliable indicator for the stored energy under some circumstances, and this should be taken into account when energy management policies are used.",
"title": ""
},
{
"docid": "249a09e24ce502efb4669603b54b433d",
"text": "Deep Neural Networks (DNNs) are universal function approximators providing state-ofthe-art solutions on wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology, (2) quantification of DNNs stability regarding adversarial examples (i.e. modified inputs fooling DNN predictions whilst undetectable to humans), (3) absence of generalization guarantees and controllable behaviors for ambiguous patterns, (4) leverage unlabeled data to apply DNNs to domains where expert labeling is scarce as in the medical field. Answering those points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability. 1 ar X iv :1 71 0. 09 30 2v 3 [ st at .M L ] 6 N ov 2 01 7",
"title": ""
},
{
"docid": "b8cf5e3802308fe941848fea51afddab",
"text": "Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.",
"title": ""
},
{
"docid": "43e5146e4a7723cf391b013979a1da32",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.",
"title": ""
},
{
"docid": "0321ef8aeb0458770cd2efc35615e11c",
"text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.",
"title": ""
},
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "4c16117954f9782b3a22aff5eb50537a",
"text": "Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related (e.g., image-to-image, video-video), utilizing similar or shared networks to transform domain-specific properties like texture, coloring, and line shapes. Here, we demonstrate that it is possible to transfer across modalities (e.g., image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (e.g., variational autoencoder and a generative adversarial network). We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations. Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.",
"title": ""
},
{
"docid": "3b7cfe02a34014c84847eea4790037e2",
"text": "Non-technical losses (NTL) such as electricity theft cause significant harm to our economies, as in some countries they may range up to 40% of the total electricity distributed. Detecting NTLs requires costly on-site inspections. Accurate prediction of NTLs for customers using machine learning is therefore crucial. To date, related research largely ignore that the two classes of regular and non-regular customers are highly imbalanced, that NTL proportions may change and mostly consider small data sets, often not allowing to deploy the results in production. In this paper, we present a comprehensive approach to assess three NTL detection models for different NTL proportions in large real world data sets of 100Ks of customers: Boolean rules, fuzzy logic and Support Vector Machine. This work has resulted in appreciable results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart meter research in order to report their effectiveness on imbalanced and large real world data sets.",
"title": ""
},
{
"docid": "aea4b65d1c30e80e7f60a52dbecc78f3",
"text": "The aim of this paper is to automate the car and the car parking as well. It discusses a project which presents a miniature model of an automated car parking system that can regulate and manage the number of cars that can be parked in a given space at any given time based on the availability of parking spot. Automated parking is a method of parking and exiting cars using sensing devices. The entering to or leaving from the parking lot is commanded by an Android based application. We have studied some of the existing systems and it shows that most of the existing systems aren't completely automated and require a certain level of human interference or interaction in or with the system. The difference between our system and the other existing systems is that we aim to make our system as less human dependent as possible by automating the cars as well as the entire parking lot, on the other hand most existing systems require human personnel (or the car owner) to park the car themselves. To prove the effectiveness of the system proposed by us we have developed and presented a mathematical model which will be discussed in brief further in the paper.",
"title": ""
},
{
"docid": "bb94ef2ab26fddd794a5b469f3b51728",
"text": "This study examines the treatment outcome of a ten weeks dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject-design with pre-test, post-test, and six months follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. Repeated measures ANOVA revealed that dance movement therapy participants in all QOL dimensions always more than the WG. In the short term, they significantly improved in the Psychological domain (p > .001, WHOQOL; p > .01, Munich Life Dimension List), Social relations/life (p > .10, WHOQOL; p > .10, Munich Life Dimension List), Global value (p > .05, WHOQOL), Physical health (p > .05, Munich Life Dimension List), and General life (p > .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the psychological domain (p > .05, WHOQOL; p > .05, Munich Life Dimension List), Spirituality (p > .10, WHOQOL), and General life (p > .05, Munich Life Dimension List). Dance movement therapy is effective in the shortand long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1b5bc53b1039f3e7aecbc8dcb2f3b9a8",
"text": "Agricultural lands occupy 37% of the earth's land surface. Agriculture accounts for 52 and 84% of global anthropogenic methane and nitrous oxide emissions. Agricultural soils may also act as a sink or source for CO2, but the net flux is small. Many agricultural practices can potentially mitigate greenhouse gas (GHG) emissions, the most prominent of which are improved cropland and grazing land management and restoration of degraded lands and cultivated organic soils. Lower, but still significant mitigation potential is provided by water and rice management, set-aside, land use change and agroforestry, livestock management and manure management. The global technical mitigation potential from agriculture (excluding fossil fuel offsets from biomass) by 2030, considering all gases, is estimated to be approximately 5500-6000Mt CO2-eq.yr-1, with economic potentials of approximately 1500-1600, 2500-2700 and 4000-4300Mt CO2-eq.yr-1 at carbon prices of up to 20, up to 50 and up to 100 US$ t CO2-eq.-1, respectively. In addition, GHG emissions could be reduced by substitution of fossil fuels for energy production by agricultural feedstocks (e.g. crop residues, dung and dedicated energy crops). The economic mitigation potential of biomass energy from agriculture is estimated to be 640, 2240 and 16 000Mt CO2-eq.yr-1 at 0-20, 0-50 and 0-100 US$ t CO2-eq.-1, respectively.",
"title": ""
},
{
"docid": "d9214591462b0780ede6d58dab42f48c",
"text": "Software testing in general and graphical user interface (GUI) testing in particular is one of the major challenges in the lifecycle of any software system. GUI testing is inherently more difficult than the traditional and command-line interface testing. Some of the factors that make GUI testing different from the traditional software testing and significantly more difficult are: a large number of objects, different look and feel of objects, many parameters associated with each object, progressive disclosure, complex inputs from multiple sources, and graphical outputs. The existing testing techniques for the creation and management of test suites need to be adapted/enhanced for GUIs, and new testing techniques are desired to make the creation and management of test suites more efficient and effective. In this article, a methodology is proposed to create test suites for a GUI. The proposed methodology organizes the testing activity into various levels. The tests created at a particular level can be reused at higher levels. This methodology extends the notion of modularity and reusability to the testing phase. The organization and management of the created test suites resembles closely to the structure of the GUI under test.",
"title": ""
},
{
"docid": "514d9326cb54cec16f4dfb05deca3895",
"text": "Photo publishing in Social Networks and other Web2.0 applications has become very popular due to the pervasive availability of cheap digital cameras, powerful batch upload tools and a huge amount of storage space. A portion of uploaded images are of a highly sensitive nature, disclosing many details of the users' private life. We have developed a web service which can detect private images within a user's photo stream and provide support in making privacy decisions in the sharing context. In addition, we present a privacy-oriented image search application which automatically identifies potentially sensitive images in the result set and separates them from the remaining pictures.",
"title": ""
}
] |
scidocsrr
|
3de2bb9f44e7ca53fcd55dc4e98f32ec
|
ANTECEDENTS AND DISTINCTIONS BETWEEN ONLINE TRUST AND DISTRUST : PREDICTING HIGH-AND LOW-RISK INTERNET BEHAVIORS
|
[
{
"docid": "4fa7ee44cdc4b0cd439723e9600131bd",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "30a617e3f7e492ba840dfbead690ae39",
"text": "Information systems professionals must pay attention to online customer retention. Drawing on the relationship marketing literature, we formulated and tested a model to explain B2C user repurchase intention from the perspective of relationship quality. The model was empirically tested through a survey conducted in Northern Ireland. Results showed that online relationship quality and perceived website usability positively impacted customer repurchase intention. Moreover, online relationship quality was positively influenced by perceived vendor expertise in order fulfillment, perceived vendor reputation, and perceived website usability, whereas distrust in vendor behavior negatively influenced online relationship quality. Implications of these findings are discussed. 2011 Elsevier B.V. All rights reserved. § This work was partially supported by Strategic Research Grant at City University of Hong Kong, China (No. CityU 7002521), and the National Nature Science Foundation of China (No. 70773008). * Corresponding author at: P7722, City University of Hong Kong, Hong Kong, China. Tel.: +852 27887492; fax: +852 34420370. E-mail address: [email protected] (Y. Fang).",
"title": ""
}
] |
[
{
"docid": "3f58f24dbc2d75b258c003fd6396f505",
"text": "The stochastic multi-armed bandit problem is an important model for studying the explorationexploitation tradeoff in reinforcement learning. Although many algorithms for the problem are well-understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as -greedy and Boltzmann exploration outperform theoretically sound algorithms on most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly. These properties are not described by current theory, even though they can be exploited in practice in the design of heuristics. Thirdly, the algorithms’ performance relative each to other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50% more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could have still been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies.",
"title": ""
},
{
"docid": "1da4635f5fcfe102b52a9ba9bb032def",
"text": "This paper presents a corpus study of evaluative and speculative language. Knowledge of such language would be useful in many applications, such as text categorization and summarization. Analyses of annotator agreement and of characteristics of subjective language are performed. This study yields knowledge needed to design e ective machine learning systems for identifying subjective language.",
"title": ""
},
{
"docid": "18a524545090542af81e0a66df3a1395",
"text": "What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.\n When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses.\n We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.",
"title": ""
},
{
"docid": "44582f087f9bb39d6e542ff7b600d1c7",
"text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.",
"title": ""
},
{
"docid": "8b060d80674bd3f329a675f1a3f4bce2",
"text": "Smartphones are ubiquitous devices that offer endless possibilities for health-related applications such as Ambient Assisted Living (AAL). They are rich in sensors that can be used for Human Activity Recognition (HAR) and monitoring. The emerging problem now is the selection of optimal combinations of these sensors and existing methods to accurately and efficiently perform activity recognition in a resource and computationally constrained environment. To accomplish efficient activity recognition on mobile devices, the most discriminative features and classification algorithms must be chosen carefully. In this study, sensor fusion is employed to improve the classification results of a lightweight classifier. Furthermore, the recognition performance of accelerometer, gyroscope and magnetometer when used separately and simultaneously on a feature-level sensor fusion is examined to gain valuable knowledge that can be used in dynamic sensing and data collection. Six ambulatory activities, namely, walking, running, sitting, standing, walking upstairs and walking downstairs, are inferred from low-sensor data collected from the right trousers pocket of the subjects and feature selection is performed to further optimize resource use.",
"title": ""
},
{
"docid": "9b9a04a859b51866930b3fb4d93653b6",
"text": "BACKGROUND\nResults of several studies have suggested a probable etiologic association between Epstein-Barr virus (EBV) and leukemias; therefore, the aim of this study was to investigate the association of EBV in childhood leukemia.\n\n\nMETHODS\nA direct isothermal amplification method was developed for detection of the latent membrane protein 1 (LMP1) of EBV in the peripheral blood of 80 patients with leukemia (54 had lymphoid leukemia and 26 had myeloid leukemia) and of 20 hematologically healthy control subjects.\n\n\nRESULTS\nEBV LMP1 gene transcripts were found in 29 (36.3%) of the 80 patients with leukemia but in none of the healthy controls (P < .0001). Of the 29 EBV(+) cases, 23 (79.3%), 5 (17.3%), and 1 (3.4%) were acute lymphoblastic leukemia, acute myeloid leukemia, and chronic myeloid leukemia, respectively.\n\n\nCONCLUSION\nEBV LMP1 gene transcriptional activity was observed in a significant proportion of patients with acute lymphoblastic leukemia. EBV infection in patients with lymphoid leukemia may be a factor involved in the high incidence of pediatric leukemia in the Sudan.",
"title": ""
},
{
"docid": "438747d014f4bc65d7e5c2d7a1abaaa0",
"text": "Phishing refers to fraudulent social engineering techniques used to elicit sensitive information from unsuspecting victims. In this paper, our scheme is aimed at detecting phishing mails which do not contain any links but bank on the victim's curiosity by luring them into replying with sensitive information. We exploit the common features among all such phishing emails such as non-mentioning of the victim's name in the email, a mention of monetary incentive and a sentence inducing the recipient to reply. This textual analysis can be further combined with header analysis of the email so that a final combined evaluation on the basis of both these scores can be done. We have shown that this method is far better than the existing Phishing Email Detection techniques as this covers emails without links while the pre-existing methods were based on the presumption of link(s).",
"title": ""
},
{
"docid": "bb896fd511a8b6306cc9f2a17639cd71",
"text": "We present the results of a user study that compares different ways of representing Dual-Scale data charts. Dual-Scale charts incorporate two different data resolutions into one chart in order to emphasize data in regions of interest or to enable the comparison of data from distant regions. While some design guidelines exist for these types of charts, there is currently little empirical evidence on which to base their design. We fill this gap by discussing the design space of Dual-Scale cartesian-coordinate charts and by experimentally comparing the performance of different chart types with respect to elementary graphical perception tasks such as comparing lengths and distances. Our study suggests that cut-out charts which include collocated full context and focus are the best alternative, and that superimposed charts in which focus and context overlap on top of each other should be avoided.",
"title": ""
},
{
"docid": "91f1509dd2c6b22d1553b3a5a8a618e9",
"text": "Witten and Frank 's textbook was one of two books that 1 used for a data mining class in the Fall o f 2001. T h e book covers all major methods o f data mining that p roduce a knowledge representa t ion as output . Knowledge representa t ion is hereby unders tood as a representat ion that can be studied, unders tood, and interpreted by human beings, at least in principle. Thus , neural networks and genetic a lgor i thms are excluded f rom the topics of this textbook. We need to say \"can be unders tood in pr inciple\" because a large decision tree or a large rule set may be as hard to interpret as a neural network.",
"title": ""
},
{
"docid": "c227cae0ec847a227945f1dec0b224d2",
"text": "We present a highly flexible and efficient software pipeline for programmable triangle voxelization. The pipeline, entirely written in CUDA, supports both fully conservative and thin voxelizations, multiple boolean, floating point, vector-typed render targets, user-defined vertex and fragment shaders, and a bucketing mode which can be used to generate 3D A-buffers containing the entire list of fragments belonging to each voxel. For maximum efficiency, voxelization is implemented as a sort-middle tile-based rasterizer, while the A-buffer mode, essentially performing 3D binning of triangles over uniform grids, uses a sort-last pipeline. Despite its major flexibility, the performance of our tile-based rasterizer is always competitive with and sometimes more than an order of magnitude superior to that of state-of-the-art binary voxelizers, whereas our bucketing system is up to 4 times faster than previous implementations. In both cases the results have been achieved through the use of careful load-balancing and high performance sorting primitives.",
"title": ""
},
{
"docid": "a70e664e2fcea37836cc55096295c4f4",
"text": "This article reviews published data on familial recurrent hydatidiform mole with particular reference to the genetic basis of this condition, the likely outcome of subsequent pregnancies in affected women and the risk of persistent trophoblastic disease following molar pregnancies in these families. Familial recurrent hydatidiform mole is characterized by recurrent complete hydatidiform moles of biparental, rather than the more usual androgenetic, origin. Although the specific gene defect in these families has not been identified, genetic mapping has shown that in most families the gene responsible is located in a 1.1 Mb region on chromosome 19q13.4. Mutations in this gene result in dysregulation of imprinting in the female germ line with abnormal development of both embryonic and extraembryonic tissue. Subsequent pregnancies in women diagnosed with this condition are likely to be complete hydatidiform moles. In 152 pregnancies in affected women, 113 (74%) were complete hydatidiform moles, 26 (17%) were miscarriages, 6 (4%) were partial hydatidiform moles, and 7 (5%) were normal pregnancies. Molar pregnancies in women with familial recurrent hydatidiform mole have a risk of progressing to persistent trophoblastic disease similar to that of androgenetic complete hydatidiform mole.",
"title": ""
},
{
"docid": "057df3356022c31db27b1f165c827524",
"text": "Eating disorders in dancers are thought to be common, but the exact rates remain to be clarified. The aim of this study is to systematically compile and analyse the rates of eating disorders in dancers. A literature search, appraisal and meta-analysis were conducted. Thirty-three relevant studies were published between 1966 and 2013 with sufficient data for extraction. Primary data were extracted as raw numbers or confidence intervals. Risk ratios and 95% confidence intervals were calculated for controlled studies. The overall prevalence of eating disorders was 12.0% (16.4% for ballet dancers), 2.0% (4% for ballet dancers) for anorexia, 4.4% (2% for ballet dancers) for bulimia and 9.5% (14.9% for ballet dancers) for eating disorders not otherwise specified (EDNOS). The dancer group had higher mean scores on the EAT-26 and the Eating Disorder Inventory subscales. Dancers, in general, had a higher risk of suffering from eating disorders in general, anorexia nervosa and EDNOS, but no higher risk of suffering from bulimia nervosa. The study concluded that as dancers had a three times higher risk of suffering from eating disorders, particularly anorexia nervosa and EDNOS, specifically designed services for this population should be considered.",
"title": ""
},
{
"docid": "8c24f4e178ebe403da3f90f05b97ac17",
"text": "The success of the Human Genome Project and the powerful tools of molecular biology have ushered in a new era of medicine and nutrition. The pharmaceutical industry expects to leverage data from the Human Genome Project to develop new drugs based on the genetic constitution of the patient; likewise, the food industry has an opportunity to position food and nutritional bioactives to promote health and prevent disease based on the genetic constitution of the consumer. This new era of molecular nutrition--that is, nutrient-gene interaction--can unfold in dichotomous directions. One could focus on the effects of nutrients or food bioactives on the regulation of gene expression (ie, nutrigenomics) or on the impact of variations in gene structure on one's response to nutrients or food bioactives (ie, nutrigenetics). The challenge of the public health nutritionist will be to balance the needs of the community with those of the individual. In this regard, the excitement and promise of molecular nutrition should be tempered by the need to validate the scientific data emerging from the disciplines of nutrigenomics and nutrigenetics and the need to educate practitioners and communicate the value to consumers-and to do it all within a socially responsible bioethical framework.",
"title": ""
},
{
"docid": "e440ad1afbbfbf5845724fd301051d92",
"text": "The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of highand low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing. TYPE OF PAPER AND",
"title": ""
},
{
"docid": "d30e2123e3c21823263ceadf3b332485",
"text": "In this paper a fractional order proportional-derivative (FO-PD) control strategy is presented and applied to AR. Drone quadrotor system. The controller parameters are calculated based on specifying a certain gain crossover frequency, a phase margin and a robustness to gain variations. Its performance is compared against two other integer order controllers; i) Extended Prediction Self-Adaptive Control (EPSAC) approach to Model Predictive Control (MPC) ii) Integer order PD controller. The closed loop control simulations applied on the AR. Drone system indicate the proposed controller outperforms the integer order PD control. Additionally, the proposed controller has less complexity but similar performance as MPC based control.",
"title": ""
},
{
"docid": "055e41fd6ace430ea9593a30e3dd02d2",
"text": "Every day we are exposed to different ideas, or memes, competing with each other for our attention. Previous research explained popularity and persistence heterogeneity of memes by assuming them in competition for limited attention resources, distributed in a heterogeneous social network. Little has been said about what characteristics make a specific meme more likely to be successful. We propose a similarity-based explanation: memes with higher similarity to other memes have a significant disadvantage in their potential popularity. We employ a meme similarity measure based on semantic text analysis and computer vision to prove that a meme is more likely to be successful and to thrive if its characteristics make it unique. Our results show that indeed successful memes are located in the periphery of the meme similarity space and that our similarity measure is a promising predictor of a meme success.",
"title": ""
},
{
"docid": "6706ad68059944988c41ba96e6d67f7c",
"text": "This paper investigates the motives, behavior, and characteristics shaping mutual fund managers’ willingness to incorporate Environmental, Social and Governance (ESG) issues into investment decision making. Using survey evidence from fund managers from five different countries, we demonstrate that this predisposition is the stronger, the shorter their average forecasting horizon and the higher their level of reliance on business risk in portfolio management is. We also find that the propensity to incorporate ESG factors is positively related to an increasing level of risk aversion, an increasing importance of salary change and senior management approval/disapproval as motivating factors as well as length of professional experience in current fund and increasing significance of assessment by superiors in remuneration. Overall, our evidence suggests that ESG diligence among fund managers serves mainly as a method for mitigating risk and is typically motivated by herding; it is much less important as a tool for additional value creation. The prevalent use of ESG criteria in mitigating risk is in contrast with traditional approach, but it is in line with behavioral finance theory. Additionally, our results also show a strong difference in the length of the forecasting horizon between continental European and Anglo-Saxon fund managers.",
"title": ""
},
{
"docid": "27a4b74d3c47fc25a8564cd824aa9e66",
"text": "Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are always regarded as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and it significantly influences the performance of grids. By now, there have been some algorithms for grid workflow scheduling, but most of them can only tackle the problems with a single QoS parameter or with small-scale workflows. In this frame, this paper aims at proposing an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are done in ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "e3c7135441b17701caa4f2fee71837be",
"text": "We dissected 50 head halves of 25 Japanese cadavers (10 males, 15 females) to investigate the innervations of the levator veli palatini (LVP) and superior constrictor pharyngis. The branches supplying the LVP were classified into the following three types according to their origins: supplying branches that originated from the pharyngeal branch of the glossopharyngeal nerve (type I, four sides, 8%), branches that originated from a communicating branch between the pharyngeal branches of the glossopharyngeal and vagus nerves (type II, 36 sides, 72%), and those that originated from the pharyngeal branch of the vagus nerve (type III, 10 sides, 20%). In previous studies, supplying branches of type I were seldom described. Regarding the innervation of the superior constrictor, some variations were observed, and we consider it likely that there is a close relationship between these variations and the type of innervation of the LVP.",
"title": ""
},
{
"docid": "f827c29bb9dd6073e626b7457775000c",
"text": "Inter vehicular communication is a technology where vehicles act as different nodes to form a network. In a vehicular network different vehicles communicate among each other via wireless access .Authentication is very crucial security service for inter vehicular communication (IVC) in Vehicular Information Network. It is because, protecting vehicles from any attempt to cause damage (misuse) to their private data and the attacks on their privacy. In this survey paper, we investigate the authentication issues for vehicular information network architecture based on the communication principle of named data networking (NDN). This paper surveys the most emerging paradigm of NDN in vehicular information network. So, we aims this survey paper helps to improve content naming, addressing, data aggregation and mobility for IVC in the vehicular information network.",
"title": ""
}
] |
scidocsrr
|
ee89ad602cb9fc23256bf80cde48ed9e
|
Crowdsourcing, Attention and Productivity
|
[
{
"docid": "fc40a4af9411d0e9f494b13cbb916eac",
"text": "P (P2P) file sharing networks are an important medium for the distribution of information goods. However, there is little empirical research into the optimal design of these networks under real-world conditions. Early speculation about the behavior of P2P networks has focused on the role that positive network externalities play in improving performance as the network grows. However, negative network externalities also arise in P2P networks because of the consumption of scarce network resources or an increased propensity of users to free ride in larger networks, and the impact of these negative network externalities—while potentially important—has received far less attention. Our research addresses this gap in understanding by measuring the impact of both positive and negative network externalities on the optimal size of P2P networks. Our research uses a unique dataset collected from the six most popular OpenNap P2P networks between December 19, 2000, and April 22, 2001. We find that users contribute additional value to the network at a decreasing rate and impose costs on the network at an increasing rate, while the network increases in size. Our results also suggest that users are less likely to contribute resources to the network as the network size increases. Together, these results suggest that the optimal size of these centralized P2P networks is bounded—At some point the costs that a marginal user imposes on the network will exceed the value they provide to the network. This finding is in contrast to early predictions that larger P2P networks would always provide more value to users than smaller networks. Finally, these results also highlight the importance of considering user incentives—an important determinant of resource sharing in P2P networks—in network design.",
"title": ""
}
] |
[
{
"docid": "ca683d498e690198ca433050c3d91fd0",
"text": "Cross-graph Relational Learning (CGRL) refers to the problem of predicting the strengths or labels of multi-relational tuples of heterogeneous object types, through the joint inference over multiple graphs which specify the internal connections among each type of objects. CGRL is an open challenge in machine learning due to the daunting number of all possible tuples to deal with when the numbers of nodes in multiple graphs are large, and because the labeled training instances are extremely sparse as typical. Existing methods such as tensor factorization or tensor-kernel machines do not work well because of the lack of convex formulation for the optimization of CGRL models, the poor scalability of the algorithms in handling combinatorial numbers of tuples, and/or the non-transductive nature of the learning methods which limits their ability to leverage unlabeled data in training. This paper proposes a novel framework which formulates CGRL as a convex optimization problem, enables transductive learning using both labeled and unlabeled tuples, and offers a scalable algorithm that guarantees the optimal solution and enjoys a linear time complexity with respect to the sizes of input graphs. In our experiments with a subset of DBLP publication records and an Enzyme multi-source dataset, the proposed method successfully scaled to the large cross-graph inference problem, and outperformed other representative approaches significantly.",
"title": ""
},
{
"docid": "8c1d51dd52bc14e8952d9e319eaacf16",
"text": "This paper presents an approach to text recognition in natural scene images. Unlike most existing works which assume that texts are horizontal and frontal parallel to the image plane, our method is able to recognize perspective texts of arbitrary orientations. For individual character recognition, we adopt a bag-of-key points approach, in which Scale Invariant Feature Transform (SIFT) descriptors are extracted densely and quantized using a pre-trained vocabulary. Following [1, 2], the context information is utilized through lexicons. We formulate word recognition as finding the optimal alignment between the set of characters and the list of lexicon words. Furthermore, we introduce a new dataset called StreetViewText-Perspective, which contains texts in street images with a great variety of viewpoints. Experimental results on public datasets and the proposed dataset show that our method significantly outperforms the state-of-the-art on perspective texts of arbitrary orientations.",
"title": ""
},
{
"docid": "2aa492360133f8020abc3d02ec328a4a",
"text": "This paper conducts a performance analysis of two popular private blockchain platforms, Hyperledger Fabric and Ethereum (private deployment), to assess the performance and limitations of these state-of-the-art platforms. Blockchain, a decentralized transaction and data management technology, is said to be the technology that will have similar impacts as the Internet had on people's lives. Many industries have become interested in adopting blockchain in their IT systems, but scalability is an often- cited concern of current blockchain technology. Therefore, the goals of this preliminary performance analysis are twofold. First, a methodology for evaluating a blockchain platform is developed. Second, the analysis results are presented to inform practitioners in making decisions regarding adoption of blockchain technology in their IT systems. The experimental results, based on varying number of transactions, show that Hyperledger Fabric consistently outperforms Ethereum across all evaluation metrics which are execution time, latency and throughput. Additionally, both platforms are still not competitive with current database systems in term of performances in high workload scenarios.",
"title": ""
},
{
"docid": "16814284bc8ab287b8add1bf8930fee7",
"text": "It is cumbersome to write machine learning and graph algorithms in data-parallel models such as MapReduce and Dryad. We observe that these algorithms are based on matrix computations and, hence, are inefficient to implement with the restrictive programming and communication interface of such frameworks.\n In this paper we show that array-based languages such as R [3] are suitable for implementing complex algorithms and can outperform current data parallel solutions. Since R is single-threaded and does not scale to large datasets, we have built Presto, a distributed system that extends R and addresses many of its limitations. Presto efficiently shares sparse structured data, can leverage multi-cores, and dynamically partitions data to mitigate load imbalance. Our results show the promise of this approach: many important machine learning and graph algorithms can be expressed in a single framework and are substantially faster than those in Hadoop and Spark.",
"title": ""
},
{
"docid": "53dabbc33a041872783a109f953afd0f",
"text": "We present an analysis of parser performance on speech data, comparing word type and token frequency distributions with written data, and evaluating parse accuracy by length of input string. We find that parser performance tends to deteriorate with increasing length of string, more so for spoken than for written texts. We train an alternative parsing model with added speech data and demonstrate improvements in accuracy on speech-units, with no deterioration in performance on written text.",
"title": ""
},
{
"docid": "cd70cc8378fcfd5e4fdb06d62e3a7135",
"text": "Omni-directional visual content is a form of representing graphical and cinematic media content which provides subjects with the ability to freely change their direction of view. Along with virtual reality, omnidirectional imaging is becoming a very important type of the modern media content. This brings new challenges to the omnidirectional visual content processing, especially in the field of compression and quality evaluation. More specifically, the ability to assess quality of omnidirectional images in reliable manner is a crucial step to provide a rich quality of immersive experience. In this paper we introduce a testbed suitable for subjective evaluations of omnidirectional visual contents. We also show the results of a conducted pilot experiment to illustrate the applicability of the proposed testbed.",
"title": ""
},
{
"docid": "868c3c6de73d53f54ca6090e9559007f",
"text": "To generate useful summarization of data while maintaining privacy of sensitive information is a challenging task, especially in the big data era. The privacy-preserving principal component algorithm proposed in [1] is a promising approach when a low rank data summarization is desired. However, the analysis in [1] is limited to the case of a single principal component, which makes use of bounds on the vector-valued Bingham distribution in the unit sphere. By exploring the non-commutative structure of data matrices in the full Stiefel manifold, we extend the analysis to an arbitrary number of principal components. Our results are obtained by analyzing the asymptotic behavior of the matrix-variate Bingham distribution using tools from random matrix theory.",
"title": ""
},
{
"docid": "eeb19aa678342a2ff327283537d22f87",
"text": "We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.",
"title": ""
},
{
"docid": "b58c248a9da827ce3286be0a31b934fd",
"text": "Requirement Engineering (RE) plays an important role in the success of software development life cycle. As RE is the starting point of the life cycle, any changes in requirements will be costly and time consuming. Failure in determining accurate requirements leads to errors in specifications and therefore to a mal system architecture. In addition, most of software development environments are characterized by user requests to change some requirements.Scrum as one of agile development methods that gained a great attention because of its ability to deal with the changing environments. This paper presents and discusses the current situation of RE activities in Scrum, how Scrum benefits from RE techniques and future challenges in this respect.",
"title": ""
},
{
"docid": "d2430788229faccdeedd080b97d1741c",
"text": "Potentially, empowerment has much to offer health promotion. However, some caution needs to be exercised before the notion is wholeheartedly embraced as the major goal of health promotion. The lack of a clear theoretical underpinning, distortion of the concept by different users, measurement ambiguities, and structural barriers make 'empowerment' difficult to attain. To further discussion, th is paper proposes several assertions about the definition, components, process and outcome of 'empowerment', including the need for a distinction between psychological and community empowerment. These assertions and a model of community empowerment are offered in an attempt to clarify an important issue for health promotion.",
"title": ""
},
{
"docid": "244ae725a4dffb70d71fdb5c5382d2c3",
"text": ".................................................................................................................................... i Acknowledgements ................................................................................................................. iii List of Abbreviations .............................................................................................................. vi List of Figures ........................................................................................................................ vii List of Tables ......................................................................................................................... viii",
"title": ""
},
{
"docid": "5213aa65c5a291f0839046607dcf5f6c",
"text": "The distribution and mobility of chromium in the soils and sludge surrounding a tannery waste dumping area was investigated to evaluate its vertical and lateral movement of operational speciation which was determined in six steps to fractionate the material in the soil and sludge into (i) water soluble, (ii) exchangeable, (iii) carbonate bound, (iv) reducible, (v) oxidizable, and (vi) residual phases. The present study shows that about 63.7% of total chromium is mobilisable, and 36.3% of total chromium is nonbioavailable in soil, whereas about 30.2% of total chromium is mobilisable, and 69.8% of total chromium is non-bioavailable in sludge. In contaminated sites the concentration of chromium was found to be higher in the reducible phase in soils (31.3%) and oxidisable phases in sludge (56.3%) which act as the scavenger of chromium in polluted soils. These results also indicate that iron and manganese rich soil can hold chromium that will be bioavailable to plants and biota. Thus, results of this study can indicate the status of bioavailable of chromium in this area, using sequential extraction technique. So a suitable and proper management of handling tannery sludge in the said area will be urgently needed to the surrounding environment as well as ecosystems.",
"title": ""
},
{
"docid": "96718ecc3de9cc1b719a49cc2092f6f7",
"text": "n-gram statistical language model has been successfully applied to capture programming patterns to support code completion and suggestion. However, the approaches using n-gram face challenges in capturing the patterns at higher levels of abstraction due to the mismatch between the sequence nature in n-grams and the structure nature of syntax and semantics in source code. This paper presents GraLan, a graph-based statistical language model and its application in code suggestion. GraLan can learn from a source code corpus and compute the appearance probabilities of any graphs given the observed (sub)graphs. We use GraLan to develop an API suggestion engine and an AST-based language model, ASTLan. ASTLan supports the suggestion of the next valid syntactic template and the detection of common syntactic templates. Our empirical evaluation on a large corpus of open-source projects has shown that our engine is more accurate in API code suggestion than the state-of-the-art approaches, and in 75% of the cases, it can correctly suggest the API with only five candidates. ASTLan also has high accuracy in suggesting the next syntactic template and is able to detect many useful and common syntactic templates.",
"title": ""
},
{
"docid": "b4444a17513770702a389d0b9a373ef6",
"text": "The cluster between Internet of Things (IoT) and social networks (SNs) enables the connection of people to the ubiquitous computing universe. In this framework, the information coming from the environment is provided by the IoT, and the SN brings the glue to allow human-to-device interactions. This paper explores the novel paradigm for ubiquitous computing beyond IoT, denoted by Social Internet of Things (SIoT). Although there have been early-stage studies in social-driven IoT, they merely use one or some properties of SIoT to improve a number of specific performance variables. Therefore, this paper first addresses a complete view on SIoT and key perspectives to envision the real ubiquitous computing. Thereafter, a literature review is presented along with the evolutionary history of IoT research from Intranet of Things to SIoT. Finally, this paper proposes a generic SIoT architecture and presents a discussion about enabling technologies, research challenges, and open issues.",
"title": ""
},
{
"docid": "138fc7af52066e890b45afd96debbe91",
"text": "We present a general scheme for analyzing the performance of a generic localization algorithm for multilateration (MLAT) systems (or for other distributed sensor, passive localization technology). MLAT systems are used for airport surface surveillance and are based on time difference of arrival measurements of Mode S signals (replies and 1,090 MHz extended squitter, or 1090ES). In the paper, we propose to consider a localization algorithm as composed of two components: a data model and a numerical method, both being properly defined and described. In this way, the performance of the localization algorithm can be related to the proper combination of statistical and numerical performances. We present and review a set of data models and numerical methods that can describe most localization algorithms. We also select a set of existing localization algorithms that can be considered as the most relevant, and we describe them under the proposed classification. We show that the performance of any localization algorithm has two components, i.e., a statistical one and a numerical one. The statistical performance is related to providing unbiased and minimum variance solutions, while the numerical one is related to ensuring the convergence of the solution. Furthermore, we show that a robust localization (i.e., statistically and numerI. A. Mantilla-Gaviria · J. V. Balbastre-Tejedor Instituto ITACA, Universidad Politécnica de Valencia, Camino de Vera S/N, 46022 Edificio 8G, Acceso B, Valencia, Spain e-mail: [email protected] J. V. Balbastre-Tejedor e-mail: [email protected] M. Leonardi · G. Galati (B) DIE, Tor Vergata University, Via del Politecnico 1, 00133 Rome, Italy e-mail: [email protected]; [email protected] M. Leonardi e-mail: [email protected] ically efficient) strategy, for airport surface surveillance, has to be composed of two specific kind of algorithms. Finally, an accuracy analysis, by using real data, is performed for the analyzed algorithms; some general guidelines are drawn and conclusions are provided.",
"title": ""
},
{
"docid": "6adf6cd920abf2987be8963b2f1641d6",
"text": "This paper presents a diffusion method for generating terrains from a set of parameterized curves that characterize the landform features such as ridge lines, riverbeds or cliffs. Our approach provides the user with an intuitive vector-based feature-oriented control over the terrain. Different types of constraints (such as elevation, slope angle and roughness) can be attached to the curves so as to define the shape of the terrain. The terrain is generated from the curve representation by using an efficient multigrid diffusion algorithm. The algorithm can be efficiently implemented on the GPU, which allows the user to interactively create a vast variety of landscapes.",
"title": ""
},
{
"docid": "5ea9810117c2bf6fd036a9a544af5ffb",
"text": "Graph convolutional networks (GCNs) have been widely used for classifying graph nodes in the semi-supervised setting. Previous work have shown that GCNs are vulnerable to the perturbation on adjacency and feature matrices of existing nodes. However, it is unrealistic to change existing nodes in many applications, such as existing users in social networks. In this paper, we design algorithms to attack GCNs by adding fake nodes. A greedy algorithm is proposed to generate adjacency and feature matrices of fake nodes, aiming to minimize the classification accuracy on the existing nodes. In addition, we introduce a discriminator to classify fake nodes from real nodes, and propose a Greedy-GAN attack to simultaneously update the discriminator and the attacker, to make fake nodes indistinguishable to the real ones. Our non-targeted attack decreases the accuracy of GCN down to 0.10, and our targeted attack reaches a success rate of 99% on the whole datasets, and 94% on average for attacking a single target node.",
"title": ""
},
{
"docid": "84c362cb2d4a737d7ea62d85b9144722",
"text": "This paper considers mixed, or random coeff icients, multinomial logit (MMNL) models for discrete response, and establishes the following results: Under mild regularity conditions, any discrete choice model derived from random utilit y maximization has choice probabiliti es that can be approximated as closely as one pleases by a MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairl y eff icient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis. Acknowledgments: Both authors are at the Department of Economics, University of Cali fornia, Berkeley CA 94720-3880. Correspondence should be directed to [email protected]. We are indebted to the E. Morris Cox fund for research support, and to Moshe Ben-Akiva, David Brownstone, Denis Bolduc, Andre de Palma, and Paul Ruud for useful comments. This paper was first presented at the University of Paris X in June 1997.",
"title": ""
},
{
"docid": "02c204377e279bf7edeba4c130ae58d1",
"text": "Because of cloud computing's high degree of polymerization calculation mode, it can't give full play to the resources of the edge device such as computing, storage, etc. Fog computing can improve the resource utilization efficiency of the edge device, and solve the problem about service computing of the delay-sensitive applications. This paper researches on the framework of the fog computing, and adopts Cloud Atomization Technology to turn physical nodes in different levels into virtual machine nodes. On this basis, this paper uses the graph partitioning theory to build the fog computing's load balancing algorithm based on dynamic graph partitioning. The simulation results show that the framework of the fog computing after Cloud Atomization can build the system network flexibly, and dynamic load balancing mechanism can effectively configure system resources as well as reducing the consumption of node migration brought by system changes.",
"title": ""
},
{
"docid": "fe2bc36e704b663c8b9a72e7834e6c7e",
"text": "Driven by deep learning, there has been a surge of specialized processors for matrix multiplication, referred to as Tensor Core Units (TCUs). These TCUs are capable of performing matrix multiplications on small matrices (usually 4× 4 or 16×16) to accelerate the convolutional and recurrent neural networks in deep learning workloads. In this paper we leverage NVIDIA’s TCU to express both reduction and scan with matrix multiplication and show the benefits — in terms of program simplicity, efficiency, and performance. Our algorithm exercises the NVIDIA TCUs which would otherwise be idle, achieves 89%− 98% of peak memory copy bandwidth, and is orders of magnitude faster (up to 100× for reduction and 3× for scan) than state-of-the-art methods for small segment sizes — common in machine learning and scientific applications. Our algorithm achieves this while decreasing the power consumption by up to 22% for reduction and 16% for scan.",
"title": ""
}
] |
scidocsrr
|
566a6f2de0beccb2a5ca94a42ef6305a
|
Interval-based Queries over Multiple Streams with Missing Timestamps
|
[
{
"docid": "f13000c4870a85e491f74feb20f9b2d4",
"text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.",
"title": ""
},
{
"docid": "86b12f890edf6c6561536a947f338feb",
"text": "Looking for qualified reading resources? We have process mining discovery conformance and enhancement of business processes to check out, not only review, yet also download them or even read online. Discover this great publication writtern by now, simply right here, yeah just right here. Obtain the data in the sorts of txt, zip, kindle, word, ppt, pdf, as well as rar. Once again, never ever miss out on to read online as well as download this publication in our site here. Click the link. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files.",
"title": ""
}
] |
[
{
"docid": "266114ecdd54ce1c5d5d0ec42c04ed4d",
"text": "A multiscale image registration technique is presented for the registration of medical images that contain significant levels of noise. An overview of the medical image registration problem is presented, and various registration techniques are discussed. Experiments using mean squares, normalized correlation, and mutual information optimal linear registration are presented that determine the noise levels at which registration using these techniques fails. Further experiments in which classical denoising algorithms are applied prior to registration are presented, and it is shown that registration fails in this case for significantly high levels of noise, as well. The hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [20] is presented, and accurate registration of noisy images is achieved by obtaining a hierarchical multiscale decomposition of the images and registering the resulting components. This approach enables successful registration of images that contain noise levels well beyond the level at which ordinary optimal linear registration fails. Image registration experiments demonstrate the accuracy and efficiency of the multiscale registration technique, and for all noise levels, the multiscale technique is as accurate as or more accurate than ordinary registration techniques.",
"title": ""
},
{
"docid": "9aa0fef27776e833b755ee8549ba820b",
"text": "CNNs have made an undeniable impact on computer vision through the ability to learn high-capacity models with large annotated training sets. One of their remarkable properties is the ability to transfer knowledge from a large source dataset to a (typically smaller) target dataset. This is usually accomplished through fine-tuning a fixed-size network on new target data. Indeed, virtually every contemporary visual recognition system makes use of fine-tuning to transfer knowledge from ImageNet. In this work, we analyze what components and parameters change during fine-tuning, and discover that increasing model capacity allows for more natural model adaptation through fine-tuning. By making an analogy to developmental learning, we demonstrate that growing a CNN with additional units, either by widening existing layers or deepening the overall network, significantly outperforms classic fine-tuning approaches. But in order to properly grow a network, we show that newly-added units must be appropriately normalized to allow for a pace of learning that is consistent with existing units. We empirically validate our approach on several benchmark datasets, producing state-of-the-art results.",
"title": ""
},
{
"docid": "ff04d4c2b6b39f53e7ddb11d157b9662",
"text": "Chiu proposed a clustering algorithm adjusting the numeric feature weights automatically for k-anonymity implementation and this approach gave a better clustering quality over the traditional generalization and suppression methods. In this paper, we propose an improved weighted-feature clustering algorithm which takes the weight of categorical attributes and the thesis of optimal k-partition into consideration. To show the effectiveness of our method, we do some information loss experiments to compare it with greedy k-member clustering algorithm.",
"title": ""
},
{
"docid": "7b2b69429f821c996c3a0cc605253368",
"text": "Real-time video and image processing is used in a wide variety of applications from video surveillance and traffic management to medical imaging applications. These operations typically require very high computation power. Standard definition NTSC video is digitized at 720x480 or full D1 resolution at 30 frames per second, which results in a 31MHz pixel rate. With multiple adaptive convolution stages to detect or eliminate different features within the image, the filtering operation receives input data at a rate of over 1 giga samples per second. Coupled with new high-resolution standards and multi-channel environments, processing requirements can be even higher. Achieving this level of processing power using programmable DSP requires multiple processors. A single FPGA with an embedded soft processor can deliver the requisite level of computing power more cost-effectively, while simplifying board complexity.",
"title": ""
},
{
"docid": "538406cd49ca1add375e287354908740",
"text": "A broader approach to research in huj man development is proposed that focuses on the pro\\ gressive accommodation, throughout the life span, between the growing human organism and the changing environments in which it actually lives and grows. \\ The latter include not only the immediate settings containing the developing person but also the larger social contexts, both formal and informal, in which these settings are embedded. In terms of method, the approach emphasizes the use of rigorousj^d^igned exp_erjments, both naturalistic and contrived, beginning in the early stages of the research process. The changing relation between person and environment is conceived in systems terms. These systems properties are set forth in a series of propositions, each illustrated by concrete research examples. This article delineates certain scientific limitations in prevailing approaches to research on human development and suggests broader perspectives in theory, method, and substance. The point of departure for this undertaking is the view that, especially in recent decades, research in human development has pursued a divided course, with each direction tangential to genuine scientific progress. To corrupt a contemporary metaphor, we risk being caught between a rock and a soft place. The rock is rigor, and the soft place relevance. As I have argued elsewhere (Bronfenbrenner, 1974; Note 1), the emphasis on rigor has led to experiments that are elegantly designed but often limited in scope. This limitation derives from the fact that many of these experiments involve situations that are unfamiliar, artificial, and short-lived and that call for unusual behaviors that are difficult to generalize to other settings. From this perspective, it can be said that much of contemporary developmental psychology is the science of the strange behavior of children in strange situations with strange adults for the briefest possible periods of time.* Partially in reaction to such shortcomings, other workers have stressed the need for social relevance in research, but often with indifference to or open rejection of rigor. In its more extreme manifestations, this trend has taken the form of excluding the scientists themselves from the research process. For example, one major foundation has recently stated as its new policy that, henceforth, grants for research will be awarded only to persons who are themselves the victims of social injusticeA Other, less radical expressions of this trend in-1 volve reliance on existential approaches in which 1 \"experience\" takes the place of observation and I analysis is foregone in favor of a more personalized I and direct \"understanding\" gained through inti\\ mate involvement in the field situation. More, N. common, and more scientifically defensible, is an /\" emphasis on naturalistic observation, but with the / stipulation that it be unguided by any hypotheses i formulated in advance and uncontaminated by V structured experimental designs imposed prior to /",
"title": ""
},
{
"docid": "609c3a75308eb951079373feb88432ae",
"text": "We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie one from Wikipedia and the other from IMDb written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version. This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating external background knowledge. Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-ofthe-art neural RC models which have achieved near human performance on the SQuAD dataset (Rajpurkar et al., 2016b), even when coupled with traditional NLP techniques to address the challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other RC datasets to explore novel neural approaches for studying language understanding.",
"title": ""
},
{
"docid": "5ddbaa58635d706215ae3d61fe13e46c",
"text": "Recent years have seen growing interest in the problem of sup er-resolution restoration of video sequences. Whereas in the traditional single image re storation problem only a single input image is available for processing, the task of reconst ructing super-resolution images from multiple undersampled and degraded images can take adv antage of the additional spatiotemporal data available in the image sequence. In particula r, camera and scene motion lead to frames in the source video sequence containing similar, b ut not identical information. The additional information available in these frames make poss ible reconstruction of visually superior frames at higher resolution than that of the original d ta. In this paper we review the current state of the art and identify promising directions f or future research. The authors are with the Laboratory for Image and Signal Analysis (LIS A), University of Notre Dame, Notre Dame, IN 46556. E-mail: [email protected] .",
"title": ""
},
{
"docid": "af928cd35b6b33ce1cddbf566f63e607",
"text": "Machine Learning has been the quintessential solution for many AI problems, but learning is still heavily dependent on the specific training data. Some learning models can be incorporated with a prior knowledge in the Bayesian set up, but these learning models do not have the ability to access any organised world knowledge on demand. In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task using attention mechanism. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method is highly scalable to the amount of prior information that has to be processed and can be applied to any generic NLP task. Using this method we show significant improvement in performance for text classification with News20, DBPedia datasets and natural language inference with Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained well with substantially less amount of labeled training data, when it has access to organised world knowledge in the form of knowledge graph.",
"title": ""
},
{
"docid": "bb5092ba6da834b3c5ebd8483ab5e9f0",
"text": "Wireless Sensor Networks (WSNs) are a promising technology with applications in many areas such as environment monitoring, agriculture, the military field or health-care, to name but a few. Unfortunately, the wireless connectivity of the sensors opens doors to many security threats, and therefore, cryptographic solutions must be included on-board these devices and preferably in their design phase. In this vein, Random Number Generators (RNGs) play a critical role in security solutions such as authentication protocols or key-generation algorithms. In this article is proposed an avant-garde proposal based on the cardiac signal generator we carry with us (our heart), which can be recorded with medical or even low-cost sensors with wireless connectivity. In particular, for the extraction of random bits, a multi-level decomposition has been performed by wavelet analysis. The proposal has been tested with one of the largest and most publicly available datasets of electrocardiogram signals (202 subjects and 24 h of recording time). Regarding the assessment, the proposed True Random Number Generator (TRNG) has been tested with the most demanding batteries of statistical tests (ENT, DIEHARDERand NIST), and this has been completed with a bias, distinctiveness and performance analysis. From the analysis conducted, it can be concluded that the output stream of our proposed TRNG behaves as a random variable and is suitable for securing WSNs.",
"title": ""
},
{
"docid": "fd5c5ff7c97b9d6b6bfabca14631b423",
"text": "The composition and activity of the gut microbiota codevelop with the host from birth and is subject to a complex interplay that depends on the host genome, nutrition, and life-style. The gut microbiota is involved in the regulation of multiple host metabolic pathways, giving rise to interactive host-microbiota metabolic, signaling, and immune-inflammatory axes that physiologically connect the gut, liver, muscle, and brain. A deeper understanding of these axes is a prerequisite for optimizing therapeutic strategies to manipulate the gut microbiota to combat disease and improve health.",
"title": ""
},
{
"docid": "e38f369fb206e1a8034ce00a0ec25869",
"text": "A large body of research work and efforts have been focused on detecting fake news and building online fact-check systems in order to debunk fake news as soon as possible. Despite the existence of these systems, fake news is still wildly shared by online users. It indicates that these systems may not be fully utilized. After detecting fake news, what is the next step to stop people from sharing it? How can we improve the utilization of these fact-check systems? To fill this gap, in this paper, we (i) collect and analyze online users called guardians, who correct misinformation and fake news in online discussions by referring fact-checking URLs; and (ii) propose a novel fact-checking URL recommendation model to encourage the guardians to engage more in fact-checking activities. We found that the guardians usually took less than one day to reply to claims in online conversations and took another day to spread verified information to hundreds of millions of followers. Our proposed recommendation model outperformed four state-of-the-art models by 11%~33%. Our source code and dataset are available at http://web.cs.wpi.edu/~kmlee/data/gau.html.",
"title": ""
},
{
"docid": "71723d953f1f4ace7c2501fd2c4e5a9f",
"text": "Among all the unique characteristics of a human being, handwriting carries the richest information to gain the insights into the physical, mental and emotional state of the writer. Graphology is the art of studying and analysing handwriting, a scientific method used to determine a person’s personality by evaluating various features from the handwriting. The prime features of handwriting such as the page margins, the slant of the alphabets, the baseline etc. can tell a lot about the individual. To make this method more efficient and reliable, introduction of machines to perform the feature extraction and mapping to various personality traits can be done. This compliments the graphologists, and also increases the speed of analysing handwritten samples. Various approaches can be used for this type of computer aided graphology. In this paper, a novel approach of machine learning technique to implement the automated handwriting analysis tool is discussed.",
"title": ""
},
{
"docid": "3940ccc6f409140582680de1fdc0f610",
"text": "Fermentation of food components by microbes occurs both during certain food production processes and in the gastro-intestinal tract. In these processes specific compounds are produced that originate from either biotransformation reactions or biosynthesis, and that can affect the health of the consumer. In this review, we summarize recent advances highlighting the potential to improve the nutritional status of a fermented food by rational choice of food-fermenting microbes. The vast numbers of microbes residing in the human gut, the gut microbiota, also give rise to a broad array of health-active molecules. Diet and functional foods are important modulators of the gut microbiota activity that can be applied to improve host health. A truly multidisciplinary approach is required to increase our understanding of the molecular mechanisms underlying health beneficial effects that arise from the interaction of diet, microbes and the human body.",
"title": ""
},
{
"docid": "eb4284f45dfe66e4195de12d13f2decc",
"text": "An entry of X is denoted by Xi1,...,id where each index iμ ∈ {1, . . . , nμ} refers to the μth mode of the tensor for μ = 1, . . . , d. For simplicity, we will assume that X has real entries, but it is of course possible to define complex tensors or, more generally, tensors over arbitrary fields. A wide variety of applications lead to problems where the data or the desired solution can be represented by a tensor. In this survey, we will focus on tensors that are induced by the discretization of a multivariate function; we refer to the survey [169] and to the books [175, 241] for the treatment of tensors containing observed data. The simplest way a given multivariate function f(x1, x2, . . . , xd) on a tensor product domain Ω = [0, 1] leads to a tensor is by sampling f on a tensor grid. In this case, each entry of the tensor contains the function value at the corresponding position in the grid. The function f itself may, for example, represent the solution to a high-dimensional partial differential equation (PDE). As the order d increases, the number of entries in X increases exponentially for constant n = n1 = · · · = nd. This so called curse of dimensionality prevents the explicit storage of the entries except for very small values of d. Even for n = 2, storing a tensor of order d = 50 would require 9 petabyte! It is therefore essential to approximate tensors of higher order in a compressed scheme, for example, a low-rank tensor decomposition. Various such decompositions have been developed, see Section 2. An important difference to tensors containing observed data, a tensor X induced by a function is usually not given directly but only as the solution of some algebraic equation, e.g., a linear system or eigenvalue problem. This requires the development of solvers for such equations working within the compressed storage scheme. Such algorithms are discussed in Section 3. The range of applications of low-rank tensor techniques is quickly expanding. For example, they have been used for addressing:",
"title": ""
},
{
"docid": "2c5e280525168d71d1a48fec047b5a23",
"text": "This paper presents the implementation of four channel Electromyography (EMG) signal acquisition system for acquiring the EMG signal of the lower limb muscles during ankle joint movements. Furthermore, some post processing and statistical analysis for the recorded signal were presented. Four channels were implemented using instrumentation amplifier (INA114) for pre-amplification stage then the amplified signal subjected to the band pass filter to eliminate the unwanted signals. Operational amplifier (OPA2604) was involved for the main amplification stage to get the output signal in volts. The EMG signals were detected during movement of the ankle joint of a healthy subject. Then the signal was sampled at the rate of 2 kHz using NI6009 DAQ and Labview used for displaying and storing the acquired signal. For EMG temporal representation, mean absolute value (MAV) analysis algorithm is used to investigate the level of the muscles activity. This data will be used in future as a control input signal to drive the ankle joint exoskeleton robot.",
"title": ""
},
{
"docid": "1ee444fda98b312b0462786f5420f359",
"text": "After years of banning consumer devices (e.g., iPads and iPhone) and applications (e.g., DropBox, Evernote, iTunes) organizations are allowing employees to use their consumer tools in the workplace. This IT consumerization phenomenon will have serious consequences on IT departments which have historically valued control, security, standardization and support (Harris et al. 2012). Based on case studies of three organizations in different stages of embracing IT consumerization, this study identifies the conflicts IT consumerization creates for IT departments. All three organizations experienced similar goal and behavior conflicts, while identity conflict varied depending upon the organizations’ stage implementing consumer tools (e.g., embryonic, initiating or institutionalized). Theoretically, this study advances IT consumerization research by applying a role conflict perspective to understand consumerization’s impact on the IT department.",
"title": ""
},
{
"docid": "fee574207e3985ea3c697f831069fa8b",
"text": "This paper focuses on the utilization of wireless networkin g in the robotics domain. Many researchers have already equipped their robot s with wireless communication capabilities, stimulated by the observation that multi-robot systems tend to have several advantages over their single-robot counterpa r s. Typically, this integration of wireless communication is tackled in a quite pragmat ic manner, only a few authors presented novel Robotic Ad Hoc Network (RANET) prot oc ls that were designed specifically with robotic use cases in mind. This is in harp contrast with the domain of vehicular ad hoc networks (VANET). This observati on is the starting point of this paper. If the results of previous efforts focusing on VANET protocols could be reused in the RANET domain, this could lead to rapid progre ss in the field of networked robots. To investigate this possibility, this paper rovides a thorough overview of the related work in the domain of robotic and vehicular ad h oc networks. Based on this information, an exhaustive list of requirements is d efined for both types. It is concluded that the most significant difference lies in the fact that VANET protocols are oriented towards low throughput messaging, while R ANET protocols have to support high throughput media streaming as well. Althoug h not always with equal importance, all other defined requirements are valid for bot h protocols. This leads to the conclusion that cross-fertilization between them is an appealing approach for future RANET research. To support such developments, this pap er concludes with the definition of an appropriate working plan.",
"title": ""
},
{
"docid": "38e9aa4644edcffe87dd5ae497e99bbe",
"text": "Hashtags, created by social network users, have gained a huge popularity in recent years. As a kind of metatag for organizing information, hashtags in online social networks, especially in Instagram, have greatly facilitated users' interactions. In recent years, academia starts to use hashtags to reshape our understandings on how users interact with each other. #like4like is one of the most popular hashtags in Instagram with more than 290 million photos appended with it, when a publisher uses #like4like in one photo, it means that he will like back photos of those who like this photo. Different from other hashtags, #like4like implies an interaction between a photo's publisher and a user who likes this photo, and both of them aim to attract likes in Instagram. In this paper, we study whether #like4like indeed serves the purpose it is created for, i.e., will #like4like provoke more likes? We first perform a general analysis of #like4like with 1.8 million photos collected from Instagram, and discover that its quantity has dramatically increased by 1,300 times from 2012 to 2016. Then, we study whether #like4like will attract likes for photo publishers; results show that it is not #like4like but actually photo contents attract more likes, and the lifespan of a #like4like photo is quite limited. In the end, we study whether users who like #like4like photos will receive likes from #like4like publishers. However, results show that more than 90% of the publishers do not keep their promises, i.e., they will not like back others who like their #like4like photos; and for those who keep their promises, the photos which they like back are often randomly selected.",
"title": ""
},
{
"docid": "a33147bd85b4ecf4f2292e4406abfc26",
"text": "Accident detection systems help reduce fatalities stemming from car accidents by decreasing the response time of emergency responders. Smartphones and their onboard sensors (such as GPS receivers and accelerometers) are promising platforms for constructing such systems. This paper provides three contributions to the study of using smartphone-based accident detection systems. First, we describe solutions to key issues associated with detecting traffic accidents, such as preventing false positives by utilizing mobile context information and polling onboard sensors to detect large accelerations. Second, we present the architecture of our prototype smartphone-based accident detection system and empirically analyze its ability to resist false positives as well as its capabilities for accident reconstruction. Third, we discuss how smartphone-based accident detection can reduce overall traffic congestion and increase the preparedness of emergency responders.",
"title": ""
},
{
"docid": "33e6abc5ed78316cc03dae8ba5a0bfc8",
"text": "In this paper, we present a deep learning architecture which addresses the problem of 3D semantic segmentation of unstructured point clouds. Compared to previous work, we introduce grouping techniques which define point neighborhoods in the initial world space and the learned feature space. Neighborhoods are important as they allow to compute local or global point features depending on the spatial extend of the neighborhood. Additionally, we incorporate dedicated loss functions to further structure the learned point feature space: the pairwise distance loss and the centroid loss. We show how to apply these mechanisms to the task of 3D semantic segmentation of point clouds and report state-of-the-art performance on indoor and outdoor datasets. ar X iv :1 81 0. 01 15 1v 2 [ cs .C V ] 8 D ec 2 01 8 2 F. Engelmann et al.",
"title": ""
}
] |
scidocsrr
|
1cc9a70cfaeccb02ce0268b6343f06d0
|
Formative Assessment: A Critical Review
|
[
{
"docid": "d4d309d48404fb498c4a7c716804a80a",
"text": "There has been a recent upsurge of interest in exploring how choices of methods and timing of instruction affect the rate and persistence of learning. The authors review three lines of experimentation—all conducted using educationally relevant materials and time intervals— that call into question important aspects of common instructional practices. First, research reveals that testing, although typically used merely as an assessment device, directly potentiates learning and does so more effectively than other modes of study. Second, recent analysis of the temporal dynamics of learning show that learning is most durable when study time is distributed over much greater periods of time than is customary in educational settings. Third, the inter-leaving of different types of practice problems (which is quite rare in math and science texts) markedly improves learning. The authors conclude by discussing the frequently observed dissociation between people's perceptions of which learning procedures are most effective and which procedures actually promote durable learning. T he experimental study of human learning and memory began more than 100 years ago and has developed into a major enterprise in behavioral science. Although this work has revealed some striking laboratory phenomena and elegant quantitative principles, it is disappointing that it has not thus far given teachers, learners, and curriculum designers much in the way of concrete and nonobvious advice that they can use to make learning more efficient and durable. In the past several years, however, there has been a new burst of effort by researchers to identify and test concrete principles that have this potential, yielding a slew of recommended strategies that have been listed in recent reports (e. Some of the most promising results involve the effects of testing on learning and different ways of scheduling study events. Those skeptical of behavioral research might assume that principles of learning would already be fairly obvious to anyone who has been a student, yet the results of recent experimentation challenge some of the most widely used study practices. We discuss three topics, focusing on the effects of testing, the role of temporal spacing, and the effects of interleaving different types of materials. Tests of student mastery of content material are customarily viewed as assessment devices, used to provide incentives for students (and in some cases teachers and school systems as well). However, memory research going back some years has revealed that a test that requires a learner to retrieve some piece of …",
"title": ""
},
{
"docid": "a9c120f7d3d71fb8f1d35ded1bce17ea",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aera.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
}
] |
[
{
"docid": "72e5b92632824d3633539727125763bc",
"text": "NB-IoT system focues on indoor coverage, low cost, long battery life, and enabling a large number of connected devices. The NB-IoT system in the inband mode should share the antenna with the LTE system and support mult-PRB to cover many terminals. Also, the number of used antennas should be minimized for price competitiveness. In this paper, the structure and implementation of the NB-IoT base station system will be describe.",
"title": ""
},
{
"docid": "e1b9795030dac51172c20a49113fac23",
"text": "Bin packing problems are a class of optimization problems that have numerous applications in the industrial world, ranging from efficient cutting of material to packing various items in a larger container. We consider here only rectangular items cut off an infinite strip of material as well as off larger sheets of fixed dimensions. This problem has been around for many years and a great number of publications can be found on the subject. Nevertheless, it is often difficult to reconcile a theoretical paper and practical application of it. The present work aims to create simple but, at the same time, fast and efficient algorithms, which would allow one to write high-speed and capable software that can be used in a real-time application.",
"title": ""
},
{
"docid": "c3c3add0c42f3b98962c4682a72b1865",
"text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.",
"title": ""
},
{
"docid": "d87abfd50876da09bce301831f71605f",
"text": "Recent advances in topic models have explored complicated structured distributions to represent topic correlation. For example, the pachinko allocation model (PAM) captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). While PAM provides more flexibility and greater expressive power than previous models like latent Dirichlet allocation (LDA), it is also more difficult to determine the appropriate topic structure for a specific dataset. In this paper, we propose a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP). Although the HDP can capture topic correlations defined by nested data structure, it does not automatically discover such correlations from unstructured data. By assuming an HDP-based prior for PAM, we are able to learn both the number of topics and how the topics are correlated. We evaluate our model on synthetic and real-world text datasets, and show that nonparametric PAM achieves performance matching the best of PAM without manually tuning the number of topics.",
"title": ""
},
{
"docid": "652b91b4ca941bcc53bb22a714c13b52",
"text": "As social media has permeated large parts of the population it simultaneously has become a way to reach many people e.g. with political messages. One way to efficiently reach those people is the application of automated computer programs that aim to simulate human behaviour so called social bots. These bots are thought to be able to potentially influence users’ opinion about a topic. To gain insight in the use of these bots in the run-up to the German Bundestag elections, we collected a dataset from Twitter consisting of tweets regarding a German state election in May 2017. The strategies and influence of social bots were analysed based on relevant features and network visualization. 61 social bots were identified. Possibly due to the concentration on German language as well as the elections regionality, identified bots showed no signs of collective political strategies and low to none influence. Implications are discussed.",
"title": ""
},
{
"docid": "523983cad60a81e0e6694c8d90ab9c3d",
"text": "Cognition and comportment are subserved by interconnected neural networks that allow high-level computational architectures including parallel distributed processing. Cognitive problems are not resolved by a sequential and hierarchical progression toward predetermined goals but instead by a simultaneous and interactive consideration of multiple possibilities and constraints until a satisfactory fit is achieved. The resultant texture of mental activity is characterized by almost infinite richness and flexibility. According to this model, complex behavior is mapped at the level of multifocal neural systems rather than specific anatomical sites, giving rise to brain-behavior relationships that are both localized and distributed. Each network contains anatomically addressed channels for transferring information content and chemically addressed pathways for modulating behavioral tone. This approach provides a blueprint for reexploring the neurological foundations of attention, language, memory, and frontal lobe function.",
"title": ""
},
{
"docid": "195b68a3d0d12354c256c2a1ddeb2b28",
"text": "Reinforcement learning (RL) is a popular machine learning technique that has many successes in learning how to play classic style games. Applying RL to first person shooter (FPS) games is an interesting area of research as it has the potential to create diverse behaviors without the need to implicitly code them. This paper investigates the tabular Sarsa (λ) RL algorithm applied to a purpose built FPS game. The first part of the research investigates using RL to learn bot controllers for the tasks of navigation, item collection, and combat individually. Results showed that the RL algorithm was able to learn a satisfactory strategy for navigation control, but not to the quality of the industry standard pathfinding algorithm. The combat controller performed well against a rule-based bot, indicating promising preliminary results for using RL in FPS games. The second part of the research used pretrained RL controllers and then combined them by a number of different methods to create a more generalized bot artificial intelligence (AI). The experimental results indicated that RL can be used in a generalized way to control a combination of tasks in FPS bots such as navigation, item collection, and combat.",
"title": ""
},
{
"docid": "c62742c65b105a83fa756af9b1a45a37",
"text": "This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or re-run paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.",
"title": ""
},
{
"docid": "3ae880019b1954a2de5ab0d52519caab",
"text": "We propose a simple yet effective structural patch decomposition approach for multi-exposure image fusion (MEF) that is robust to ghosting effect. We decompose an image patch into three conceptually independent components: signal strength, signal structure, and mean intensity. Upon fusing these three components separately, we reconstruct a desired patch and place it back into the fused image. This novel patch decomposition approach benefits MEF in many aspects. First, as opposed to most pixel-wise MEF methods, the proposed algorithm does not require post-processing steps to improve visual quality or to reduce spatial artifacts. Second, it handles RGB color channels jointly, and thus produces fused images with more vivid color appearance. Third and most importantly, the direction of the signal structure component in the patch vector space provides ideal information for ghost removal. It allows us to reliably and efficiently reject inconsistent object motions with respect to a chosen reference image without performing computationally expensive motion estimation. We compare the proposed algorithm with 12 MEF methods on 21 static scenes and 12 deghosting schemes on 19 dynamic scenes (with camera and object motion). Extensive experimental results demonstrate that the proposed algorithm not only outperforms previous MEF algorithms on static scenes but also consistently produces high quality fused images with little ghosting artifacts for dynamic scenes. Moreover, it maintains a lower computational cost compared with the state-of-the-art deghosting schemes.11The MATLAB code of the proposed algorithm will be made available online. Preliminary results of Section III-A [1] were presented at the IEEE International Conference on Image Processing, Canada, 2015.",
"title": ""
},
{
"docid": "24a6ad4d167290bec62a044580635aa0",
"text": "We introduce HyperLex—a data set and evaluation resource that quantifies the extent of the semantic category membership, that is, type-of relation, also known as hyponymy–hypernymy or lexical entailment (LE) relation between 2,616 concept pairs. Cognitive psychology research has established that typicality and category/class membership are computed in human semantic memory as a gradual rather than binary relation. Nevertheless, most NLP research and existing large-scale inventories of concept category membership (WordNet, DBPedia, etc.) treat category membership and LE as binary. To address this, we asked hundreds of native English speakers to indicate typicality and strength of category membership between a diverse range of concept pairs on a crowdsourcing platform. Our results confirm that category membership and LE are indeed more gradual than binary. We then compare these human judgments with the predictions of automatic systems, which reveals a huge gap between human performance and state-of-the-art LE, distributional and representation learning models, and substantial differences between the models themselves. We discuss a pathway for improving semantic models to overcome this discrepancy, and indicate future application areas for improved graded LE systems.",
"title": ""
},
{
"docid": "b4e9cfc0dbac4a5d7f76001e73e8973d",
"text": "Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target’s structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces.",
"title": ""
},
{
"docid": "8b66ffe2afae5f1f46b7803d80422248",
"text": "This paper describes the torque production capabilities of electrical machines with planar windings and presents an automated procedure for coils conductors' arrangement. The procedure has been applied on an ironless axial flux slotless permanent magnet machines having stator windings realized using printed circuit board (PCB) coils. An optimization algorithm has been implemented to find a proper arrangement of PCB traces in order to find the best compromise between the maximization of average torque and the minimization of torque ripple. A time-efficient numerical model has been developed to reduce computational load and thus make the optimization based design feasible.",
"title": ""
},
{
"docid": "59e3a7004bd2e1e75d0b1c6f6d2a67d0",
"text": "Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.",
"title": ""
},
{
"docid": "cdca4a6cb35cbc674c06465c742dfe50",
"text": "The generation of new lymphatic vessels through lymphangiogenesis and the remodelling of existing lymphatics are thought to be important steps in cancer metastasis. The past decade has been exciting in terms of research into the molecular and cellular biology of lymphatic vessels in cancer, and it has been shown that the molecular control of tumour lymphangiogenesis has similarities to that of tumour angiogenesis. Nevertheless, there are significant mechanistic differences between these biological processes. We are now developing a greater understanding of the specific roles of distinct lymphatic vessel subtypes in cancer, and this provides opportunities to improve diagnostic and therapeutic approaches that aim to restrict the progression of cancer.",
"title": ""
},
{
"docid": "2e2e8219b7870529e8ca17025190aa1b",
"text": "M multitasking competes with television advertising for consumers’ attention, but may also facilitate immediate and measurable response to some advertisements. This paper explores whether and how television advertising influences online shopping. We construct a massive data set spanning $3.4 billion in spending by 20 brands, measures of brands’ website traffic and transactions, and ad content measures for 1,224 commercials. We use a quasi-experimental design to estimate whether and how TV advertising influences changes in online shopping within two-minute pre/post windows of time. We use nonadvertising competitors’ online shopping in a difference-in-differences approach to measure the same effects in two-hour windows around the time of the ad. The findings indicate that television advertising does influence online shopping and that advertising content plays a key role. Action-focus content increases direct website traffic and sales. Information-focus and emotion-focus ad content actually reduce website traffic while simultaneously increasing purchases, with a positive net effect on sales for most brands. These results imply that brands seeking to attract multitaskers’ attention and dollars must select their advertising copy carefully.",
"title": ""
},
{
"docid": "d6976361b44aab044c563e75056744d6",
"text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).",
"title": ""
},
{
"docid": "332a30e8d03d4f8cc03e7ab9b809ec9f",
"text": "The study of electromyographic (EMG) signals has gained increased attention in the last decades since the proper analysis and processing of these signals can be instrumental for the diagnosis of neuromuscular diseases and the adaptive control of prosthetic devices. As a consequence, various pattern recognition approaches, consisting of different modules for feature extraction and classification of EMG signals, have been proposed. In this paper, we conduct a systematic empirical study on the use of Fractal Dimension (FD) estimation methods as feature extractors from EMG signals. The usage of FD as feature extraction mechanism is justified by the fact that EMG signals usually show traces of selfsimilarity and by the ability of FD to characterize and measure the complexity inherent to different types of muscle contraction. In total, eight different methods for calculating the FD of an EMG waveform are considered here, and their performance as feature extractors is comparatively assessed taking into account nine well-known classifiers of different types and complexities. Results of experiments conducted on a dataset involving seven distinct types of limb motions are reported whereby we could observe that the normalized version of the Katz's estimation method and the Hurst exponent significantly outperform the others according to a class separability measure and five well-known accuracy measures calculated over the induced classifiers. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "27d05b4a9a766e17b6a49879e983f93c",
"text": "Data mining of Social networks is a new but interesting field within Data Mining. We leverage the power of sentiment analysis to detect bullying instances in Twitter. We are interested in understanding bullying in social networks, especially in Twitter. To best of our understanding, there is no previous work on using sentiment analysis to detect bullying instances. Our training data set consists of Twitter messages containing commonly used terms of abuse, which are considered noisy labels. These data are publicly available and can be easily retrieved by directly accessing the Twitter streaming API. For the classification of Twitter messages, also known as tweets, we use the Naïve Bayes classifier. It‟s accuracy was close to 70% when trained with “commonly terms of abuse” data. The main contribution of this paper is the idea of using sentiment analysis to detect bullying instances.",
"title": ""
},
{
"docid": "93bc26aa1a020f178692f40f4542b691",
"text": "The \"Fast Fourier Transform\" has now been widely known for about a year. During that time it has had a major effect on several areas of computing, the most striking example being techniques of numerical convolution, which have been completely revolutionized. What exactly is the \"Fast Fourier Transform\"?",
"title": ""
},
{
"docid": "745451b3ca65f3388332232b370ea504",
"text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.",
"title": ""
}
] |
scidocsrr
|
5c0abbfca7d7300f5c954314f733aa0d
|
Maturity assessment models : a design science research approach
|
[
{
"docid": "7ca5eac9be1ba8c1738862f24dd707d2",
"text": "This essay develops the philosophical foundations for design research in the Technology of Information Systems (TIS). Traditional writings on philosophy of science cannot fully describe this mode of research, which dares to intervene and improve to realize alternative futures instead of explaining or interpreting the past to discover truth. Accordingly, in addition to philosophy of science, the essay draws on writings about the act of designing, philosophy of technology and the substantive (IS) discipline. I define design research in TIS as in(ter)vention in the representational world defined by the hierarchy of concerns following semiotics. The complementary nature of the representational (internal) and real (external) environments provides the basis to articulate the dual ontological and epistemological bases. Understanding design research in TIS in this manner suggests operational principles in the internal world as the form of knowledge created by design researchers, and artifacts that embody these are seen as situated instantiations of normative theories that affect the external phenomena of interest. Throughout the paper, multiple examples illustrate the arguments. Finally, I position the resulting ‘method’ for design research vis-à-vis existing research methods and argue for its legitimacy as a viable candidate for research in the IS discipline.",
"title": ""
}
] |
[
{
"docid": "09ee1b6d80facc1c21248e855f17a17d",
"text": "AIM\nTo examine the relationship between calf circumference and muscle mass, and to evaluate the suitability of calf circumference as a surrogate marker of muscle mass for the diagnosis of sarcopenia among middle-aged and older Japanese men and women.\n\n\nMETHODS\nA total of 526 adults aged 40-89 years participated in the present cross-sectional study. The maximum calf circumference was measured in a standing position. Appendicular skeletal muscle mass was measured using dual-energy X-ray absorptiometry, and the skeletal muscle index was calculated as appendicular skeletal muscle mass divided by the square of the height (kg/m(2)). The cut-off values for sarcopenia were defined as a skeletal muscle index of less than -2 standard deviations of the mean value for Japanese young adults, as defined previously.\n\n\nRESULTS\nCalf circumference was positively correlated with appendicular skeletal muscle (r = 0.81 in men, r = 0.73 in women) and skeletal muscle index (r = 0.80 in men, r = 0.69 in women). In receiver operating characteristic analysis, the optimal calf circumference cut-off values for predicting sarcopenia were 34 cm (sensitivity 88%, specificity 91%) in men and 33 cm (sensitivity 76%, specificity 73%) in women.\n\n\nCONCLUSIONS\nCalf circumference was positively correlated with appendicular skeletal muscle mass and skeletal muscle index, and could be used as a surrogate marker of muscle mass for diagnosing sarcopenia. The suggested cut-off values of calf circumference for predicting low muscle mass are <34 cm in men and <33 cm in women.",
"title": ""
},
{
"docid": "f26cc4afade8625576ff631e1ff4f3b4",
"text": "Electromigration and voltage drop (IR-drop) are two major reliability issues in modern IC design. Electromigration gradually creates permanently open or short circuits due to excessive current densities; IR-drop causes insufficient power supply, thus degrading performance or even inducing functional errors because of nonzero wire resistance. Both types of failure can be triggered by insufficient wire widths. Although expanding the wire width alleviates electromigration and IR-drop, unlimited expansion not only increases the routing cost, but may also be infeasible due to the limited routing resource. In addition, electromigration and IR-drop manifest mainly in the power/ground (P/G) network. Therefore, taking wire widths into consideration is desirable to prevent electromigration and IR-drop at P/G routing. Unlike mature digital IC designs, P/G routing in analog ICs has not yet been well studied. In a conventional design, analog designers manually route P/G networks by implementing greedy strategies. However, the growing scale of analog ICs renders manual routing inefficient, and the greedy strategies may be ineffective when electromigration and IR-drop are considered. This study distances itself from conventional manual design and proposes an automatic analog P/G router that considers electromigration and IR-drops. First, employing transportation formulation, this article constructs an electromigration-aware rectilinear Steiner tree with the minimum routing cost. Second, without changing the solution quality, wires are bundled to release routing space for enhancing routability and relaxing congestion. A wire width extension method is subsequently adopted to reduce wire resistance for IR-drop safety. Compared with high-tech designs, the proposed approach achieves equally optimal solutions for electromigration avoidance, with superior efficiencies. Furthermore, via industrial design, experimental results also show the effectiveness and efficiency of the proposed algorithm for electromigration prevention and IR-drop reduction.",
"title": ""
},
{
"docid": "22a3d3ac774a5da4f165e90edcbd1666",
"text": "One of the difficulties of neural machine translation (NMT) is the recall and appropriate translation of low-frequency words or phrases. In this paper, we propose a simple, fast, and effective method for recalling previously seen translation examples and incorporating them into the NMT decoding process. Specifically, for an input sentence, we use a search engine to retrieve sentence pairs whose source sides are similar with the input sentence, and then collect n-grams that are both in the retrieved target sentences and aligned with words that match in the source sentences, which we call “translation pieces”. We compute pseudoprobabilities for each retrieved sentence based on similarities between the input sentence and the retrieved source sentences, and use these to weight the retrieved translation pieces. Finally, an existing NMT model is used to translate the input sentence, with an additional bonus given to outputs that contain the collected translation pieces. We show our method improves NMT translation results up to 6 BLEU points on three narrow domain translation tasks where repetitiveness of the target sentences is particularly salient. It also causes little increase in the translation time, and compares favorably to another alternative retrievalbased method with respect to accuracy, speed, and simplicity of implementation.",
"title": ""
},
{
"docid": "1819af3b3d96c182b7ea8a0e89ba5bbe",
"text": "The fingerprint is one of the oldest and most widely used biometric modality for person identification. Existing automatic fingerprint matching systems perform well when the same sensor is used for both enrollment and verification (regular matching). However, their performance significantly deteriorates when different sensors are used (cross-matching, fingerprint sensor interoperability problem). We propose an automatic fingerprint verification method to solve this problem. It was observed that the discriminative characteristics among fingerprints captured with sensors of different technology and interaction types are ridge orientations, minutiae, and local multi-scale ridge structures around minutiae. To encode this information, we propose two minutiae-based descriptors: histograms of gradients obtained using a bank of Gabor filters and binary gradient pattern descriptors, which encode multi-scale local ridge patterns around minutiae. In addition, an orientation descriptor is proposed, which compensates for the spurious and missing minutiae problem. The scores from the three descriptors are fused using a weighted sum rule, which scales each score according to its verification performance. Extensive experiments were conducted using two public domain benchmark databases (FingerPass and Multi-Sensor Optical and Latent Fingerprint) to show the effectiveness of the proposed system. The results showed that the proposed system significantly outperforms the state-of-the-art methods based on minutia cylinder-code (MCC), MCC with scale, VeriFinger—a commercial SDK, and a thin-plate spline model.",
"title": ""
},
{
"docid": "8573ad563268d5301b38c161c67b2a87",
"text": "A fracture theory for a heterogeneous aggregate material which exhibits a gradual strainsoftening due to microcracking and contains aggregate pieces that are not necessarily small compared to struttural dimensions is developed. Only Mode I is considered. The fracture is modeled as a blunt smeared crack band, which is justified by the random nature of the microstructure. Simple triaxial stress-strain relations which model the strain-softening and describe the effect of gradual microcracking in the crack band are derived. It is shown that it is easier to use compliance rather than stiffness matrices and that it suffices to adjust a single diagonal term of the compliance matrix. The limiting case of this matrix for complete (continuous) cracking is shown to be identical to the inverse of the well-known stiffness matrix for a perfectly cracked material. The material fracture properties are characterized by only three paPlameters -fracture energy, uniaxial strength limit and width of the crack band (fracture Process zone), while the strain-softening modulus is a function of these parameters. A m~thod of determining the fracture energy from measured complete stressstrain relations is' also given. Triaxial stress effects on fracture can be taken into account. The theory is verljied by comparisons with numerous experimental data from the literature. Satisfactory fits of maximum load data as well as resistance curves are achieved and values of the three matetial parameters involved, namely the fracture energy, the strength, and the width of crack b~nd front, are determined from test data. The optimum value of the latter width is found to be about 3 aggregate sizes, which is also justified as the minimum acceptable for a homogeneous continuum modeling. The method of implementing the theory in a finite element code is al$o indicated, and rules for achieving objectivity of results with regard to the analyst's choice of element size are given. Finally, a simple formula is derived to predict from the tensile strength and aggregate size the fracture energy, as well as the strain-softening modulus. A statistical analysis of the errors reveals a drastic improvement compared to the linear fracture th~ory as well as the strength theory. The applicability of fracture mechanics to concrete is thz4 solidly established.",
"title": ""
},
{
"docid": "172e46f40cc459d0ba8033fead3f35b3",
"text": "Given an arbitrary mesh, we present a method to construct a progressive mesh (PM) such that all meshes in the PM sequence share a common texture parametrization. Our method considers two important goals simultaneously. It minimizes texture stretch (small texture distances mapped onto large surface distances) to balance sampling rates over all locations and directions on the surface. It also minimizes texture deviation (“slippage” error based on parametric correspondence) to obtain accurate textured mesh approximations. The method begins by partitioning the mesh into charts using planarity and compactness heuristics. It creates a stretch-minimizing parametrization within each chart, and resizes the charts based on the resulting stretch. Next, it simplifies the mesh while respecting the chart boundaries. The parametrization is re-optimized to reduce both stretch and deviation over the whole PM sequence. Finally, the charts are packed into a texture atlas. We demonstrate using such atlases to sample color and normal maps over several models.",
"title": ""
},
{
"docid": "ac6410d8891491d050b32619dc2bdd50",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "c0f1d62b1d1e519f60200e2df7e58833",
"text": "Domain name systems and certificate authority systems may have security and trust problems in their implementation. This article summarizes how these systems work and what the implementation problems may be. There are blockchain-based decentralized solutions that claim to overcome those problems. We provide a brief explanation on how blockchain systems work, and their strengths are explained. DNS security challenges are given. Blockchain-based DNS solutions are classified and described in detail according to their services. The advantages and feasibility of these implementations are discussed. Last but not least, the possibility of the decentralized Internet is questioned.",
"title": ""
},
{
"docid": "92600ef3d90d5289f70b10ccedff7a81",
"text": "In this paper, the chicken farm monitoring system is proposed and developed based on wireless communication unit to transfer data by using the wireless module combined with the sensors that enable to detect temperature, humidity, light and water level values. This system is focused on the collecting, storing, and controlling the information of the chicken farm so that the high quality and quantity of the meal production can be produced. This system is developed to solve several problems in the chicken farm which are many human workers is needed to control the farm, high cost in maintenance, and inaccurate data collected at one point. The proposed methodology really helps in finishing this project within the period given. Based on the research that has been carried out, the system that can monitor and control environment condition (temperature, humidity, and light) has been developed by using the Arduino microcontroller. This system also is able to collect data and operate autonomously.",
"title": ""
},
{
"docid": "91d59b5e08c711e25d83785c198d9ae1",
"text": "The increase in the wireless users has led to the spectrum shortage problem. Federal Communication Commission (FCC) showed that licensed spectrum bands are underutilized, specially TV bands. The IEEE 802.22 standard was proposed to exploit these white spaces in the (TV) frequency spectrum. Cognitive Radio allows unlicensed users to use licensed bands while safeguarding the priority of licensed users. Cognitive Radio is composed of two types of users, licensed users also known as Primary Users(PUs) and unlicensed users also known as Secondary Users(SUs).SUs use the resources when spectrum allocated to PU is vacant, as soon as PU become active, the SU has to leave the channel for PU. Hence the opportunistic access is provided by CR to SUs whenever the channel is vacant. Cognitive Users sense the spectrum continuously and share this sensing information to other SUs, during this spectrum sensing, the network is vulnerable to so many attacks. One of these attacks is Primary User Emulation Attack (PUEA), in which the malicious secondary users can mimic the characteristics of primary users thereby causing legitimate SUs to erroneously identify the attacker as a primary user, and to gain access to wireless channels. PUEA is of two types: Selfish and Malicious attacker. A selfish attacker aims in stealing Bandwidth form legitimate SUs for its own transmissions while malicious attacker mimic the characteristics of PU.",
"title": ""
},
{
"docid": "b2d256cd40e67e3eadd3f5d613ad32fa",
"text": "Due to the wide spread of cloud computing, arises actual question about architecture, design and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservice model is a pressing issue. There are constantly developing new methods of testing both individual microservices and cloud applications at a whole. This article presents our vision of a framework for the validation of the microservice cloud applications, providing an integrated approach for the implementation of various testing methods of such applications, from basic unit tests to continuous stability testing.",
"title": ""
},
{
"docid": "322f6321bc34750344064d474206fddb",
"text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.",
"title": ""
},
{
"docid": "0c67afcb351c53c1b9e2b4bcf3b0dc08",
"text": "The Scrum methodology is an agile software development process that works as a project management wrapper around existing engineering practices to iteratively and incrementally develop software. With Scrum, for a developer to receive credit for his or her work, he or she must demonstrate the new functionality provided by a feature at the end of each short iteration during an iteration review session. Such a short-term focus without the checks and balances of sound engineering practices may lead a team to neglect quality. In this paper we present the experiences of three teams at Microsoft using Scrum with an additional nine sound engineering practices. Our results indicate that these teams were able to improve quality, productivity, and estimation accuracy through the combination of Scrum and nine engineering practices.",
"title": ""
},
{
"docid": "9f0206aca2f3cccfb2ca1df629c32c7a",
"text": "Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that \"All models are wrong but some are useful.\" We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a \"do it yourself kit\" for explanations, allowing a practitioner to directly answer \"what if questions\" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.",
"title": ""
},
{
"docid": "58b7fa3dade7f95457d794addf8c7ae1",
"text": "Synchronic and Diachronic Dutch Books are used to justify the use of probability measures to quantify the beliefs held by a rational agent. The argument has been used to reject any non-Bayesian representation of degrees of beliefs. We show that the transferable belief model resists the criticism even though it is not a Bayesian model. We analyze the ‘Peter, Paul and Mary’ example and show how it resists to Dutch Books.",
"title": ""
},
{
"docid": "32b8f971302926fd75f418df0aef91a3",
"text": "Cartoon-to-photo facial translation could be widely used in different applications, such as law enforcement and anime remaking. Nevertheless, current general-purpose imageto-image models usually produce blurry or unrelated results in this task. In this paper, we propose a Cartoon-to-Photo facial translation with Generative Adversarial Networks (CP-GAN) for inverting cartoon faces to generate photo-realistic and related face images. In order to produce convincing faces with intact facial parts, we exploit global and local discriminators to capture global facial features and three local facial regions, respectively. Moreover, we use a specific content network to capture and preserve face characteristic and identity between cartoons and photos. As a result, the proposed approach can generate convincing high-quality faces that satisfy both the characteristic and identity constraints of input cartoon faces. Compared with recent works on unpaired image-to-image translation, our proposed method is able to generate more realistic and correlative images.",
"title": ""
},
{
"docid": "760a303502d732ece14e3ea35c0c6297",
"text": "Data centers are experiencing a remarkable growth in the number of interconnected servers. Being one of the foremost data center design concerns, network infrastructure plays a pivotal role in the initial capital investment and ascertaining the performance parameters for the data center. Legacy data center network (DCN) infrastructure lacks the inherent capability to meet the data centers growth trend and aggregate bandwidth demands. Deployment of even the highest-end enterprise network equipment only delivers around 50% of the aggregate bandwidth at the edge of network. The vital challenges faced by the legacy DCN architecture trigger the need for new DCN architectures, to accommodate the growing demands of the ‘cloud computing’ paradigm. We have implemented and simulated the state of the art DCN models in this paper, namely: (a) legacy DCN architecture, (b) switch-based, and (c) hybrid models, and compared their effectiveness by monitoring the network: (a) throughput and (b) average packet delay. The presented analysis may be perceived as a background benchmarking study for the further research on the simulation and implementation of the DCN-customized topologies and customized addressing protocols in the large-scale data centers. We have performed extensive simulations under various network traffic patterns to ascertain the strengths and inadequacies of the different DCN architectures. Moreover, we provide a firm foundation for further research and enhancement in DCN architectures. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "938f49e103d0153c82819becf96f126c",
"text": "Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.",
"title": ""
},
{
"docid": "dc93d2204ff27c7d55a71e75d2ae4ca9",
"text": "Locating and securing an Alzheimer's patient who is outdoors and in wandering state is crucial to patient's safety. Although advances in geotracking and mobile technology have made locating patients instantly possible, reaching them while in wandering state may take time. However, a social network of caregivers may help shorten the time that it takes to reach and secure a wandering AD patient. This study proposes a new type of intervention based on novel mobile application architecture to form and direct a social support network of caregivers for locating and securing wandering patients as soon as possible. System employs, aside from the conventional tracking mechanism, a wandering detection mechanism, both of which operates through a tracking device installed a Subscriber Identity Module for Global System for Mobile Communications Network(GSM). System components are being implemented using Java. Family caregivers will be interviewed prior to and after the use of the system and Center For Epidemiologic Studies Depression Scale, Patient Health Questionnaire and Zarit Burden Interview will be applied to them during these interviews to find out the impact of the system in terms of depression, anxiety and burden, respectively.",
"title": ""
},
{
"docid": "83ccee768c29428ea8a575b2e6faab7d",
"text": "Audio-based cough detection has become more pervasive in recent years because of its utility in evaluating treatments and the potential to impact the quality of life for individuals with chronic cough. We critically examine the current state of the art in cough detection, concluding that existing approaches expose private audio recordings of users and bystanders. We present a novel algorithm for detecting coughs from the audio stream of a mobile phone. Our system allows cough sounds to be reconstructed from the feature set, but prevents speech from being reconstructed intelligibly. We evaluate our algorithm on data collected in the wild and report an average true positive rate of 92% and false positive rate of 0.5%. We also present the results of two psychoacoustic experiments which characterize the tradeoff between the fidelity of reconstructed cough sounds and the intelligibility of reconstructed speech.",
"title": ""
}
] |
scidocsrr
|
c9fad2fad59192a15a471c09f339c8c5
|
Potential Use of Bacillus coagulans in the Food Industry
|
[
{
"docid": "57856c122a6f8a0db8423a1af9378b3e",
"text": "Probiotics are defined as live microorganisms, which when administered in adequate amounts, confer a health benefit on the host. Health benefits have mainly been demonstrated for specific probiotic strains of the following genera: Lactobacillus, Bifidobacterium, Saccharomyces, Enterococcus, Streptococcus, Pediococcus, Leuconostoc, Bacillus, Escherichia coli. The human microbiota is getting a lot of attention today and research has already demonstrated that alteration of this microbiota may have far-reaching consequences. One of the possible routes for correcting dysbiosis is by consuming probiotics. The credibility of specific health claims of probiotics and their safety must be established through science-based clinical studies. This overview summarizes the most commonly used probiotic microorganisms and their demonstrated health claims. As probiotic properties have been shown to be strain specific, accurate identification of particular strains is also very important. On the other hand, it is also demonstrated that the use of various probiotics for immunocompromised patients or patients with a leaky gut has also yielded infections, sepsis, fungemia, bacteraemia. Although the vast majority of probiotics that are used today are generally regarded as safe and beneficial for healthy individuals, caution in selecting and monitoring of probiotics for patients is needed and complete consideration of risk-benefit ratio before prescribing is recommended.",
"title": ""
},
{
"docid": "f9355d27f36d7ecfbd77385968ac95e2",
"text": "The present study was conducted to investigate the effects of dietary supplementation of Bacillus coagulans on growth, feed utilization, digestive enzyme activity, innate immune response and disease resistance of freshwater prawn Macrobrachium rosenbergii. Three treatment groups (designated as T1, T2 and T3) and a control group (C), each in triplicates, were established. The prawn in the control were fed a basal diet and those in T1, T2 and T3 were fed basal diet containing B. coagulans at 105, 107 and 109 cfu g−1, respectively. After 60 days, growth performance and feed utilization were found significantly higher (P < 0.05) in prawn fed T3 diet. The specific activities of protease, amylase and lipase digestive enzymes were significantly higher (P < 0.05) for T3. Innate immunity in terms of lysozyme and respiratory burst activities were significantly elevated (P < 0.05) in all the probiotic treatment groups as compared to control. Challenge study with Vibrio harveyi revealed significant increase (P < 0.05) in disease resistance of freshwater prawn in T2 and T3 groups. The results collectively suggested that supplementation of B. coagulans as probiotic in the diet at approximately 109 cfu g−1 can improve the growth performance, feed utilization, digestive enzyme activity, innate immune response and disease resistance of freshwater prawn.",
"title": ""
}
] |
[
{
"docid": "f5b500c143fd584423ee8f0467071793",
"text": "Drug-Drug Interactions (DDIs) are major causes of morbidity and treatment inefficacy. The prediction of DDIs for avoiding the adverse effects is an important issue. There are many drug-drug interaction pairs, it is impossible to do in vitro or in vivo experiments for all the possible pairs. The limitation of DDIs research is the high costs. Many drug interactions are due to alterations in drug metabolism by enzymes. The most common among these enzymes are cytochrome P450 enzymes (CYP450). Drugs can be substrate, inhibitor or inducer of CYP450 which will affect metabolite of other drugs. This paper proposes enzyme action crossing attribute creation for DDIs prediction. Machine learning techniques, k-Nearest Neighbor (k-NN), Neural Networks (NNs), and Support Vector Machine (SVM) were used to find DDIs for simvastatin based on enzyme action crossing. SVM preformed the best providing the predictions at the accuracy of 70.40 % and of 81.85 % with balance and unbalance class label datasets respectively. Enzyme action crossing method provided the new attribute that can be used to predict drug-drug interactions.",
"title": ""
},
{
"docid": "8cc12987072c983bc45406a033a467aa",
"text": "Vehicular drivers and shift workers in industry are at most risk of handling life critical tasks. The drivers traveling long distances or when they are tired, are at risk of a meeting an accident. The early hours of the morning and the middle of the afternoon are the peak times for fatigue driven accidents. The difficulty in determining the incidence of fatigue-related accidents is due, at least in part, to the difficulty in identifying fatigue as a causal or causative factor in accidents. In this paper we propose an alternative approach for fatigue detection in vehicular drivers using Respiration (RSP) signal to reduce the losses of the lives and vehicular accidents those occur due to cognitive fatigue of the driver. We are using basic K-means algorithm with proposed two modifications as classifier for detection of Respiration signal two state fatigue data recorded from the driver. The K-means classifiers [11] were trained and tested for wavelet feature of Respiration signal. The extracted features were treated as individual decision making parameters. From test results it could be found that some of the wavelet features could fetch 100 % classification accuracy.",
"title": ""
},
{
"docid": "885281566381b396594a7508e5f255c8",
"text": "The last decade has witnessed the emergence and aesthetic maturation of amateur multimedia on an unprecedented scale, from video podcasts to machinima, and Flash animations to user-created metaverses. Today, especially in academic circles, this pop culture phenomenon is little recognized and even less understood. This paper explores creativity in amateur multimedia using three theorizations of creativity—those of HCI, postructuralism, and technological determinism. These theorizations frame a semiotic analysis of numerous commonly used multimedia authoring platforms, which demonstrates a deep convergence of multimedia authoring tool strategies that collectively project a conceptualization and practice of digital creativity. This conceptualization of digital creativity in authoring tools is then compared with hundreds of amateur-created artifacts. These analyses reveal relationships among emerging amateur multimedia aesthetics, common software authoring tools, and the three theorizations of creativity discussed.",
"title": ""
},
{
"docid": "a35f014424d952de95fbbd4ccab696b1",
"text": "Stroke can cause high morbidity and mortality, and ischemic stroke (IS) and transient ischemic attack (TIA) patients have a high stroke recurrence rate. Antiplatelet agents are the standard therapy for these patients, but it is often difficult for clinicians to select the best therapy from among the multiple treatment options. We therefore performed a network meta-analysis to estimate the efficacy of antiplatelet agents for secondary prevention of recurrent stroke. We systematically searched 3 databases (PubMed, Embase, and Cochrane) for relevant studies published through August 2015. The primary end points of this meta-analysis were overall stroke, hemorrhagic stroke, and fatal stroke. A total of 30 trials were included in our network meta-analysis and abstracted data. Among the therapies evaluated in the included trials, the estimates for overall stroke and hemorrhagic stroke for cilostazol (Cilo) were significantly better than those for aspirin (odds ratio [OR] = .64, 95% credibility interval [CrI], .45-.91; OR = .23, 95% CrI, .08-.58). The estimate for fatal stroke was highest for Cilo plus aspirin combination therapy, followed by Cilo therapy. The results of our meta-analysis indicate that Cilo significantly improves overall stroke and hemorrhagic stroke in IS or TIA patients and reduces fatal stroke, but with low statistical significance. Our results also show that Cilo was significantly more efficient than other therapies in Asian patients; therefore, future trials should focus on Cilo treatment for secondary prevention of recurrent stroke in non-Asian patients.",
"title": ""
},
{
"docid": "9f6adc749faf41f182eff752b7c80c63",
"text": "s Physicists use differential equations to describe the physical dynamical world, and the solutions of these equations constitute our understanding of the world. During the hundreds of years, scientists developed several ways to solve these equations, i.e., the analytical solutions and the numerical solutions. However, for some complex equations, there may be no analytical solutions, and the numerical solutions may encounter the curse of the extreme computational cost if the accuracy is the first consideration. Solving equations is a high-level human intelligence work and a crucial step towards general artificial intelligence (AI), where deep reinforcement learning (DRL) may contribute. This work makes the first attempt of applying (DRL) to solve nonlinear differential equations both in discretized and continuous format with the governing equations (physical laws) embedded in the DRL network, including ordinary differential equations (ODEs) and partial differential equations (PDEs). The DRL network consists of an actor that outputs solution approximations policy and a critic that outputs the critic of the actor's output solution. Deterministic policy network is employed as the actor, and governing equations are embedded in the critic. The effectiveness of the DRL solver in Schrödinger equation, Navier-Stocks, Van der Pol equation, Burgers' equation and the equation of motion are discussed. * These authors contributed to the work equally and should be regarded as co-first authors. † Corresponding author. E-mail address: [email protected]. 2 Introduction Differential equations, including ordinary differential equations (ODEs) and partial differential equations (PDEs), formalize the description of the dynamical nature of the world around us. However, solving these equations is a challenge due to extreme computational cost, although limited cases have analytical or numerical solutions1-3. Solving equations is a high-level human intelligence work and a crucial step towards general artificial intelligence. Therefore, the obstacle of extreme computational cost in numerical solution may be bypassed by using general AI techniques, such as deep learning and reinforcement learning4, 5, which are rapidly developed during the last decades. Recent years such efforts have been made, and three main kinds of the existed efforts using deep learning can be categorized into: 1) directly map to the solution represented by the deep neural network in the continuous manner as in the analytical solution6, data used to train the network is randomly sampled within the entire solution domain in each training batch, including initial conditions and boundary conditions; 2) directly map to the solution in the discretized manner as in the numerical solution7-9; and 3) indirectly map to the internal results or parameters of the numerical solutions, and use the internal results to derive the numerical solutions6, 10. The essence is to take advantage of the nonlinear representing ability of deep neural networks. The solutions are either directly output by the network or numerically derived from the outputs of the neural network, and the solution task is regarded as a weak-label task while the governing equation is treated as the weak-label to calculate the loss function of the network. 
The term ‘weaklabel’ is emphasized to make difference with the label in supervised learning, i.e., the true solutions are not known in these tasks, however, when we get a candidate solution by the neural network output, we can tell how far the output solution is to the true solution by the imbalance of the physical law. Because of the weak-label property, the solution using deep learning may be unstable for highdimensional ODEs/PDEs tasks. Hence, we propose a deep reinforcement learning (DRL) paradigm for the ODEs/PDEs solution. DRL is naturally suitable for weak-label tasks by the trial-error learning mechanism5, 11. Take the game of Go for example12, the only prior information about the task is the playing rules that defines win or lose, the label (or score) of each step is whether win or lose after the whole episode of playing rather than an exact score. 3 While employing reinforcement learning, we are essentially treating the solving of differential equations as a control task. The state is the known current-step solution (either the given initial condition or the intermediate DRL solution) of the differential equations, the action is the solution of the task, and the goal is to find a proper action to balance the governing equation with an acceptable error. A deep deterministic policy network is used to output action policy given a state, and the governing equation is used as the critic, gradients of the policy network is calculated based on the critic.",
"title": ""
},
{
"docid": "928f64f8ef9b3ea5e107ae9c49840b2c",
"text": "Mass spectrometry-based proteomics has greatly benefitted from enormous advances in high resolution instrumentation in recent years. In particular, the combination of a linear ion trap with the Orbitrap analyzer has proven to be a popular instrument configuration. Complementing this hybrid trap-trap instrument, as well as the standalone Orbitrap analyzer termed Exactive, we here present coupling of a quadrupole mass filter to an Orbitrap analyzer. This \"Q Exactive\" instrument features high ion currents because of an S-lens, and fast high-energy collision-induced dissociation peptide fragmentation because of parallel filling and detection modes. The image current from the detector is processed by an \"enhanced Fourier Transformation\" algorithm, doubling mass spectrometric resolution. Together with almost instantaneous isolation and fragmentation, the instrument achieves overall cycle times of 1 s for a top 10 higher energy collisional dissociation method. More than 2500 proteins can be identified in standard 90-min gradients of tryptic digests of mammalian cell lysate- a significant improvement over previous Orbitrap mass spectrometers. Furthermore, the quadrupole Orbitrap analyzer combination enables multiplexed operation at the MS and tandem MS levels. This is demonstrated in a multiplexed single ion monitoring mode, in which the quadrupole rapidly switches among different narrow mass ranges that are analyzed in a single composite MS spectrum. Similarly, the quadrupole allows fragmentation of different precursor masses in rapid succession, followed by joint analysis of the higher energy collisional dissociation fragment ions in the Orbitrap analyzer. High performance in a robust benchtop format together with the ability to perform complex multiplexed scan modes make the Q Exactive an exciting new instrument for the proteomics and general analytical communities.",
"title": ""
},
{
"docid": "e9b5dc63f981cc101521d8bbda1847d5",
"text": "The unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training. This is an ill-posed learning problem since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings FAB : A → B and FBA : B → A is commonly used by the state-of-the-art methods, like CycleGAN (Zhu et al., 2017), to learn this translation by introducing cycle consistency requirement to the learning problem, i.e. FAB(FBA(B)) ≈ B and FBA(FAB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce FBA to be an inverse operation to FAB. We propose a new deep architecture that we call invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing an encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to the reduction of the number of trainable parameters (up to 2 times). We present image translation results on benchmark data sets and demonstrate state-of-the art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality and show that the NVIDIA neural-network-based end-toend learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.",
"title": ""
},
{
"docid": "b50498964a73a59f54b3a213f2626935",
"text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.",
"title": ""
},
{
"docid": "6c5cabfa5ee5b9d67ef25658a4b737af",
"text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression",
"title": ""
},
{
"docid": "d468946ac66cb4889acd11a48cdebc66",
"text": "In this article, e-NOTIFY system is presented, which allows fast detection of traffic accidents, improving the assistance to injured passengers by reducing the response time of emergency services through the efficient communication of relevant information about the accident using a combination of V2V and V2I communications. The proposed system requires installing OBUs in the vehicles, in charge of detecting accidents and notifying them to an external CU, which will estimate the severity of the accident and inform the appropriate emergency services about the incident. This architecture replaces the current mechanisms for notification of accidents based on witnesses, who may provide incomplete or incorrect information after a long time. The development of a low-cost prototype shows that it is feasible to massively incorporate this system in existing vehicles.",
"title": ""
},
{
"docid": "1ade3a53c754ec35758282c9c51ced3d",
"text": "Radical hysterectomy represents the treatment of choice for FIGO stage IA2–IIA cervical cancer. It is associated with several serious complications such as urinary and anorectal dysfunction due to surgical trauma to the autonomous nervous system. In order to determine those surgical steps involving the risk of nerve injury during both classical and nerve-sparing radical hysterectomy, we investigated the relationships between pelvic fascial, vascular and nervous structures in a large series of embalmed and fresh female cadavers. We showed that the extent of potential denervation after classical radical hysterectomy is directly correlated with the radicality of the operation. The surgical steps that carry a high risk of nerve injury are the resection of the uterosacral and vesicouterine ligaments and of the paracervix. A nerve-sparing approach to radical hysterectomy for cervical cancer is feasible if specific resection limits, such as the deep uterine vein, are carefully identified and respected. However, a nerve-sparing surgical effort should be balanced with the oncological priorities of removal of disease and all its potential routes of local spread. L'hystérectomie radicale est le traitement de choix pour les cancers du col utérin de stade IA2–IIA de la Fédération Internationale de Gynécologie Obstétrique (FIGO). Cette intervention comporte plusieurs séquelles graves, telles que les dysfonctions urinaires ou ano-rectales, par traumatisme chirurgical des nerfs végétatifs pelviens. Pour mettre en évidence les temps chirurgicaux impliquant un risque de lésion nerveuse lors d'une hystérectomie radicale classique et avec préservation nerveuse, nous avons recherché les rapports entre le fascia pelvien, les structures vasculaires et nerveuses sur une large série de sujets anatomiques féminins embaumés et non embaumés. Nous avons montré que l'étendue de la dénervation potentielle après hystérectomie radicale classique était directement en rapport avec le caractère radical de l'intervention. Les temps chirurgicaux à haut risque pour des lésions nerveuses sont la résection des ligaments utéro-sacraux, des ligaments vésico-utérins et du paracervix. L'hystérectomie radicale avec préservation nerveuse est possible si des limites de résection spécifiques telle que la veine utérine profonde sont soigneusement identifiées et respectées. Cependant une chirurgie de préservation nerveuse doit être mise en balance avec les priorités carcinologiques d'exérèse du cancer et de toutes ses voies potentielles de dissémination locale.",
"title": ""
},
{
"docid": "3d78d929b1e11b918119abba4ef8348d",
"text": "Recent developments in mobile technologies have produced a new kind of device, a programmable mobile phone, the smartphone. Generally, smartphone users can program any application which is customized for needs. Furthermore, they can share these applications in online market. Therefore, smartphone and its application are now most popular keywords in mobile technology. However, to provide these customized services, smartphone needs more private information and this can cause security vulnerabilities. Therefore, in this work, we analyze security of smartphone based on its environments and describe countermeasures.",
"title": ""
},
{
"docid": "dcf038090e8423d4919fd0260635c8c4",
"text": "Automatic extraction of liver and tumor from CT volumes is a challenging task due to their heterogeneous and diffusive shapes. Recently, 2D and 3D deep convolutional neural networks have become popular in medical image segmentation tasks because of the utilization of large labeled datasets to learn hierarchical features. However, 3D networks have some drawbacks due to their high cost on computational resources. In this paper, we propose a 3D hybrid residual attention-aware segmentation method, named RA-UNet, to precisely extract the liver volume of interests (VOI) and segment tumors from the liver VOI. The proposed network has a basic architecture as a 3D U-Net which extracts contextual information combining lowlevel feature maps with high-level ones. Attention modules are stacked so that the attention-aware features change adaptively as the network goes “very deep” and this is made possible by residual learning. This is the first work that an attention residual mechanism is used to process medical volumetric images. We evaluated our framework on the public MICCAI 2017 Liver Tumor Segmentation dataset and the 3DIRCADb dataset. The results show that our architecture outperforms other state-ofthe-art methods. We also extend our RA-UNet to brain tumor segmentation on the BraTS2018 and BraTS2017 datasets, and the results indicate that RA-UNet achieves good performance on a brain tumor segmentation task as well.",
"title": ""
},
{
"docid": "47992375dbd3c5d0960c114d5a4854b2",
"text": "A new method is developed to represent probabilistic relations on multiple random events. Where previously knowledge bases containing probabilistic rules were used for this purpose, here a probabilitydistributionover the relations is directly represented by a Bayesian network. By using a powerful way of specifying conditional probability distributions in these networks, the resulting formalism is more expressive than the previous ones. Particularly, it provides for constraints on equalities of events, and it allows to define complex, nested combination functions.",
"title": ""
},
{
"docid": "a05ee39269d1022560d1024805c8d055",
"text": "Clean air is one of the most important needs for the well-being of human being health. In smart cities, timely and precise air pollution levels knowledge is vital for the successful setup of smart pollution systems. Recently, pollution and weather data in smart city have been bursting, and we have truly got into the era of big data. Ozone is considered as one of the most air pollutants with hurtful impact to human health. Existing methods used to predict the level of ozone uses shallow pollution prediction models and are still unsatisfactory in their accuracy to be used in many real-world applications. In order to increase the accuracy of prediction models we come up with the concept of using deep architecture models tested on big pollution and weather data. In this paper, a new deep learning-based ozone level prediction model is proposed, which considers the pollution and weather correlations integrally. This deep learning model is used to learn ozone level features, and it is trained using a grid search technique. A deep architecture model is utilized to represent ozone level features for prediction. Moreover, experiments demonstrate that the proposed method for ozone level prediction has superior performance. The outcome of this study can be helpful in predicting the ozone level pollution in Aarhus city as a model of smart cities for improving accuracy of ozone forecasting tools.",
"title": ""
},
{
"docid": "cc10178729ca27c413223472f1aa08be",
"text": "The automatic classification of ships from aerial images is a considerable challenge. Previous works have usually applied image processing and computer vision techniques to extract meaningful features from visible spectrum images in order to use them as the input for traditional supervised classifiers. We present a method for determining if an aerial image of visible spectrum contains a ship or not. The proposed architecture is based on Convolutional Neural Networks (CNN), and it combines neural codes extracted from a CNN with a k-Nearest Neighbor method so as to improve performance. The kNN results are compared to those obtained with the CNN Softmax output. Several CNN models have been configured and evaluated in order to seek the best hyperparameters, and the most suitable setting for this task was found by using transfer learning at different levels. A new dataset (named MASATI) composed of aerial imagery with more than 6000 samples has also been created to train and evaluate our architecture. The experimentation shows a success rate of over 99% for our approach, in contrast with the 79% obtained with traditional methods in classification of ship images, also outperforming other methods based on CNNs. A dataset of images (MWPU VHR-10) used in previous works was additionally used to evaluate the proposed approach. Our best setup achieves a success ratio of 86% with these data, significantly outperforming previous state-of-the-art ship classification methods.",
"title": ""
},
{
"docid": "2cab3b3bed055eff92703d23b1edc69d",
"text": "Due to their nonvolatile nature, excellent scalability, and high density, memristive nanodevices provide a promising solution for low-cost on-chip storage. Integrating memristor-based synaptic crossbars into digital neuromorphic processors (DNPs) may facilitate efficient realization of brain-inspired computing. This article investigates architectural design exploration of DNPs with memristive synapses by proposing two synapse readout schemes. The key design tradeoffs involving different analog-to-digital conversions and memory accessing styles are thoroughly investigated. A novel storage strategy optimized for feedforward neural networks is proposed in this work, which greatly reduces the energy and area cost of the memristor array and its peripherals.",
"title": ""
},
{
"docid": "e9ed26434ac4e17548a08a40ace99a0c",
"text": "An analytical study on air flow effects and resulting dynamics on the PACE Formula 1 race car is presented. The study incorporates Computational Fluid Dynamic analysis and simulation to maximize down force and minimize drag during high speed maneuvers of the race car. Using Star CCM+ software and mentoring provided by CD – Adapco, the simulation employs efficient meshing techniques and realistic loading conditions to understand down force on front and rear wing portions of the car as well as drag created by all exterior surfaces. Wing and external surface loading under high velocity runs of the car are illustrated. Optimization of wing orientations (direct angle of attack) and geometry modifications on outer surfaces of the car are performed to enhance down force and lessen drag for maximum stability and control during operation. The use of Surface Wrapper saved months of time in preparing the CAD model. The Transform tool and Contact Prevention tool in Star CCM+ proved to be an efficient means of correcting and modifying geometry instead of going back to the CAD model. The CFD simulations point out that the current front and rear wings do not generate the desired downforce and that the rear wing should be redesigned.",
"title": ""
},
{
"docid": "64770c350dc1d260e24a43760d4e641b",
"text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.",
"title": ""
},
{
"docid": "3e2e2aace1ddade88f3c8a6b7157af6b",
"text": "Verb learning is clearly a function of observation of real-world contingencies; however, it is argued that such observational information is insufficient to account fully for vocabulary acquisition. This paper provides an experimental validation of Landau & Gleitman's (1985) syntactic bootstrapping procedure; namely, that children may use syntactic information to learn new verbs. Pairs of actions were presented simultaneously with a nonsense verb in one of two syntactic structures. The actions were subsequently separated, and the children (MA = 2;1) were asked to select which action was the referent for the verb. The children's choice of referent was found to be a function of the syntactic structure in which the verb had appeared.",
"title": ""
}
] |
scidocsrr
|
2930995062637dbbada7cb1d5ccf5562
|
Tweets vs. Mendeley readers: How do these two social media metrics differ?
|
[
{
"docid": "8e6d17b6d7919d76cebbcefcc854573e",
"text": "Vincent Larivière École de bibliothéconomie et des sciences de l’information, Université de Montréal, C.P. 6128, Succ. CentreVille, Montréal, QC H3C 3J7, Canada, and Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, CP 8888, Succ. Centre-Ville, Montréal, QC H3C 3P8, Canada. E-mail: [email protected]",
"title": ""
},
{
"docid": "0c162c4f83294c4f701eabbd69f171f7",
"text": "This paper aims to explore how the principles of a well-known Web 2.0 service, the world¿s largest social music service \"Last.fm\" (www.last.fm), can be applied to research, which potential it could have in the world of research (e.g. an open and interdisciplinary database, usage-based reputation metrics, and collaborative filtering) and which challenges such a model would face in academia. A real-world application of these principles, \"Mendeley\" (www.mendeley.com), will be demoed at the IEEE e-Science Conference 2008.",
"title": ""
}
] |
[
{
"docid": "338dcbb45ff0c1752eeb34ec1be1babe",
"text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural",
"title": ""
},
{
"docid": "d08c24228e43089824357342e0fa0843",
"text": "This paper presents a new register assignment heuristic for procedures in SSA Form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, can not preserve the chordal graph property, making it unappealing for SSA-based register allocation. OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality, and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.",
"title": ""
},
{
"docid": "512c0d3d9ad6d6a4d139a5e7e0bd3a4e",
"text": "The epidermal growth factor receptor (EGFR) contributes to the pathogenesis of head&neck squamous cell carcinoma (HNSCC). However, only a subset of HNSCC patients benefit from anti-EGFR targeted therapy. By performing an unbiased proteomics screen, we found that the calcium-activated chloride channel ANO1 interacts with EGFR and facilitates EGFR-signaling in HNSCC. Using structural mutants of EGFR and ANO1 we identified the trans/juxtamembrane domain of EGFR to be critical for the interaction with ANO1. Our results show that ANO1 and EGFR form a functional complex that jointly regulates HNSCC cell proliferation. Expression of ANO1 affected EGFR stability, while EGFR-signaling elevated ANO1 protein levels, establishing a functional and regulatory link between ANO1 and EGFR. Co-inhibition of EGFR and ANO1 had an additive effect on HNSCC cell proliferation, suggesting that co-targeting of ANO1 and EGFR could enhance the clinical potential of EGFR-targeted therapy in HNSCC and might circumvent the development of resistance to single agent therapy. HNSCC cell lines with amplification and high expression of ANO1 showed enhanced sensitivity to Gefitinib, suggesting ANO1 overexpression as a predictive marker for the response to EGFR-targeting agents in HNSCC therapy. Taken together, our results introduce ANO1 as a promising target and/or biomarker for EGFR-directed therapy in HNSCC.",
"title": ""
},
{
"docid": "4c7d66d767c9747fdd167f1be793d344",
"text": "In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model achieves accuracy that is similar to other published models and algorithms. By harnessing prior knowledge, our model eliminates the requirement for training data as compared with existing approaches, thereby introducing the notion of a fully adaptive zero profiling approach to location estimation.",
"title": ""
},
{
"docid": "3b5d119416d602a31d5975bacd7acc8e",
"text": "We present a parametric family of regression models for interval-censored event-time (survival) data that accomodates both fixed (e.g. baseline) and time-dependent covariates. The model employs a three-parameter family of survival distributions that includes the Weibull, negative binomial, and log-logistic distributions as special cases, and can be applied to data with left, right, interval, or non-censored event times. Standard methods, such as Newton-Raphson, can be employed to estimate the model and the resulting estimates have an asymptotically normal distribution about the true values with a covariance matrix that is consistently estimated by the information function. The deviance function is described to assess model fit and a robust sandwich estimate of the covariance may also be employed to provide asymptotically robust inferences when the model assumptions do not apply. Spline functions may also be employed to allow for non-linear covariates. The model is applied to data from a long-term study of type 1 diabetes to describe the effects of longitudinal measures of glycemia (HbA1c) over time (the time-dependent covariate) on the risk of progression of diabetic retinopathy (eye disease), an interval-censored event-time outcome.",
"title": ""
},
{
"docid": "94e7b9cace3e37f08a62c5b968b2f84f",
"text": "Preclinical data suggest that chronic stress may cause cellular damage and mitochondrial dysfunction, potentially leading to the release of mitochondrial DNA (mtDNA) into the bloodstream. Major depressive disorder has been associated with an increased amount of mtDNA in leukocytes from saliva samples and blood; however, no previous studies have measured plasma levels of free-circulating mtDNA in a clinical psychiatric sample. In this study, free circulating mtDNA was quantified in plasma samples from 37 suicide attempters, who had undergone a dexamethasone suppression test (DST), and 37 healthy controls. We hypothesized that free circulating mtDNA would be elevated in the suicide attempters and would be associated with hypothalamic-pituitary-adrenal (HPA)-axis hyperactivity. Suicide attempters had significantly higher plasma levels of free-circulating mtDNA compared with healthy controls at different time points (pre- and post-DST; all P-values<2.98E-12, Cohen's d ranging from 2.55 to 4.01). Pre-DST plasma levels of mtDNA were positively correlated with post-DST cortisol levels (rho=0.49, P<0.003). Suicide attempters may have elevated plasma levels of free-circulating mtDNA, which are related to impaired HPA-axis negative feedback. This peripheral index is consistent with an increased cellular or mitochondrial damage. The specific cells and tissues contributing to plasma levels of free-circulating mtDNA are not known, as is the specificity of this finding for suicide attempters. Future studies are needed in order to better understand the relevance of increased free-circulating mtDNA in relation to the pathophysiology underlying suicidal behavior and depression.",
"title": ""
},
{
"docid": "e86281a0b5126a5b1aba84f1f945eb42",
"text": "We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of K items that maximize the total derived utility of all the agents (i.e., in our example we are to pick K movies that we put on the plane’s entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among available ones (in the movie example, the perceived value of a movie depends on the values of the other ones available). Extreme examples of our model include the setting where each agent derives utility from his or her most preferred item only (e.g., an agent will watch his or her favorite movie only), from his or her least preferred item only (e.g., the agent worries that he or she will be somehow forced to watch the worst available movie), or derives 1/K of the utility from each of the available items (e.g., the agent will pick a movie at random). Formally, to model this process of adjusting the derived utility, we use the mechanism of ordered weighted average (OWA) operators. Our contribution is twofold: First, we provide a formal specification of the model and provide concrete examples and settings where particular OWA operators are applicable. Second, we show that, in general, our problem is NP-hard but—depending on the OWA operator and the nature of agents’ utilities—there exist good, efficient approximation algorithms (sometimes even polynomial time approximation schemes). Interestingly, our results generalize and build upon those for proportional represented in multiwinner voting scenarios.",
"title": ""
},
{
"docid": "15cfa9005e68973cbca60f076180b535",
"text": "Much of the literature on fair classifiers considers the case of a single classifier used once, in isolation. We initiate the study of composition of fair classifiers. In particular, we address the pitfalls of näıve composition and give general constructions for fair composition. Focusing on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], we also extend our results to a large class of group fairness definitions popular in the recent literature. We exhibit several cases in which group fairness definitions give misleading signals under composition and conclude that additional context is needed to evaluate both group and individual fairness under composition.",
"title": ""
},
{
"docid": "11886d3f8e9fc3c2c2095a93e93e08b2",
"text": "The purpose of this chapter is to give a brief introduction to Monte Carlo simulations of classical statistical physics systems and their statistical analysis. To set the general theoretical frame, first some properties of phase transitions and simple models describing them are briefly recalled, before the concept of importance sampling Monte Carlo methods is introduced. The basic idea is illustrated by a few standard local update algorithms (Metropolis, heat-bath, Glauber). Then methods for the statistical analysis of the thus generated data are discussed. Special attention is payed to the choice of estimators, autocorrelation times and statistical error analysis. This is necessary for a quantitative description of the phenomenon of critical slowing down at continuous phase transitions. For illustration purposes, only the two-dimensional Ising model will be needed. To overcome the slowing-down problem, non-local cluster algorithms have been developed which will be described next. Then the general tool of reweighting techniques will be explained which is extremely important for finite-size scaling studies. This will be demonstrated in some detail by the sample study presented in the next section, where also methods for estimating spatial correlation functions will be discussed. The reweighting idea is also important for a deeper understanding of so-called generalized ensemble methods which may be viewed as dynamical reweighting algorithms. After first discussing simulated and parallel tempering methods, finally also the alternative approach using multicanonical ensembles and the Wang-Landau recursion are briefly outlined.",
"title": ""
},
{
"docid": "d3c491249b7df18b3ab993480d63e6d0",
"text": "There has been an increase in the number of colorimetric assay techniques for the determination of protein concentration over the past 20 years. This has resulted in a perceived increase in sensitivity and accuracy with the advent of new techniques. The present review considers these advances with emphasis on the potential use of such technologies in the assay of biopharmaceuticals. The techniques reviewed include Coomassie Blue G-250 dye binding (the Bradford assay), the Lowry assay, the bicinchoninic acid assay and the biuret assay. It is shown that each assay has advantages and disadvantages relative to sensitivity, ease of performance, acceptance in the literature, accuracy and reproducibility/coefficient of variation/laboratory-to-laboratory variation. A comparison of the use of several assays with the same sample population is presented. It is suggested that the most critical issue in the use of a chromogenic protein assay for the characterization of a biopharmaceutical is the selection of a standard for the calibration of the assay; it is crucial that the standard be representative of the sample. If it is not possible to match the standard with the sample from the perspective of protein composition, then it is preferable to use an assay that is not sensitive to the composition of the protein such as a micro-Kjeldahl technique, quantitative amino acid analysis or the biuret assay. In a complex mixture it might be inappropriate to focus on a general method of protein determination and much more informative to use specific methods relating to the protein(s) of particular interest, using either specific assays or antibody-based methods. The key point is that whatever method is adopted as the 'gold standard' for a given protein, this method needs to be used routinely for calibration.",
"title": ""
},
{
"docid": "f5b85ce051a97bee29a1c921e3146bc0",
"text": "BACKGROUND\nUnderstanding how environmental attributes can influence particular physical activity behaviors is a public health research priority. Walking is the most common physical activity behavior of adults; environmental innovations may be able to influence rates of participation.\n\n\nMETHOD\nReview of studies on relationships of objectively assessed and perceived environmental attributes with walking. Associations with environmental attributes were examined separately for exercise and recreational walking, walking to get to and from places, and total walking.\n\n\nRESULTS\nEighteen studies were identified. Aesthetic attributes, convenience of facilities for walking (sidewalks, trails); accessibility of destinations (stores, park, beach); and perceptions about traffic and busy roads were found to be associated with walking for particular purposes. Attributes associated with walking for exercise were different from those associated with walking to get to and from places.\n\n\nCONCLUSIONS\nWhile few studies have examined specific environment-walking relationships, early evidence is promising. Key elements of the research agenda are developing reliable and valid measures of environmental attributes and walking behaviors, determining whether environment-behavior relationships are causal, and developing theoretical models that account for environmental influences and their interactions with other determinants.",
"title": ""
},
{
"docid": "a57b4afca70c3cd47e38abd7b9b9df2e",
"text": "We demonstrate a hard-x-ray microscope that does not use a lens and is not limited to a small field of view or an object of finite size. The method does not suffer any of the physical constraints, convergence problems, or defocus ambiguities that often arise in conventional phase-retrieval diffractive imaging techniques. Calculation times are about a thousand times shorter than in current iterative algorithms. We need no a priori knowledge about the object, which can be a transmission function with both modulus and phase components. The technique has revolutionary implications for x-ray imaging of all classes of specimen.",
"title": ""
},
{
"docid": "83991055d207c47bc2d5af0d83bfcf9c",
"text": "BACKGROUND\nThe present study aimed at investigating the role of depression and attachment styles in predicting cell phone addiction.\n\n\nMETHODS\nIn this descriptive correlational study, a sample including 100 students of Payame Noor University (PNU), Reyneh Center, Iran, in the academic year of 2013-2014 was selected using volunteer sampling. Participants were asked to complete the adult attachment inventory (AAI), Beck depression inventory-13 (BDI-13) and the cell phone overuse scale (COS).\n\n\nFINDINGS\nResults of the stepwise multiple regression analysis showed that depression and avoidant attachment style were the best predictors of students' cell phone addiction (R(2) = 0.23).\n\n\nCONCLUSION\nThe results of this study highlighted the predictive value of depression and avoidant attachment style concerning students' cell phone addiction.",
"title": ""
},
{
"docid": "31b279fd7bd4a6ef5f25a8f241eb0b56",
"text": "Like many epithelial tumors, head and neck squamous cell carcinoma (HNSCC) contains a heterogeneous population of cancer cells. We developed an immunodeficient mouse model to test the tumorigenic potential of different populations of cancer cells derived from primary, unmanipulated human HNSCC samples. We show that a minority population of CD44(+) cancer cells, which typically comprise <10% of the cells in a HNSCC tumor, but not the CD44(-) cancer cells, gave rise to new tumors in vivo. Immunohistochemistry revealed that the CD44(+) cancer cells have a primitive cellular morphology and costain with the basal cell marker Cytokeratin 5/14, whereas the CD44(-) cancer cells resemble differentiated squamous epithelium and express the differentiation marker Involucrin. The tumors that arose from purified CD44(+) cells reproduced the original tumor heterogeneity and could be serially passaged, thus demonstrating the two defining properties of stem cells: ability to self-renew and to differentiate. Furthermore, the tumorigenic CD44(+) cells differentially express the BMI1 gene, at both the RNA and protein levels. By immunohistochemical analysis, the CD44(+) cells in the tumor express high levels of nuclear BMI1, and are arrayed in characteristic tumor microdomains. BMI1 has been demonstrated to play a role in self-renewal in other stem cell types and to be involved in tumorigenesis. Taken together, these data demonstrate that cells within the CD44(+) population of human HNSCC possess the unique properties of cancer stem cells in functional assays for cancer stem cell self-renewal and differentiation and form unique histological microdomains that may aid in cancer diagnosis.",
"title": ""
},
{
"docid": "1962428380a7ccb6e64d0c7669736e9d",
"text": "This target article presents an integrated evolutionary model of the development of attachment and human reproductive strategies. It is argued that sex differences in attachment emerge in middle childhood, have adaptive significance in both children and adults, and are part of sex-specific life history strategies. Early psychosocial stress and insecure attachment act as cues of environmental risk, and tend to switch development towards reproductive strategies favoring current reproduction and higher mating effort. However, due to sex differences in life history trade-offs between mating and parenting, insecure males tend to adopt avoidant strategies, whereas insecure females tend to adopt anxious/ambivalent strategies, which maximize investment from kin and mates. Females are expected to shift to avoidant patterns when environmental risk is more severe. Avoidant and ambivalent attachment patterns also have different adaptive values for boys and girls, in the context of same-sex competition in the peer group: in particular, the competitive and aggressive traits related to avoidant attachment can be favored as a status-seeking strategy for males. Finally, adrenarche is proposed as the endocrine mechanism underlying the reorganization of attachment in middle childhood, and the implications for the relationship between attachment and sexual development are explored. Sex differences in the development of attachment can be fruitfully integrated within the broader framework of adaptive plasticity in life history strategies, thus contributing to a coherent evolutionary theory of human development.",
"title": ""
},
{
"docid": "ca6c5b1b18532671713af8cfefa234be",
"text": "Automatic composition techniques are important in sense of upgrading musical applications for amateur musicians such as composition support systems. In this paper, we present an algorithm that can automatically generate songs from Japanese lyrics. The algorithm is designed by considering composition as an optimal-solution search problem under constraints given by the prosody of the lyrics. To verify the algorithm, we launched Orpheus which composes with the visitor’s lyrics on the web-site, and 56,000 songs were produced within a year. Evaluation results on the generated songs are also reported, indicating that Orpheus can help users to compose their own original Japanese songs.",
"title": ""
},
{
"docid": "5bf2662b043011999fa0c1cbb5099387",
"text": "With the introduction of new technology in our daily life, it is essential that this technology is used for the aid of the elderly. Falls cause a very high risk to the elderly's life. Accordingly, this paper's focus is on technology that would aid the elderly. These technologies include: Wearable- based, audio- based, and video-based fall detection systems. This paper surveys the literature regarding fall detection algorithms using those three branches and the various sensors they employ. Looking at wearable technology, the technology is cheap and accurate but inconvenient. Audio-based technology on the other hand is more convenient and is cheaper than video-based technology. However audio-based technology is hard to set up compared to video and wearable-based technologies. Video- based technology is accurate and easy to set up. At the moment, video-based technology is the most expensive compared to the other two, and it is also prone to occlusion. However as homes become smarter and prices for cameras continue to drop, it is expected that this technology will be the best of the three due to its versatility.",
"title": ""
},
{
"docid": "b52b27e83adf3c7466ab481092969f2e",
"text": "Test suite maintenance tends to have the biggest impact on the overall cost of test automation. Frequently modifications applied on a web application lead to have one or more test cases broken and repairing the test suite is a time-consuming and expensive task. \n This paper reports on an industrial case study conducted in a small Italian company investigating on the analysis of the effort to repair web test suites implemented using different UI locators (e.g., Identifiers and XPath). \n The results of our case study indicate that ID locators used in conjunction with LinkText is the best solution among the considered ones in terms of time required (and LOCs to modify) to repair the test suite to the new release of the application.",
"title": ""
},
{
"docid": "5789b91093323195f3ee4248a78b510b",
"text": "We propose a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user \"hints\" to an output colorization with a Convolutional Neural Network (CNN). Rather than using hand-defined rules, the network propagates user edits by fusing low-level cues along with high-level semantic information, learned from large-scale data. We train on a million images, with simulated user inputs. To guide the user towards efficient input selection, the system recommends likely colors based on the input image and current user inputs. The colorization is performed in a single feed-forward pass, enabling real-time use. Even with randomly simulated user inputs, we show that the proposed system helps novice users quickly create realistic colorizations, and offers large improvements in colorization quality with just a minute of use. In addition, we demonstrate that the framework can incorporate other user \"hints\" to the desired colorization, showing an application to color histogram transfer.",
"title": ""
},
{
"docid": "149c18850040c6073e84ad117b4e4eac",
"text": "Hemangiomas are the most common tumor of infantile period and usually involved sites are head and neck (%50), followed by trunk and extremities. Hemangioma is rarely described in genitals. We report a 17-months-old patient with a hemangioma of the preputium penis. The tumor was completely removed surgically and histological examination revealed an infantile hemangioma.",
"title": ""
}
] |
scidocsrr
|
f650a963b81accebef2c80e08e89931e
|
Institutionalization of IT Compliance: A Longitudinal Study
|
[
{
"docid": "69be35016630139445f693fd8beda509",
"text": "Developing information technology (IT) gov-ernance structures within an organization has always been challenging. This is particularly the case in organizations that have achieved growth through mergers and acquisitions. When the acquired organizations are geographically located in different regions than the host enterprise, the factors affecting this integration and the choice of IT governance structures are quite different than when this situation does not exist. This study performs an exploratory examination of the factors that affect the choice of IT governance structures in organizations that grow through mergers and acquisitions in developing countries using the results of a case study of an international telecommunications company. We find that in addition to the commonly recognized factors such as government regulation, competition and market stability, organizational culture, and IT competence, top management's predisposition toward a specific business strategy and governance structure can profoundly influence the choice of IT governance in organizations. Managerial implications are discussed.",
"title": ""
}
] |
[
{
"docid": "8d45954f6c038910586d55e9ca3ba924",
"text": "IAA produced by bacteria of the genus Azospirillum spp. can promote plant growth by stimulating root formation. Native Azospirillum spp., isolated from Irannian soils had been evaluated this ability in both qualitative and quantitative methods and registered the effects of superior ones on morphological, physiological and root growth of wheat. The roots of wheat seedling responded positively to the several bacteria inoculations by an increase in root length, dry weight and by the lateral root hairs.",
"title": ""
},
{
"docid": "c1d5df0e2058e3f191a8227fca51a2fb",
"text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.",
"title": ""
},
{
"docid": "9c7fbbde15c03078bce7bd8d07fa6d2a",
"text": "• For each sense sij, we create a sense embedding E(sij), again a D-dimensional vector. • The lemma embeddings can be decomposed into a mix (e.g. a convex combination) of sense vectors, for instance F(rock) = 0.3 · E(rock-1) + 0.7 · E(rock-2). The “mix variables” pij are non-negative and sum to 1 for each lemma. • The intuition of the optimization that each sense sij should be “close” to a number of other concepts, called the network neighbors, that we know are related to it, as defined by a semantic network. For instance, rock-2 might be defined by the network to be related to other types of music.",
"title": ""
},
{
"docid": "05f25a2de55907773c9ff13b8a2fe5f6",
"text": "Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art. This paper presents a large scale empirical characterization of generalization error and model size growth as training sets grow. We introduce a methodology for this measurement and test four machine learning domains: machine translation, language modeling, image processing, and speech recognition. Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents—the \"steepness\" of the learning curve—yet to be explained by theoretical work. Further, model improvements only shift the error but do not appear to affect the power-law exponent. We also show that model size scales sublinearly with data size. These scaling relationships have significant implications on deep learning research, practice, and systems. They can assist model debugging, setting accuracy targets, and decisions about data set growth. They can also guide computing system design and underscore the importance of continued computational scaling.",
"title": ""
},
{
"docid": "8171294a51cb3a83c43243ed96948c3d",
"text": "The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support. Even though MMV problems have been traditionally addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees the accurate recovery in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between the probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid 1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies the parts of support using CS, after which the remaining supports are estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal -bound with finite number of snapshots even in cases where the signals are linearly dependent.",
"title": ""
},
{
"docid": "e0fc099ecd24d8d8e6118c01e4ed2e82",
"text": "The stated goal for visual data exploration is to operate at a rate that matches the pace of human data analysts, but the ever increasing amount of data has led to a fundamental problem: datasets are often too large to process within interactive time frames. Progressive analytics and visualizations have been proposed as potential solutions to this issue. By processing data incrementally in small chunks, progressive systems provide approximate query answers at interactive speeds that are then refined over time with increasing precision. We study how progressive visualizations affect users in exploratory settings in an experiment where we capture user behavior and knowledge discovery through interaction logs and think-aloud protocols. Our experiment includes three visualization conditions and different simulated dataset sizes. The visualization conditions are: (1) blocking, where results are displayed only after the entire dataset has been processed; (2) instantaneous, a hypothetical condition where results are shown almost immediately; and (3) progressive, where approximate results are displayed quickly and then refined over time. We analyze the data collected in our experiment and observe that users perform equally well with either instantaneous or progressive visualizations in key metrics, such as insight discovery rates and dataset coverage, while blocking visualizations have detrimental effects.",
"title": ""
},
{
"docid": "1b6af47ddb23b3927c451b8b659fb13e",
"text": "— This project presents an approach to develop a real-time hand gesture recognition enabling human-computer interaction. It is \" Vision Based \" that uses only a webcam and Computer Vision (CV) technology, such as image processing that can recognize several hand gestures. The applications of real time hand gesture recognition are numerous, due to the fact that it can be used almost anywhere where we interact with computers ranging from basic usage which involves small applications to domain-specific specialized applications. Currently, at this level our project is useful for the society but it can further be expanded to be readily used at the industrial level as well. Gesture recognition is an area of active current research in computer vision. Existing systems use hand detection primarily with some type of marker. Our system, however, uses a real-time hand image recognition system. Our system, however, uses a real-time hand image recognition without any marker, simply using bare hands. I. INTRODUCTION In today \" s computer age, every individual is dependent to perform most of their day-today tasks using computers. The major input devices one uses while operating a computer are keyboard and mouse. But there are a wide range of health problems that affects many people nowadays, caused by the constant and continuous work with the computer. Direct use of hands as an input device is an attractive method for providing natural Human Computer Interaction which has evolved from text-based interfaces through 2D graphical-based interfaces, multimedia-supported interfaces, to fully fledged multi participant Virtual Environment (VE) systems. Since hand gestures are completely natural form for communication it does not adversely affect the health of the operator as in case of excessive usage of keyboard and mouse. Imagine the human-computer interaction of the future: A 3Dapplication where you can move and rotate objects simply by moving and rotating your hand-all without touching any input device. In this paper a review of vision based hand gesture recognition is presented.",
"title": ""
},
{
"docid": "20e504a115a1448ea366eae408b6391f",
"text": "Clustering algorithms have emerged as an alternative powerful meta-learning tool to accurately analyze the massive volume of data generated by modern applications. In particular, their main goal is to categorize data into clusters such that objects are grouped in the same cluster when they are similar according to specific metrics. There is a vast body of knowledge in the area of clustering and there has been attempts to analyze and categorize them for a larger number of applications. However, one of the major issues in using clustering algorithms for big data that causes confusion amongst practitioners is the lack of consensus in the definition of their properties as well as a lack of formal categorization. With the intention of alleviating these problems, this paper introduces concepts and algorithms related to clustering, a concise survey of existing (clustering) algorithms as well as providing a comparison, both from a theoretical and an empirical perspective. From a theoretical perspective, we developed a categorizing framework based on the main properties pointed out in previous studies. Empirically, we conducted extensive experiments where we compared the most representative algorithm from each of the categories using a large number of real (big) data sets. The effectiveness of the candidate clustering algorithms is measured through a number of internal and external validity metrics, stability, runtime, and scalability tests. In addition, we highlighted the set of clustering algorithms that are the best performing for big data.",
"title": ""
},
{
"docid": "d87295095ef11648890b19cd0608d5da",
"text": "Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links.",
"title": ""
},
{
"docid": "63b210cc5e1214c51b642e9a4a2a1fb0",
"text": "This paper proposes a simplified method to compute the systolic and diastolic blood pressures from measured oscillometric blood-pressure waveforms. Therefore, the oscillometric waveform is analyzed in the frequency domain, which reveals that the measured blood-pressure signals are heavily disturbed by nonlinear contributions. The proposed approach will linearize the measured oscillometric waveform in order to obtain a more accurate and transparent estimation of the systolic and diastolic pressure based on a robust preprocessing technique. This new approach will be compared with the Korotkoff method and a commercially available noninvasive blood-pressure meter. This allows verification if the linearized approach contains as much information as the Korotkoff method in order to calculate a correct systolic and diastolic blood pressure.",
"title": ""
},
{
"docid": "5b759f2d581a8940127b5e45019039d7",
"text": "The structure of the domain name is highly relevant for providing insights into the management, organization and operation of a given enterprise. Security assessment and network penetration testing are using information sourced from the DNS service in order to map the network, perform reconnaissance tasks, identify services and target individual hosts. Tracking the domain names used by popular Botnets is another major application that needs to undercover their underlying DNS structure. Current approaches for this purpose are limited to simplistic brute force scanning or reverse DNS, but these are unreliable. Brute force attacks depend of a huge list of known words and thus, will not work against unknown names, while reverse DNS is not always setup or properly configured. In this paper, we address the issue of fast and efficient generation of DNS names and describe practical experiences against real world large scale DNS names. Our approach is based on techniques derived from natural language modeling and leverage Markov Chain Models in order to build the first DNS scanner (SDBF) that is leveraging both, training and advanced language modeling approaches.",
"title": ""
},
{
"docid": "05049ac85552c32f2c98d7249a038522",
"text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.",
"title": ""
},
{
"docid": "ff6a2e6b0fbb4e195b095981ab97aae0",
"text": "As broadband speeds increase, latency is becoming a bottleneck for many applications—especially for Web downloads. Latency affects many aspects of Web page load time, from DNS lookups to the time to complete a three-way TCP handshake; it also contributes to the time it takes to transfer the Web objects for a page. Previous work has shown that much of this latency can occur in the last mile [2]. Although some performance bottlenecks can be mitigated by increasing downstream throughput (e.g., by purchasing a higher service plan), in many cases, latency introduces performance bottlenecks, particularly for connections with higher throughput. To mitigate latency bottlenecks in the last mile, we have implemented a system that performs DNS prefetching and TCP connection caching to the Web sites that devices inside a home visit most frequently, a technique we call popularity-based prefetching. Many devices and applications already perform DNS prefetching and maintain persistent TCP connections, but most prefetching is predictive based on the content of the page, rather than on past site popularity. We evaluate the optimizations using a simulator that we drive from traffic traces that we collected from five homes in the BISmark testbed [1]. We find that performing DNS prefetching and TCP connection caching for the twenty most popular sites inside the home can double DNS and connection cache hit rates.",
"title": ""
},
{
"docid": "7b78b138539b876660c2a320aa10cd2e",
"text": "What are the psychological, computational and neural underpinnings of language? Are these neurocognitive correlates dedicated to language? Do different parts of language depend on distinct neurocognitive systems? Here I address these and other issues that are crucial for our understanding of two fundamental language capacities: the memorization of words in the mental lexicon, and the rule-governed combination of words by the mental grammar. According to the declarative/procedural model, the mental lexicon depends on declarative memory and is rooted in the temporal lobe, whereas the mental grammar involves procedural memory and is rooted in the frontal cortex and basal ganglia. I argue that the declarative/procedural model provides a new framework for the study of lexicon and grammar.",
"title": ""
},
{
"docid": "dd0319de90cd0e58a9298a62c2178b25",
"text": "The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. This paper presents a novel hybrid automatic approach for the extraction of retinal image vessels. The method consists in the application of mathematical morphology and a fuzzy clustering algorithm followed by a purification procedure. In mathematical morphology, the retinal image is smoothed and strengthened so that the blood vessels are enhanced and the background information is suppressed. The fuzzy clustering algorithm is then employed to the previous enhanced image for segmentation. After the fuzzy segmentation, a purification procedure is used to reduce the weak edges and noise, and the final results of the blood vessels are consequently achieved. The performance of the proposed method is compared with some existing segmentation methods and hand-labeled segmentations. The approach has been tested on a series of retinal images, and experimental results show that our technique is promising and effective.",
"title": ""
},
{
"docid": "eb7eb6777a68fd594e2e94ac3cba6be9",
"text": "Cellulosic plant material represents an as-of-yet untapped source of fermentable sugars for significant industrial use. Many physio-chemical structural and compositional factors hinder the enzymatic digestibility of cellulose present in lignocellulosic biomass. The goal of any pretreatment technology is to alter or remove structural and compositional impediments to hydrolysis in order to improve the rate of enzyme hydrolysis and increase yields of fermentable sugars from cellulose or hemicellulose. These methods cause physical and/or chemical changes in the plant biomass in order to achieve this result. Experimental investigation of physical changes and chemical reactions that occur during pretreatment is required for the development of effective and mechanistic models that can be used for the rational design of pretreatment processes. Furthermore, pretreatment processing conditions must be tailored to the specific chemical and structural composition of the various, and variable, sources of lignocellulosic biomass. This paper reviews process parameters and their fundamental modes of action for promising pretreatment methods.",
"title": ""
},
{
"docid": "b3d232625a70ddf1733448ad26a9a0a0",
"text": "This study aims at minimizing the acoustic noise from a magnetic origin of a claw-pole alternator. This optimization is carried out through a multiphysics simulation, which includes the computation of magnetic forces, vibrations, and the resulting noise. Therefore, a mechanical model of the alternator has to be developed to determine its main modes. Predicted modal parameters are checked against experimental results. Based on this model, the sound power level is simulated and compared with measurements. Finally, the rotor shape is optimized and a significant reduction of the noise level is found by simulation.",
"title": ""
},
{
"docid": "ce0f21b03d669b72dd954352e2c35ab1",
"text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.",
"title": ""
}
] |
scidocsrr
|
55cc5efb797635335473bf96e206eeb2
|
Fog computing security: a review of current applications and security solutions
|
[
{
"docid": "16fbebf500be1bf69027d3a35d85362b",
"text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.",
"title": ""
},
{
"docid": "c3195ff8dc6ca8c130f5a96ebe763947",
"text": "The recent emergence of Cloud Computing has drastically altered everyone’s perception of infrastructure architectures, software delivery and development models. Projecting as an evolutionary step, following the transition from mainframe computers to client/server deployment models, cloud computing encompasses elements from grid computing, utility computing and autonomic computing, into an innovative deployment architecture. This rapid transition towards the clouds, has fuelled concerns on a critical issue for the success of information systems, communication and information security. From a security perspective, a number of unchartered risks and challenges have been introduced from this relocation to the clouds, deteriorating much of the effectiveness of traditional protection mechanisms. As a result the aim of this paper is twofold; firstly to evaluate cloud security by identifying unique security requirements and secondly to attempt to present a viable solution that eliminates these potential threats. This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment. The proposed solution calls upon cryptography, specifically Public Key Infrastructure operating in concert with SSO and LDAP, to ensure the authentication, integrity and confidentiality of involved data and communications. The solution, presents a horizontal level of service, available to all implicated entities, that realizes a security mesh, within which essential trust is maintained.",
"title": ""
}
] |
[
{
"docid": "fd2450f5b02a2599be29b90a599ad31d",
"text": "Male genital injuries, demand prompt management to prevent long-term sexual and psychological damage. Injuries to the scrotum and contents may produce impaired fertility.We report our experience in diagnosing and managing a case of a foreign body in the scrotum following a boat engine blast accident. This case report highlights the need for a good history and thorough general examination to establish the mechanism of injury in order to distinguish between an embedded penetrating projectile injury and an injury with an exit wound. Prompt surgical exploration with hematoma evacuation limits complications.",
"title": ""
},
{
"docid": "d183be50b6cb55cbf42bc273b7e2e957",
"text": "THE FUNCTIONAL MOVEMENT SCREEN (FMS) IS A PREPARTICIPATION SCREENING TOOL COMPRISING 7 INDIVIDUAL TESTS FOR WHICH BOTH INDIVIDUAL SCORES AND AN OVERALL SCORE ARE GIVEN. THE FMS DISPLAYS BOTH INTERRATER AND INTRARATER RELIABILITY BUT HAS BEEN CHALLENGED ON THE BASIS OF A LACK OF VALIDITY IN SEVERAL RESPECTS. THE FMS SEEMS TO HAVE SOME DEGREE OF PREDICTIVE ABILITY FOR IDENTIFYING ATHLETES WHO ARE AT AN INCREASED RISK OF INJURY. HOWEVER, A POOR SCORE ON THE FMS DOES NOT PRECLUDE ATHLETES FROM COMPETING AT THE HIGHEST LEVEL NOR DOES IT DIFFERENTIATE BETWEEN ATHLETES OF DIFFERING ABILITIES. T he functional movement screen (FMS) is a pre-participation screening tool comprising 7 individual tests for which both individual scores and an overall score are given (11). The 7 tests are rated from 0 to 3 by an examiner and include the deep squat, hurdle step, in-line lunge, shoulder mobility, active straight leg raise, trunk stability push-up, and rotary stability (11,12). The score of 0 is given if pain occurs during a test, the score of 1 is given if the subject is not able to perform the movement, the score of 2 is given if the subject is able to complete the movement but compensates in some way, and the score of 3 is given if the subject performs the movement correctly (11). It has been suggested that a less-thanperfect score on a single individual test of the FMS reveals a “compensatory movement pattern.” Such compensatory movement patterns have been proposed to lead to athletes “sacrificing efficient movements for inefficient ones” (11), which implies the replacement of either amore economical ormore effective pattern with a less economical or less effective one. It has also been proposed that such compensatory movement patterns predispose an athlete to injury and reduced performance and may be corrected by performing specific exercises. As a designer of the FMS states: “an athlete who is unable to perform amovement correctly. has uncovered a significant piece of information that may be the key to reducing the risk of chronic injuries, improving overall sport performance, and developing a training or rehabilitation program .” (9). This seems to imply that the FMS is put forward as a valid test for identifying certain movement patterns that lead to greater injury risk and reduced athletic performance. In the course of our review, we did not identify a formal definition of the concept “compensatory movement pattern.” We suggest that it can be defined as a kinematic feature or sequence of features observed during the performance of a movement that deviate from a template that is thought to represent the least injurious way of performing the movement. In the FMS, the individual scores for each movement are combined into a final score out of 21 total possible points. It has been suggested that lower overall scores predict individuals who are at a greater risk of injury than those with higher scores (11). In practice, researchers have generally identified 14 points as the ideal cut-off point for those at greater or less risk of injury (5,8,36,38,41,43). The cut-off value of 14 points was in certain studies identified by means of a statistical method known as a receiver-operator characteristic (ROC) curve (5,38,43). This technique allows researchers to identify the numerical score that maximizes the correct prediction of injury classification (66). However, in other cases (8,36), the researchers simply adopted the cut-off value of 14 points based on the findings of previous studies. 
Although this may not maximize the predictability of the cut-off point in those individual studies that elected not to use a ROC curve, it does have the advantage of enhancing comparability between trials. Studies investigating the norms for FMS overall scores have identified that the normal FMS score in healthy but untrained populations ranges from 14.14 6 2.85 points (51) to 15.7 6 1.9 points (53). This suggests that most untrained people are slightly above the cut-off score of #14 points, which is thought to be indicative of prevalent compensation patterns and which is also believed to be predictive of increased risk of injury and reduced performance.",
"title": ""
},
{
"docid": "863e71cf1c1eddf3c6ceac400670e6f7",
"text": "This paper provides a brief overview to four major types of causal models for health-sciences research: Graphical models (causal diagrams), potential-outcome (counterfactual) models, sufficient-component cause models, and structural-equations models. The paper focuses on the logical connections among the different types of models and on the different strengths of each approach. Graphical models can illustrate qualitative population assumptions and sources of bias not easily seen with other approaches; sufficient-component cause models can illustrate specific hypotheses about mechanisms of action; and potential-outcome and structural-equations models provide a basis for quantitative analysis of effects. The different approaches provide complementary perspectives, and can be employed together to improve causal interpretations of conventional statistical results.",
"title": ""
},
{
"docid": "d848a684aeddd5447f17282fdd2efaf0",
"text": "..........................................................................................................iii ACKNOWLEDGMENTS.........................................................................................iv TABLE OF CONTENTS .........................................................................................vi LIST OF TABLES................................................................................................viii LIST OF FIGURES ................................................................................................ix",
"title": ""
},
{
"docid": "f925550d3830944b8649266292eae3fd",
"text": "In the recent years antenna design appears as a mature field of research. It really is not the fact because as the technology grows with new ideas, fitting expectations in the antenna design are always coming up. A Ku-band patch antenna loaded with notches and slit has been designed and simulated using Ansoft HFSS 3D electromagnetic simulation tool. Multi-frequency band operation is obtained from the proposed microstrip antenna. The design was carried out using Glass PTFE as the substrate and copper as antenna material. The designed antennas resonate at 15GHz with return loss over 50dB & VSWR less than 1, on implementing different slots in the radiating patch multiple frequencies resonate at 12.2GHz & 15.00GHz (Return Loss -27.5, -37.73 respectively & VSWR 0.89, 0.24 respectively) and another resonate at 11.16 GHz, 15.64GHz & 17.73 GHz with return loss -18.99, -23.026, -18.156 dB respectively and VSWR 1.95, 1.22 & 2.1 respectively. All the above designed band are used in the satellite application for non-geostationary orbit (NGSO) and fixed-satellite services (FSS) providers to operate in various segments of the Ku-band.",
"title": ""
},
{
"docid": "dfd56d9cd4bb7ee8b9a8d7e296b52bd8",
"text": "The development of traditional tumor-targeted drug delivery systems based on EPR effect and receptor-mediated endocytosis is very challenging probably because of the biological complexity of tumors as well as the limitations in the design of the functional nano-sized delivery systems. Recently, multistage drug delivery systems (Ms-DDS) triggered by various specific tumor microenvironment stimuli have emerged for tumor therapy and imaging. In response to the differences in the physiological blood circulation, tumor microenvironment, and intracellular environment, Ms-DDS can change their physicochemical properties (such as size, hydrophobicity, or zeta potential) to achieve deeper tumor penetration, enhanced cellular uptake, timely drug release, as well as effective endosomal escape. Based on these mechanisms, Ms-DDS could deliver maximum quantity of drugs to the therapeutic targets including tumor tissues, cells, and subcellular organelles and eventually exhibit the highest therapeutic efficacy. In this review, we expatiate on various responsive modes triggered by the tumor microenvironment stimuli, introduce recent advances in multistage nanoparticle systems, especially the multi-stimuli responsive delivery systems, and discuss their functions, effects, and prospects.",
"title": ""
},
{
"docid": "719c945e9f45371f8422648e0e81178f",
"text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey",
"title": ""
},
{
"docid": "c62b9ce2cf7b7bede9762fe66f4cf0ea",
"text": "Over the past few year, as a result of the great technological advances in color printing, duplicating and scanning, counterfeiting problems have become more and more serious. In the past, only the printing house has the ability to make counterfeit paper currency, but today it is possible for any person to print counterfeit bank notes simply by using a computer and a laser printer at house. Therefore the issue of efficiently distinguishing counterfeit banknotes from genuine ones via automatic machines has become more and more important. There is a need to design a system that is helpful in recognition of paper currency notes with fast speed and in less time.",
"title": ""
},
{
"docid": "f18dc5d572f60da7c85d50e6a42de2c9",
"text": "Recent developments in remote sensing are offering a promising opportunity to rethink conventional control strategies of wind turbines. With technologies such as LIDAR, the information about the incoming wind field - the main disturbance to the system - can be made available ahead of time. Feedforward control can be easily combined with traditional collective pitch feedback controllers and has been successfully tested on real systems. Nonlinear model predictive controllers adjusting both collective pitch and generator torque can further reduce structural loads in simulations but have higher computational times compared to feedforward or linear model predictive controller. This paper compares a linear and a commercial nonlinear model predictive controller to a baseline controller. On the one hand simulations show that both controller have significant improvements if used along with the preview of the rotor effective wind speed. On the other hand the nonlinear model predictive controller can achieve better results compared to the linear model close to the rated wind speed.",
"title": ""
},
{
"docid": "c32d28b173df9f6fbbe33e6843338007",
"text": "A coflow is a collection of related parallel flows that occur typically between two stages of a multi-stage compute task in a network, such as shuffle flows in MapReduce. The coflow abstraction allows applications to convey their semantics to the network so that application-level requirements (e.g., minimizing the completion time of the slowest flow) can be better satisfied. In this paper, we study the routing and scheduling of multiple coflows to minimize the average coflow completion time (CCT). We first propose a rounding-based randomized approximation algorithm, called OneCoflow, for single coflow routing and scheduling. The multiple coflow problem is more challenging as coexisting coflows will compete for the same network resources such as link bandwidths. To minimize the average CCT, we derive an online multiple coflow routing and scheduling algorithm, called OMCoflow, and prove that it has a reasonably good competitive ratio. To the best of our knowledge, this is the first online algorithm with theoretical performance guarantees which considers routing and scheduling simultaneously for multi-coflows. Compared with existing methods, OMCoflow runs more efficiently, and it avoids the problem of frequently rerouting the flows. Extensive simulations on a Facebook data trace show that OMCoflow outperforms the state-of-the-art heuristic schemes significantly (e.g., reducing the average CCT by up to 41.8% and the execution time by up to 99.2% against RAPIER [28]).",
"title": ""
},
{
"docid": "a975ca76af34f5911191efa72d7f583c",
"text": "Lattice-based cryptography is the use of conjectured hard problems on point lattices in Rn as the foundation for secure cryptographic systems. Attractive features of lattice cryptography include apparent resistance to quantum attacks (in contrast with most number-theoretic cryptography), high asymptotic efficiency and parallelism, security under worst-case intractability assumptions, and solutions to long-standing open problems in cryptography. This work surveys most of the major developments in lattice cryptography over the past ten years. The main focus is on the foundational short integer solution (SIS) and learning with errors (LWE) problems (and their more efficient ring-based variants), their provable hardness assuming the worst-case intractability of standard lattice problems, and their many cryptographic applications. C. Peikert. A Decade of Lattice Cryptography. Foundations and Trends © in Theoretical Computer Science, vol. 10, no. 4, pp. 283–424, 2014. DOI: 10.1561/0400000074. Full text available at: http://dx.doi.org/10.1561/0400000074",
"title": ""
},
{
"docid": "555c6057602204a143db1d62c4ca2da0",
"text": "Computers, even small ones like the phone in your pocket, are good at performing thousands of operations in just a few seconds. Even more impressively, they can also make decisions based on the data in their memory banks and logic specified by the programmer. This decision-making capability is probably the key ingredient of what people think of as artificial intelligence—and it’s definitely a very important part of creating smart, interesting apps! In this chapter, we’ll explore how to build decision-making logic into your apps.",
"title": ""
},
{
"docid": "9faa8b39898eaa4ca0a0c23d29e7a0ff",
"text": "Highly emphasized in entrepreneurial practice, business models have received limited attention from researchers. No consensus exists regarding the definition, nature, structure, and evolution of business models. Still, the business model holds promise as a unifying unit of analysis that can facilitate theory development in entrepreneurship. This article synthesizes the literature and draws conclusions regarding a number of these core issues. Theoretical underpinnings of a firm's business model are explored. A sixcomponent framework is proposed for characterizing a business model, regardless of venture type. These components are applied at three different levels. The framework is illustrated using a successful mainstream company. Suggestions are made regarding the manner in which business models might be expected to emerge and evolve over time. a c Purchase Export",
"title": ""
},
{
"docid": "ec8995a3f6d4fcf9cc90aa6acb044039",
"text": "Spitzoid lesions represent a challenging and controversial group of tumours, in terms of clinical recognition, biological behaviour and management strategies. Although Spitz naevi are considered benign tumours, their clinical and dermoscopic morphological overlap with spitzoid melanoma renders the management of spitzoid lesions particularly difficult. The controversy deepens because of the existence of tumours that cannot be safely histopathologically diagnosed as naevi or melanomas (atypical Spitz tumours). The dual objective of the present study was to provide an updated classification on dermoscopy of Spitz naevi, and management recommendations of spitzoid-looking lesions based on a consensus among experts in the field. After a detailed search of the literature for eligible studies, a data synthesis was performed from 15 studies on dermoscopy of Spitz naevi. Dermoscopically, Spitz naevi are typified by three main patterns: starburst pattern (51%), a pattern of regularly distributed dotted vessels (19%) and globular pattern with reticular depigmentation (17%). A consensus-based algorithm for the management of spitzoid lesions is proposed. According to it, dermoscopically asymmetric lesions with spitzoid features (both flat/raised and nodular) should be excised to rule out melanoma. Dermoscopically symmetric spitzoid nodules should also be excised or closely monitored, irrespective of age, to rule out atypical Spitz tumours. Dermoscopically symmetric, flat spitzoid lesions should be managed according to the age of the patient. Finally, the histopathological diagnosis of atypical Spitz tumour should warrant wide excision but not a sentinel lymph-node biopsy.",
"title": ""
},
{
"docid": "2d8094fc287a36d7d011aef42eff01ca",
"text": "Poor quality data may be detected and corrected by performing various quality assurance activities that rely on techniques with different efficacy and cost. In this paper, we propose a quantitative approach for measuring and comparing the effectiveness of these data quality (DQ) techniques. Our definitions of effectiveness are inspired by measures proposed in Information Retrieval. We show how the effectiveness of a DQ technique can be mathematically estimated in general cases, using formal techniques that are based on probabilistic assumptions. We then show how the resulting effectiveness formulas can be used to evaluate, compare and make choices involving DQ techniques.",
"title": ""
},
{
"docid": "b297069ce45ad91dcf3f74ed2b0e678c",
"text": "Cassava is the third largest source of carbohydrates for human food in the world but is vulnerable to virus diseases, which threaten to destabilize food security in sub-Saharan Africa. Novel methods of cassava disease detection are needed to support improved control which will prevent this crisis. Image recognition offers both a cost effective and scalable technology for disease detection. New deep learning models offer an avenue for this technology to be easily deployed on mobile devices. Using a dataset of cassava disease images taken in the field in Tanzania, we applied transfer learning to train a deep convolutional neural network to identify three diseases and two types of pest damage (or lack thereof). The best trained model accuracies were 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD). The best model achieved an overall accuracy of 93% for data not used in the training process. Our results show that the transfer learning approach for image recognition of field images offers a fast, affordable, and easily deployable strategy for digital plant disease detection.",
"title": ""
},
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
},
{
"docid": "95a6c3507b87cae9a70df0be97c34964",
"text": "The distributed consensus problem has been extensively studied in the last four decades as an important problem in distributed systems. Recent advances in decentralized consensus and blockchain technology, however, arose from a disparate model and gave rise to disjoint knowledge-base and techniques than those in the classical consensus research. In this paper we make a case for bridging these two seemingly disparate approaches in order to help transfer the lessons learned from the classical distributed consensus world to the blockchain world and vice versa. To this end, we draw parallels between blockchain consensus and a classical consensus protocol, Paxos. We also survey prominent approaches to improving the throughput and providing instant irreversibility to blockchain consensus and show analogies to the techniques from classical consensus protocols. Finally, inspired by the central role formal methods played in the success of classical consensus research, we suggest more extensive use of formal methods in modeling the blockchains and smartcontracts.",
"title": ""
},
{
"docid": "c69a480600fea74dab84290e6c0e2204",
"text": "Mobile cloud computing is computing of Mobile application through cloud. As we know market of mobile phones is growing rapidly. According to IDC, the premier global market intelligence firm, the worldwide Smartphone market grew 42. 5% year over year in the first quarter of 2012. With the growing demand of Smartphone the demand for fast computation is also growing. Inspite of comparatively more processing power and storage capability of Smartphone's, they still lag behind Personal Computers in meeting processing and storage demands of high end applications like speech recognition, security software, gaming, health services etc. Mobile cloud computing is an answer to intensive processing and storage demand of real-time and high end applications. Being in nascent stage, Mobile Cloud Computing has privacy and security issues which deter the users from adopting this technology. This review paper throws light on privacy and security issues of Mobile Cloud Computing.",
"title": ""
},
{
"docid": "dba5777004cf43d08a58ef3084c25bd3",
"text": "This paper investigates the problem of automatic humour recognition, and provides and in-depth analysis of two of the most frequently observ ed features of humorous text: human-centeredness and negative polarity. T hrough experiments performed on two collections of humorous texts, we show that th ese properties of verbal humour are consistent across different data s ets.",
"title": ""
}
] |
scidocsrr
|
c8fa010fab778c41682fd01a07f9433f
|
Size Estimation of Cloud Migration Projects with Cloud Migration Point (CMP)
|
[
{
"docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb",
"text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.",
"title": ""
},
{
"docid": "37679e0fdb6ba2a8629cc7792e2df17e",
"text": "This presentation summarizes the results of experiments conducted over the past year to empirically validate extensions made in an attempt to use the COCOMO I1 Early Design model to accurately estimate web development effort and duration. The presentation starts by summarizing the challenges associated with estimating resources for web-based developments. Next, it describes a new sizing metric, web objects, and an adaptation of the Early Design model. WEBMO (Web Model) developed to meet these challenges. Both the size metric and model adaptation have been developed to address unique estimating issues identified as data from more than 40 projects was collected, normalized and analyzed in order to get a handle on the resources needed for quick-to-market software developments. The presentation concludes by discussing lessons learned from the effort and the next steps.",
"title": ""
}
] |
[
{
"docid": "1a393c0789f4dddab690ec65d145424d",
"text": "INTRODUCTION: Microneedling procedures are growing in popularity for a wide variety of skin conditions. This paper comprehensively reviews the medical literature regarding skin needling efficacy and safety in all skin types and in multiple dermatologic conditions. METHODS: A PubMed literature search was conducted in all languages without restriction and bibliographies of relevant articles reviewed. Search terms included: \"microneedling,\" \"percutaneous collagen induction,\" \"needling,\" \"skin needling,\" and \"dermaroller.\" RESULTS: Microneedling is most commonly used for acne scars and cosmetic rejuvenation, however, treatment benefit has also been seen in varicella scars, burn scars, keloids, acne, alopecia, and periorbital melanosis, and has improved flap and graft survival, and enhanced transdermal delivery of topical products. Side effects were mild and self-limited, with few reports of post-inflammatory hyperpigmentation, and isolated reports of tram tracking, facial allergic granuloma, and systemic hypersensitivity. DISCUSS: Microneedling represents a safe, cost-effective, and efficacious treatment option for a variety of dermatologic conditions in all skin types. More double-blinded, randomized, controlled trials are required to make more definitive conclusions. J Drugs Dermatol. 2017;16(4):308-314..",
"title": ""
},
{
"docid": "bdb49f702123031d2ee935a387c9888e",
"text": "Standard state-machine replication involves consensus on a sequence of totally ordered requests through, for example, the Paxos protocol. Such a sequential execution model is becoming outdated on prevalent multi-core servers. Highly concurrent executions on multi-core architectures introduce non-determinism related to thread scheduling and lock contentions, and fundamentally break the assumption in state-machine replication. This tension between concurrency and consistency is not inherent because the total-ordering of requests is merely a simplifying convenience that is unnecessary for consistency. Concurrent executions of the application can be decoupled with a sequence of consensus decisions through consensus on partial-order traces, rather than on totally ordered requests, that capture the non-deterministic decisions in one replica execution and to be replayed with the same decisions on others. The result is a new multi-core friendly replicated state-machine framework that achieves strong consistency while preserving parallelism in multi-thread applications. On 12-core machines with hyper-threading, evaluations on typical applications show that we can scale with the number of cores, achieving up to 16 times the throughput of standard replicated state machines.",
"title": ""
},
{
"docid": "f7aac91b892013cfdc1302890cb7a263",
"text": "We study the problem of learning a generalizable action policy for an intelligent agent to actively approach an object of interest in indoor environment solely from its visual inputs. While scene-driven or recognition-driven visual navigation has been widely studied, prior efforts suffer severely from the limited generalization capability. In this paper, we first argue the object searching task is environment dependent while the approaching ability is general. To learn a generalizable approaching policy, we present a novel solution dubbed as GAPLE which adopts two channels of visual features: depth and semantic segmentation, as the inputs to the policy learning module. The empirical studies conducted on the House3D dataset as well as on a physical platform in a real world scenario validate our hypothesis, and we further provide indepth qualitative analysis.",
"title": ""
},
{
"docid": "2633bfb54b09ec28d4e123199a1ddb37",
"text": "Software complexity has increased the need for automated software testing. Most research on automating testing, however, has focused on creating test input data. While careful selection of input data is necessary to reach faulty states in a system under test, test oracles are needed to actually detect failures. In this work, we describe Dodona, a system that supports the generation of test oracles. Dodona ranks program variables based on the interactions and dependencies observed between them during program execution. Using this ranking, Dodona proposes a set of variables to be monitored, that can be used by engineers to construct assertion-based oracles. Our empirical study of Dodona reveals that it is more effective and efficient than the current state-of-the-art approach for generating oracle data sets, and can often yield oracles that are almost as effective as oracles hand-crafted by engineers without support.",
"title": ""
},
{
"docid": "160a27e958b5e853efb090f93bf006e8",
"text": "Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.",
"title": ""
},
{
"docid": "dc5693b92b0c91ef3e9239da9fd089d9",
"text": "This paper surveys approaches and up-to-date information of RDF data management and then categorizes them into four main RDF storages. Then, the survey restricts the discussion to those methods that solve RDF data management using a RDBMS, since it gives better performance and query optimization as a result of the large quantity of work required to induce relational query efficiency and also the scalability of its storage comes into play, with respect to scalability and various characteristics of performance.",
"title": ""
},
{
"docid": "4b09424630d5e27f1ed32b5798674595",
"text": "Tampering detection has been increasingly attracting attention in the field of digital forensics. As a popular nonlinear smoothing filter, median filtering is often used as a post-processing operation after image forgeries such as copy-paste forgery (including copy-move and image splicing), which is of particular interest to researchers. To implement the blind detection of median filtering, this paper proposes a novel approach based on a frequency-domain feature coined the annular accumulated points (AAP). Experimental results obtained on widely used databases, which consists of various real-world photos, show that the proposed method achieves outstanding performance in distinguishing median-filtered images from original images or images that have undergone other types of manipulations, especially in the scenarios of low resolution and JPEG compression with a low quality factor. Moreover, our approach remains reliable even when the feature dimension decreases to 5, which is significant to save the computing time required for classification, demonstrating its great advantage to be applied in real-time processing of big multimedia data.",
"title": ""
},
{
"docid": "06d05d4cbfd443d45993d6cc98ab22cb",
"text": "Genetic deficiency of ectodysplasin A (EDA) causes X-linked hypohidrotic ectodermal dysplasia (XLHED), in which the development of sweat glands is irreversibly impaired, an condition that can lead to life-threatening hyperthermia. We observed normal development of mouse fetuses with Eda mutations after they had been exposed in utero to a recombinant protein that includes the receptor-binding domain of EDA. We administered this protein intraamniotically to two affected human twins at gestational weeks 26 and 31 and to a single affected human fetus at gestational week 26; the infants, born in week 33 (twins) and week 39 (singleton), were able to sweat normally, and XLHED-related illness had not developed by 14 to 22 months of age. (Funded by Edimer Pharmaceuticals and others.).",
"title": ""
},
{
"docid": "d2cbf33cdd8fcc051fbc6ed53a70cdc0",
"text": "is book focuses on the core question of the necessary architectural support provided by hardware to efficiently run virtual machines, and of the corresponding design of the hypervisors that run them. Virtualization is still possible when the instruction set architecture lacks such support, but the hypervisor remains more complex and must rely on additional techniques. Despite the focus on architectural support in current architectures, some historical perspective is necessary to appropriately frame the problem. e first half of the book provides the historical perspective of the theoretical framework developed four decades ago by Popek and Goldberg. It also describes earlier systems that enabled virtualization despite the lack of architectural support in hardware. As is often the case, theory defines a necessary—but not sufficient—set of features, and modern architectures are the result of the combination of the theoretical framework with insights derived from practical systems. e second half of the book describes state-of-the-art support for virtualization in both x86-64 and ARM processors. is book includes an in-depth description of the CPU, memory, and I/O virtualization of these two processor architectures, as well as case studies on the Linux/KVM, VMware, and Xen hypervisors. It concludes with a performance comparison of virtualization on current-generation x86and ARM-based systems across multiple hypervisors.",
"title": ""
},
{
"docid": "da989da66f8c2019adf49eae97fc2131",
"text": "Psychedelic drugs are making waves as modern trials support their therapeutic potential and various media continue to pique public interest. In this opinion piece, we draw attention to a long-recognised component of the psychedelic treatment model, namely ‘set’ and ‘setting’ – subsumed here under the umbrella term ‘context’. We highlight: (a) the pharmacological mechanisms of classic psychedelics (5-HT2A receptor agonism and associated plasticity) that we believe render their effects exceptionally sensitive to context, (b) a study design for testing assumptions regarding positive interactions between psychedelics and context, and (c) new findings from our group regarding contextual determinants of the quality of a psychedelic experience and how acute experience predicts subsequent long-term mental health outcomes. We hope that this article can: (a) inform on good practice in psychedelic research, (b) provide a roadmap for optimising treatment models, and (c) help tackle unhelpful stigma still surrounding these compounds, while developing an evidence base for long-held assumptions about the critical importance of context in relation to psychedelic use that can help minimise harms and maximise potential benefits.",
"title": ""
},
{
"docid": "06b9f83845f3125272115894676b5e5d",
"text": "For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy alignment algorithm with particularly good performance and show that it computes the same alignment as does a certain dynamic programming algorithm, while executing over 10 times faster on appropriate data. An implementation of this algorithm is currently used in a program that assembles the UniGene database at the National Center for Biotechnology Information.",
"title": ""
},
{
"docid": "0930ec4162eec816379ca24808768ddd",
"text": "Cloud-integrated Internet of Things (IoT) is emerging as the next-generation service platform that enables smart functionality worldwide. IoT applications such as smart grid and power systems, e-health, and body monitoring applications along with large-scale environmental and industrial monitoring are increasingly generating large amounts of data that can conveniently be analyzed through cloud service provisioning. However, the nature of these applications mandates the use of secure and privacy-preserving implementation of services that ensures the integrity of data without any unwarranted exposure. This article explores the unique challenges and issues within this context of enabling secure cloud-based data analytics for the IoT. Three main applications are discussed in detail, with solutions outlined based on the use of fully homomorphic encryption systems to achieve data security and privacy over cloud-based analytical phases. The limitations of existing technologies are discussed and models proposed with regard to achieving high efficiency and accuracy in the provisioning of analytic services for encrypted data over a cloud platform.",
"title": ""
},
{
"docid": "c89ca701d947ba6594be753470f152ac",
"text": "The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different “images of interest.” We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques.",
"title": ""
},
{
"docid": "9a3d90ecbd12f6ef5ee9348c4af90d0b",
"text": "The gene encoding the forkhead box transcription factor, FOXP2, is essential for developing the full articulatory power of human language. Mutations of FOXP2 cause developmental verbal dyspraxia (DVD), a speech and language disorder that compromises the fluent production of words and the correct use and comprehension of grammar. FOXP2 patients have structural and functional abnormalities in the striatum of the basal ganglia, which also express high levels of FOXP2. Since human speech and learned vocalizations in songbirds bear behavioral and neural parallels, songbirds provide a genuine model for investigating the basic principles of speech and its pathologies. In zebra finch Area X, a basal ganglia structure necessary for song learning, FoxP2 expression increases during the time when song learning occurs. Here, we used lentivirus-mediated RNA interference (RNAi) to reduce FoxP2 levels in Area X during song development. Knockdown of FoxP2 resulted in an incomplete and inaccurate imitation of tutor song. Inaccurate vocal imitation was already evident early during song ontogeny and persisted into adulthood. The acoustic structure and the duration of adult song syllables were abnormally variable, similar to word production in children with DVD. Our findings provide the first example of a functional gene analysis in songbirds and suggest that normal auditory-guided vocal motor learning requires FoxP2.",
"title": ""
},
{
"docid": "9009f20f639de20d28ba01fac60db9d0",
"text": "We propose strategies for selecting a good neural network architecture for modeling any spe-ciic data set. Our approach involves eeciently searching the space of possible architectures and selecting a \\best\" architecture based on estimates of generalization performance. Since an exhaustive search over the space of architectures is computationally infeasible, we propose heuristic strategies which dramatically reduce the search complexity. These employ directed search algorithms, including selecting the number of nodes via sequential network construction (SNC), sensitivity based pruning (SBP) of inputs, and optimal brain damage (OBD) pruning for weights. A selection criterion, the estimated generalization performance or prediction risk, is used to guide the heuristic search and to choose the nal network. Both predicted squared error (PSE) and nonlinear cross{validation (NCV) are used for estimating the prediction risk from the available data. We apply these heuristic search and prediction risk estimation techniques to the problem of predicting corporate bond ratings. This problem is very attractive as a case study, since it is characterized by a limited set of data and by the lack of a complete a priori model which could be used to impose a structure to the network architecture.",
"title": ""
},
{
"docid": "944f3e499b77e7ed50c74a786f9e218b",
"text": "This paper describes EMBER: a labeled benchmark dataset for training machine learning models to statically detect malicious Windows portable executable files. The dataset includes features extracted from 1.1M binary files: 900K training samples (300K malicious, 300K benign, 300K unlabeled) and 200K test samples (100K malicious, 100K benign). To accompany the dataset, we also release open source code for extracting features from additional binaries so that additional sample features can be appended to the dataset. This dataset fills a void in the information security machine learning community: a benign/malicious dataset that is large, open and general enough to cover several interesting use cases. We enumerate several use cases that we considered when structuring the dataset. Additionally, we demonstrate one use case wherein we compare a baseline gradient boosted decision tree model trained using LightGBM with default settings to MalConv, a recently published end-to-end (featureless) deep learning model for malware detection. Results show that even without hyperparameter optimization, the baseline EMBER model outperforms MalConv. The authors hope that the dataset, code and baseline model provided by EMBER will help invigorate machine learning research for malware detection, in much the same way that benchmark datasets have advanced computer vision research.",
"title": ""
},
{
"docid": "ba920ed04c20125f5975519367bebd02",
"text": "Tensor and matrix factorization methods have attracted a lot of attention recently thanks to their successful applications to information extraction, knowledge base population, lexical semantics and dependency parsing. In the first part, we will first cover the basics of matrix and tensor factorization theory and optimization, and then proceed to more advanced topics involving convex surrogates and alternative losses. In the second part we will discuss recent NLP applications of these methods and show the connections with other popular methods such as transductive learning, topic models and neural networks. The aim of this tutorial is to present in detail applied factorization methods, as well as to introduce more recently proposed methods that are likely to be useful to NLP applications.",
"title": ""
},
{
"docid": "a5c9de4127df50d495c7372b363691cf",
"text": "This book is an accompaniment to the computer software package mathStatica (which runs as an add-on to Mathematica). The book comes with two CD-ROMS: mathStatica, and a 30-day trial version of Mathematica 4.1. The mathStatica CD-ROM includes an applications pack for doing mathematical statistics, custom Mathematica palettes and an electronic version of the book that is identical to the printed text, but can be used interactively to generate animations of some of the book's figures (e.g. as a parameter is varied). (I found this last feature particularly valuable.) MathStatica has statistical operators for determining expectations (and hence characteristic functions, for example) and probabilities, for finding the distributions of transformations of random variables and generally for dealing with the kinds of problems and questions that arise in mathematical statistics. Applications include estimation, curve-fitting, asymptotics, decision theory and moment conversion formulae (e.g. central to cumulant). To give an idea of the coverage of the book: after an introductory chapter, there are three chapters on random variables, then chapters on systems of distributions (e.g. Pearson), multivariate distributions, moments, asymptotic theory, decision theory and then three chapters on estimation. There is an appendix, which deals with technical Mathematica details. What distinguishes mathStatica from statistical packages such as S-PLUS, R, SPSS and SAS is its ability to deal with the algebraic/symbolic problems that are the main concern of mathematical statistics. This is, of course, because it is based on Mathematica, and this is also the reason that it has a note–book interface (which enables one to incorporate text, equations and pictures into a single line), and why arbitrary-precision calculations can be performed. According to the authors, 'this book can be used as a course text in mathematical statistics or as an accompaniment to a more traditional text'. Assumed knowledge includes preliminary courses in statistics, probability and calculus. The emphasis is on problem solving. The material is supposedly pitched at the same level as Hogg and Craig (1995). However some topics are treated in much more depth than in Hogg and Craig (characteristic functions for instance, which rate less than one page in Hogg and Craig). Also, the coverage is far broader than that of Hogg and Craig; additional topics include for instance stable distributions, cumulants, Pearson families, Gram-Charlier expansions and copulae. Hogg and Craig can be used as a textbook for a third-year course in mathematical statistics in some Australian universities , whereas there is …",
"title": ""
},
{
"docid": "9e466a4414125c0b2a41565eaeffd602",
"text": "In this work, we present a part-based grasp planning approach that is capable of generating grasps that are applicable to multiple familiar objects. We show how object models can be decomposed according to their shape and local volumetric information. The resulting object parts are labeled with semantic information and used for generating robotic grasping information. We investigate how the transfer of such grasping information to familiar objects can be achieved and how the transferability of grasps can be measured. We show that the grasp transferability measure provides valuable information about how successful planned grasps can be applied to novel object instances of the same object category. We evaluate the approach in simulation, by applying it to multiple object categories and determine how successful the planned grasps can be transferred to novel, but familiar objects. In addition, we present a use case on the humanoid robot ARMAR-III.",
"title": ""
},
{
"docid": "2901aaa10d8e7aa23f372f4e715686d5",
"text": "This article describes a model of communication known as crisis and emergency risk communication (CERC). The model is outlined as a merger of many traditional notions of health and risk communication with work in crisis and disaster communication. The specific kinds of communication activities that should be called for at various stages of disaster or crisis development are outlined. Although crises are by definition uncertain, equivocal, and often chaotic situations, the CERC model is presented as a tool health communicators can use to help manage these complex events.",
"title": ""
}
] |
scidocsrr
|
97efac0e97a59d06473e45952a3ef36a
|
Control of Robotic Mobility-On-Demand Systems: a Queueing-Theoretical Perspective
|
[
{
"docid": "47027e5df955bc7c8fa64b0753a01d9f",
"text": "Recent years have witnessed great advancements in the science and technology of autonomy, robotics, and networking. This paper surveys recent concepts and algorithms for dynamic vehicle routing (DVR), that is, for the automatic planning of optimal multivehicle routes to perform tasks that are generated over time by an exogenous process. We consider a rich variety of scenarios relevant for robotic applications. We begin by reviewing the basic DVR problem: demands for service arrive at random locations at random times and a vehicle travels to provide on-site service while minimizing the expected wait time of the demands. Next, we treat different multivehicle scenarios based on different models for demands (e.g., demands with different priority levels and impatient demands), vehicles (e.g., motion constraints, communication, and sensing capabilities), and tasks. The performance criterion used in these scenarios is either the expected wait time of the demands or the fraction of demands serviced successfully. In each specific DVR scenario, we adopt a rigorous technical approach that relies upon methods from queueing theory, combinatorial optimization, and stochastic geometry. First, we establish fundamental limits on the achievable performance, including limits on stability and quality of service. Second, we design algorithms, and provide provable guarantees on their performance with respect to the fundamental limits.",
"title": ""
}
] |
[
{
"docid": "27d073103354137ea71801f37422b3a9",
"text": "This paper presents Sniper, an automated memory leak detection tool for C/C++ production software. To track the staleness of allocated memory (which is a clue to potential leaks) with little overhead (mostly <3%), Sniper leverages instruction sampling using performance monitoring units available in commodity processors. It also offloads the time- and space-consuming analyses, and works on the original software without modifying the underlying memory allocator; it neither perturbs the application execution nor increases the heap size. The Sniper can even deal with multithreaded applications with very low overhead. In particular, it performs a statistical analysis, which views memory leaks as anomalies, for automated and systematic leak determination. Consequently, it accurately detected real-world memory leaks with no false positive, and achieved an F-measure of 81% on average for 17 benchmarks stress-tested with various memory leaks.",
"title": ""
},
{
"docid": "3f06fc0b50a1de5efd7682b4ae9f5a46",
"text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. The results show that our system produces more realistically proportioned line drawings.",
"title": ""
},
{
"docid": "69ad93c7b6224321d69456c23a4185ce",
"text": "Modeling fashion compatibility is challenging due to its complexity and subjectivity. Existing work focuses on predicting compatibility between product images (e.g. an image containing a t-shirt and an image containing a pair of jeans). However, these approaches ignore real-world ‘scene’ images (e.g. selfies); such images are hard to deal with due to their complexity, clutter, variations in lighting and pose (etc.) but on the other hand could potentially provide key context (e.g. the user’s body type, or the season) for making more accurate recommendations. In this work, we propose a new task called ‘Complete the Look’, which seeks to recommend visually compatible products based on scene images. We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images. Our approach measures compatibility both globally and locally via CNNs and attention mechanisms. Extensive experiments show that our method achieves significant performance gains over alternative systems. Human evaluation and qualitative analysis are also conducted to further understand model behavior. We hope this work could lead to useful applications which link large corpora of real-world scenes with shoppable products.",
"title": ""
},
{
"docid": "77372685b51931332bfb843f20c9d2ea",
"text": "This paper presents techniques for high-voltage converters to achieve high power efficiency at high switching frequency. A quasi-square-wave zero-voltage switching isolated three-level half-bridge architecture is proposed to minimize the converter switching loss under high-voltage high-frequency conditions. A synchronous three-level gate driver with dynamic dead-time control is also developed to ensure reliability of all eGaN power FETs, automatically generate appropriate dead-time for all power FETs to achieve ZVS with minimal reverse bias behavior, and provide fast propagation delays for high-frequency converter operation. Implemented in a 0.5-μm HV CMOS process, the proposed gate driver achieves 15ns propagation delays and enables a 100V 35W isolated three-level half-bridge converter to achieve the peak power efficiencies of 95.2% and 90.7% at 1MHz and 2MHz, respectively.",
"title": ""
},
{
"docid": "9e648d8a00cb82489e1b2cd0991f2fbd",
"text": "In this work, we propose and evaluate generic hardware countermeasures against DPA attacks for recent FPGA devices. The proposed set of FPGA-specific countermeasures can be combined to resist a large variety of first-order DPA attacks, even with 100 million recorded power traces. This set includes generic and resource-efficient countermeasures for on-chip noise generation, random-data processing delays and S-box scrambling using dual-ported block memories. In particular, it is possible to build many of these countermeasures into a single IP-core or hard macro that then provides basic protection for any cryptographic implementation just by its inclusion in the design process – what is particularly useful for engineers with no or little background on IT security and SCA attacks.",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "4b1c1194a9292adf76452eda03f7f67f",
"text": "Fin-type field-effect transistors (FinFETs) are promising substitutes for bulk CMOS at the nanoscale. FinFETs are double-gate devices. The two gates of a FinFET can either be shorted for higher perfomance or independently controlled for lower leakage or reduced transistor count. This gives rise to a rich design space. This chapter provides an introduction to various interesting FinFET logic design styles, novel circuit designs, and layout considerations.",
"title": ""
},
{
"docid": "a7b875e88042f6fc064b957eefe29c77",
"text": "This paper presents an extension of neural machine translation (NMT) model to incorporate additional word-level linguistic factors. Adding such linguistic factors may be of great benefits to learning of NMT models, potentially reducing language ambiguity or alleviating data sparseness problem (Koehn and Hoang, 2007). We explore different linguistic annotations at the word level, including: lemmatization, word clusters, Part-ofSpeech tags, and labeled dependency relations. We then propose different neural attention architectures to integrate these additional factors into the NMT framework. Evaluating on translating between English and German in two directions with a low resource setting in the domain of TED talks, we obtain promising results in terms of both perplexity reductions and improved BLEU scores over baseline methods.",
"title": ""
},
{
"docid": "ddb4e010d85eb8988bbe58331a078b89",
"text": "The task of data fusion is to identify the true values of data items (e.g., the true date of birth for Tom Cruise) among multiple observed values drawn from different sources (e.g., Web sites) of varying (and unknown) reliability. A recent survey [20] has provided a detailed comparison of various fusion methods on Deep Web data. In this paper, we study the applicability and limitations of different fusion techniques on a more challenging problem: knowledge fusion. Knowledge fusion identifies true subject-predicateobject triples extracted by multiple information extractors from multiple information sources. These extractors perform the tasks of entity linkage and schema alignment, thus introducing an additional source of noise that is quite different from that traditionally considered in the data fusion literature, which only focuses on factual errors in the original sources. We adapt state-of-the-art data fusion techniques and apply them to a knowledge base with 1.6B unique knowledge triples extracted by 12 extractors from over 1B Web pages, which is three orders of magnitude larger than the data sets used in previous data fusion papers. We show great promise of the data fusion approaches in solving the knowledge fusion problem, and suggest interesting research directions through a detailed error analysis of the methods.",
"title": ""
},
{
"docid": "6547b8d856a742925936ae20bdbf3543",
"text": "In this work we present a visual servoing approach that enables a humanoid robot to robustly execute dual arm grasping and manipulation tasks. Therefore the target object(s) and both hands are tracked alternately and a combined open-/ closed-loop controller is used for positioning the hands with respect to the target(s). We address the perception system and how the observable workspace can be increased by using an active vision system on a humanoid head. Furthermore a control framework for reactive positioning of both hands using position based visual servoing is presented, where the sensor data streams coming from the vision system, the joint encoders and the force/torque sensors are fused and joint velocity values are generated. This framework can be used for bimanual grasping as well as for two handed manipulations which is demonstrated with the humanoid robot Armar-III that executes grasping and manipulation tasks in a kitchen environment.",
"title": ""
},
{
"docid": "5cbd331652b69714bc4ff0eeacc8f85a",
"text": "A survey was conducted from May to Oct of 2011 of the parasitoid community of the imported cabbageworm, Pieris rapae (Lepidoptera: Pieridae), in cole crops in part of the eastern United States and southeastern Canada. The findings of our survey indicate that Cotesia rubecula (Hymenoptera: Braconidae) now occurs as far west as North Dakota and has become the dominant parasitoid of P. rapae in the northeastern and north central United States and adjacent parts of southeastern Canada, where it has displaced the previously common parasitoid Cotesia glomerata (Hymenoptera: Braconidae). Cotesia glomerata remains the dominant parasitoid in the mid-Atlantic states, from Virginia to North Carolina and westward to southern Illinois, below latitude N 38° 48’. This pattern suggests that the released populations of C. rubecula presently have a lower latitudinal limit south of which they are not adapted.",
"title": ""
},
{
"docid": "cb4966a838bbefccbb1b74e5f541ce76",
"text": "Theories of human behavior are an important but largely untapped resource for software engineering research. They facilitate understanding of human developers’ needs and activities, and thus can serve as a valuable resource to researchers designing software engineering tools. Furthermore, theories abstract beyond specific methods and tools to fundamental principles that can be applied to new situations. Toward filling this gap, we investigate the applicability and utility of Information Foraging Theory (IFT) for understanding information-intensive software engineering tasks, drawing upon literature in three areas: debugging, refactoring, and reuse. In particular, we focus on software engineering tools that aim to support information-intensive activities, that is, activities in which developers spend time seeking information. Regarding applicability, we consider whether and how the mathematical equations within IFT can be used to explain why certain existing tools have proven empirically successful at helping software engineers. Regarding utility, we applied an IFT perspective to identify recurring design patterns in these successful tools, and consider what opportunities for future research are revealed by our IFT perspective.",
"title": ""
},
{
"docid": "096989bf7938fbc33a584ac99eacfcf1",
"text": "Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form “the existence of item A implies the existence of item B.” However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provide some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining casual relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making.",
"title": ""
},
{
"docid": "0249db106163559e34ff157ad6d45bf5",
"text": "We present an interpolation-based planning and replanning algorithm for generating low-cost paths through uniform and nonuniform resolution grids. Most grid-based path planners use discrete state transitions that artificially constrain an agent’s motion to a small set of possible headings e.g., 0, /4, /2, etc. . As a result, even “optimal” gridbased planners produce unnatural, suboptimal paths. Our approach uses linear interpolation during planning to calculate accurate path cost estimates for arbitrary positions within each grid cell and produce paths with a range of continuous headings. Consequently, it is particularly well suited to planning low-cost trajectories for mobile robots. In this paper, we introduce a version of the algorithm for uniform resolution grids and a version for nonuniform resolution grids. Together, these approaches address two of the most significant shortcomings of grid-based path planning: the quality of the paths produced and the memory and computational requirements of planning over grids. We demonstrate our approaches on a number of example planning problems, compare them to related algorithms, and present several implementations on real robotic systems. © 2006 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "cb5fd0e9488c9784f5674cee823e5c8d",
"text": "The role of glutamate in quantal release at the cytoneural junction was examined by measuring mEPSPs and afferent spikes at the posterior canal in the intact frog labyrinth. Release was enhanced by exogenous glutamate, or dl-TBOA, a blocker of glutamate reuptake. Conversely, drugs acting on ionotropic glutamate receptors did not affect release; the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPA-R) blocker CNQX decreased mEPSP size in a dose-dependent manner; the NMDA-R blocker d-AP5 at concentrations <200 µM did not affect mEPSP size, either in the presence or absence of Mg and glycine. In isolated hair cells, glutamate did not modify Ca currents. Instead, it systematically reduced the compound delayed potassium current, IKD, whereas the metabotropic glutamate receptor (mGluR)-II inverse agonist, (2S)-2-amino-2-[(1S,2S)-2-carboxycycloprop-1-yl]-3-(xanth-9-yl)propanoic acid (LY341495), increased it. Given mGluR-II decrease cAMP production, these finding are consistent with the reported sensitivity of IKD to protein kinase A (PKA)-mediated phosphorylation. LY341495 also enhanced transmitter release, presumably through phosphorylation-mediated facilitation of the release machinery. The observed enhancement of release by glutamate confirms previous literature data, and can be attributed to activation of mGluR-I that promotes Ca release from intracellular stores. Glutamate-induced reduction in the repolarizing IKD may contribute to facilitation of release. Overall, glutamate exerts both a positive feedback action on mGluR-I, through activation of the phospholipase C (PLC)/IP3 path, and the negative feedback, by interfering with substrate phosphorylation through Gi/0-coupled mGluRs-II/III. The positive feedback prevails, which may explain the increase in overall rates of release observed during mechanical stimulation (symmetrical in the excitatory and inhibitory directions). The negative feedback may protect the junction from over-activation.",
"title": ""
},
{
"docid": "a1c73823bffd44ee4784d5bd77bd4f04",
"text": "People of the modern world are using social network Web sites to communicate with others either known or unknown, for getting opinions of others and giving their opinions to others. The post, weblogs, effects or affects human mind, at least for some time. These posts take a part in choosing their decisions and play an important role. But the information present in the post is either information or just misinformation, i.e., just a rumor. People are confused to distinguish these posts in either a correct information or misinformation. It is important to decide whether this is information or just a rumor because it may cause a support of the wrong decision of the whole majority. In this paper, a mathematical framework is presented related to these matters. Firstly, we proposed a mathematical model of news spreading from some posts displayed in an online social network. The development of mathematical models of news propagation uses the epidemiological modeling technique. Then, we proposed criteria of rumor detection and verification for the model. In the case of rumor, a revised model is proposed with media awareness as a control strategy for reducing the rumor spreading.",
"title": ""
},
{
"docid": "010926d088cf32ba3fafd8b4c4c0dedf",
"text": "The number and the size of spatial databases, e.g. for geomarketing, traffic control or environmental studies, are rapidly growing which results in an increasing need for spatial data mining. In this paper, we present new algorithms for spatial characterization and spatial trend analysis. For spatial characterization it is important that class membership of a database object is not only determined by its non-spatial attributes but also by the attributes of objects in its neighborhood. In spatial trend analysis, patterns of change of some non-spatial attributes in the neighborhood of a database object are determined. We present several algorithms for these tasks. These algorithms were implemented within a general framework for spatial data mining providing a small set of database primitives on top of a commercial spatial database management system. A performance evaluation using a real geographic database demonstrates the effectiveness of the proposed algorithms. Furthermore, we show how the algorithms can be combined to discover even more interesting spatial knowledge.",
"title": ""
},
{
"docid": "d40aa76e76c44da4c6237f654dcdab45",
"text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.",
"title": ""
},
{
"docid": "a144b5969c30808f0314218248c48ed6",
"text": "A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.",
"title": ""
},
{
"docid": "7e93c570c957a24ff4eb2132d691a8f1",
"text": "Most of video-surveillance based applications use a foreground extraction algorithm to detect interest objects from videos provided by static cameras. This paper presents a benchmark dataset and evaluation process built from both synthetic and real videos, used in the BMC workshop (Background Models Challenge). This dataset focuses on outdoor situations with weather variations such as wind, sun or rain. Moreover, we propose some evaluation criteria and an associated free software to compute them from several challenging testing videos. The evaluation process has been applied for several state of the art algorithms like gaussian mixture models or codebooks.",
"title": ""
}
] |
scidocsrr
|
1a9d2e2d9793dc4608f79f421cd806a0
|
Privacy, identity and security in ambient intelligence: A scenario analysis
|
[
{
"docid": "66da54da90bbd252386713751cec7c67",
"text": "A cyber world (CW) is a digitized world created on cyberspaces inside computers interconnected by networks including the Internet. Following ubiquitous computers, sensors, e-tags, networks, information, services, etc., is a road towards a smart world (SW) created on both cyberspaces and real spaces. It is mainly characterized by ubiquitous intelligence or computational intelligence pervasion in the physical world filled with smart things. In recent years, many novel and imaginative researcheshave been conducted to try and experiment a variety of smart things including characteristic smart objects and specific smart spaces or environments as well as smart systems. The next research phase to emerge, we believe, is to coordinate these diverse smart objects and integrate these isolated smart spaces together into a higher level of spaces known as smart hyperspace or hyper-environments, and eventually create the smart world. In this paper, we discuss the potential trends and related challenges toward the smart world and ubiquitous intelligence from smart things to smart spaces and then to smart hyperspaces. Likewise, we show our efforts in developing a smart hyperspace of ubiquitous care for kids, called UbicKids.",
"title": ""
}
] |
[
{
"docid": "8b773175bc7c1830958373dd45f56b6c",
"text": "Code-Mixing (CM) is a natural phenomenon observed in many multilingual societies and is becoming the preferred medium of expression and communication in online and social media fora. In spite of this, current Question Answering (QA) systems do not support CM and are only designed to work with a single interaction language. This assumption makes it inconvenient for multi-lingual users to interact naturally with the QA system especially in scenarios where they do not know the right word in the target language. In this paper, we present WebShodh an end-end web-based Factoid QA system for CM languages. We demonstrate our system with two CM language pairs: Hinglish (Matrix language: Hindi, Embedded language: English) and Tenglish (Matrix language: Telugu, Embedded language: English). Lack of language resources such as annotated corpora, POS taggers or parsers for CM languages poses a huge challenge for automated processing and analysis. In view of this resource scarcity, we only assume the existence of bi-lingual dictionaries from the matrix languages to English and use it for lexically translating the question into English. Later, we use this loosely translated question for our downstream analysis such as Answer Type(AType) prediction, answer retrieval and ranking. Evaluation of our system reveals that we achieve an MRR of 0.37 and 0.32 for Hinglish and Tenglish respectively. We hosted this system online and plan to leverage it for collecting more CM questions and answers data for further improvement.",
"title": ""
},
{
"docid": "2683c65d587e8febe45296f1c124e04d",
"text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.",
"title": ""
},
{
"docid": "167e807e546e437d3ad1c8790a849cba",
"text": "One-way accumulators, introduced by Benaloh and de Mare, can be used to accumulate a large number of values into a single one, which can then be used to authenticate every input value without the need to transmit the others. However, the one-way property does is not suucient for all applications. In this paper, we generalize the deenition of accumulators and deene and construct a collision-free subtype. As an application, we construct a fail-stop signature scheme in which many one-time public keys are accumulated into one short public key. In contrast to previous constructions with tree authentication, the length of both this public key and the signatures can be independent of the number of messages that can be signed.",
"title": ""
},
{
"docid": "1176abf11f866dda3a76ce080df07c05",
"text": "Google Flu Trends can detect regional outbreaks of influenza 7-10 days before conventional Centers for Disease Control and Prevention surveillance systems. We describe the Google Trends tool, explain how the data are processed, present examples, and discuss its strengths and limitations. Google Trends shows great promise as a timely, robust, and sensitive surveillance system. It is best used for surveillance of epidemics and diseases with high prevalences and is currently better suited to track disease activity in developed countries, because to be most effective, it requires large populations of Web search users. Spikes in search volume are currently hard to interpret but have the benefit of increasing vigilance. Google should work with public health care practitioners to develop specialized tools, using Google Flu Trends as a blueprint, to track infectious diseases. Suitable Web search query proxies for diseases need to be established for specialized tools or syndromic surveillance. This unique and innovative technology takes us one step closer to true real-time outbreak surveillance.",
"title": ""
},
{
"docid": "477a8601e824139829568a154934b6cd",
"text": "Understanding noun compounds is the challenge that drew me to study computational linguistics. Think about how just two words, side by side, evoke a whole story: cacao seeds evokes the tree on which the cacao seeds grow, and to understand cacao powder we need to also imagine the seeds of the cacao tree that are crushed to powder. What conjures up these concepts of tree and grow, and seeds and crush, which are not explicitly present in the written word but are essential for our complete understanding of the compounds? The mechanisms by which we make sense of noun compounds can illuminate how we understand language more generally. And because the human mind is so wily as to provide interpretations even when we do not ask it to, I have always found it useful to study these phenomena of language on the computer, because the computer surely does not (yet) have the type of knowledge that must be brought to bear on the problem. If you find these phenomena equally intriguing and puzzling, then you will find this book by Nastase, Nakov, Ó Séaghdga, and Szpakowicz a wonderful summary of past research efforts and a good introduction to the current methods for analyzing semantic relations. To be clear, this book is not only about noun compounds, but explores all types of relations that can hold between what is expressed linguistically as nominal. Such nominals include entities (e.g., Godiva, Belgium) as well as nominals that refer to events (cultivation, roasting) and nominals with complex structure (delicious milk chocolate). In doing so, describing the different semantic relations between chocolate in the 20th century and chocolate in Belgium is within the scope of this book. This is a wise choice as there are then some linguistic cues that will help define and narrow the types of semantic relations (e.g., the prepositions above). Noun compounds are degenerate in the sense that there are few if any overt linguistic cues as to the semantic relations between the nominals.",
"title": ""
},
{
"docid": "e66bc39948ad53767971d444ecff82dd",
"text": "Face processing has several distinctive hallmarks that researchers have attributed either to face-specific mechanisms or to extensive experience distinguishing faces. Here, we examined the face-processing hallmark of selective attention failure--as indexed by the congruency effect in the composite paradigm--in a domain of extreme expertise: chess. Among 27 experts, we found that the congruency effect was equally strong with chessboards and faces. Further, comparing these experts with recreational players and novices, we observed a trade-off: Chess expertise was positively related to the congruency effect with chess yet negatively related to the congruency effect with faces. These and other findings reveal a case of expertise-dependent, facelike processing of objects of expertise and suggest that face and expert-chess recognition share common processes.",
"title": ""
},
{
"docid": "2cbf690c565c6a201d4d8b6bda20b766",
"text": "Visualizations that can handle flat files, or simple table data are most often used in data mining. In this paper we survey most visualizations that can handle more than three dimensions and fit our definition of Table Visualizations. We define Table Visualizations and some additional terms needed for the Table Visualization descriptions. For a preliminary evaluation of some of these visualizations see “Benchmark Development for the Evaluation of Visualization for Data Mining” also included in this volume. Data Sets Used Most of the datasets for the visualization examples are either the automobile or the Iris flower dataset. Nearly every data mining package comes with at least one of these two datasets. The datasets are available UC Irvine Machine Learning Repository [Uci97]. • Iris Plant Flowers – from Fischer 1936, physical measurements from three types of flowers. • Car (Automobile) – data concerning cars manufactured in America, Japan and Europe from 1970 to 1982 Definition of Table Visualizations A two-dimensional table of data is defined by M rows and N columns. A visualization of this data is termed a Table Visualization. In our definition, we define the columns to be the dimensions or the variates (also called fields or attributes), and the rows to be the data records. The data records are sometimes called ndimensional points, or cases. For a more thorough discussion of the table model, see [Car99]. This very general definition only rules out some structured or hierarchical data. In the most general case, a visualization maps certain dimensions to certain features in the visualization. In geographical, scientific, and imaging visualizations, the spatial dimensions are normally assigned to the appropriate X, Y or Z spatial dimension. In a typical information visualization there is no inherent spatial dimension, but quite often the dimension mapped to height and width on the screen has a dominating effect. For example in a scatter plot of four-dimensional data one could map two features to the Xand Y-axis and the other two features to the color and shape of the plotted points. The dimensions assigned to the Xand Y-axis would dominate many aspects of analysis, such as clustering and outlier detection. Some Table Visualizations such as Parallel Coordinates, Survey Plots, or Radviz, treat all of the data dimensions equally. We call these Regular Table Visualizations (RTVs). The data in a Table Visualizations is discrete. The data can be represented by different types, such as integer, real, categorical, nominal, etc. In most visualizations all data is converted to a real type before rendering the visualization. We are concerned with issues that arise from the various types of data, and use the more general term “Table Visualization.” These visualizations can also be called “Array Visualizations” because all the data are of the same type. Table Visualization data is not hierarchical. It does not explicitly contain internal structure or links. The data has a finite size (N and M are bounded). The data can be viewed as M points having N dimensions or features. The order of the table can sometimes be considered another dimension, which is an ordered sequence of integer values from 1 to M. If the table represents points in some other sequence such as a time series, that information should be represented as another column.",
"title": ""
},
{
"docid": "4013515fe0bfae910a4493ff91e4490e",
"text": "This paper presents NeuroChess, a program which learns to play chess from the final outcome of games. NeuroChess learns chess board evaluation functions, represented by artificial neural networks. It integrates inductive neural network learning, temporal differencing, and a variant of explanation-based learning. Performance results illustrate some of the strengths and weaknesses of this approach.",
"title": ""
},
{
"docid": "85d31f3940ee258589615661e596211d",
"text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.",
"title": ""
},
{
"docid": "31c0dc8f0a839da9260bb9876f635702",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
{
"docid": "ff59e2a5aa984dec7805a4d9d55e69e5",
"text": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.",
"title": ""
},
{
"docid": "5916e605ab78bf75925fecbdc55422cd",
"text": "This paper presents a new method for estimating the average heart rate from a foot/ankle worn photoplethysmography (PPG) sensor during fast bike activity. Placing the PPG sensor on the lower half of the body allows more energy to be collected from energy harvesting in order to give a power autonomous sensor node, but comes at the cost of introducing significant motion interference into the PPG trace. We present a normalised least mean square adaptive filter and short-time Fourier transform based algorithm for estimating heart rate in the presence of this motion contamination. Results from 8 subjects show the new algorithm has an average error of 9 beats-per-minute when compared to an ECG gold standard.",
"title": ""
},
{
"docid": "323e7669476aab93735a655e54f6a4a9",
"text": "Monte Carlo Tree Search is a method that depends on decision theory in taking actions/ decisions, when other traditional methods failed on doing so, due to lots of factors such as uncertainty, huge problem domain, or lack in the knowledge base of the problem. Before using this method, several problems remained unsolved including some famous AI games like GO. This method represents a revolutionary technique where a Monte Carlo method has been applied to search tree technique, and proved to be successful in areas thought for a long time as impossible to be solved. This paper highlights some important aspects of this method, and presents some areas where it worked well, as well as enhancements to make it even more powerful.",
"title": ""
},
{
"docid": "9d36947ff5f794942e153c21cdfc3a53",
"text": "It is a well-established fact that corruption is a widespread phenomenon and it is widely acknowledged because of negative impact on economy and society. An important aspect of corruption is that two parties act separately or jointly in order to further their own interests at the expense of society. To strengthen prevent corruption, most of countries have construct special organization. The paper presents a new measure based on introducing game theory as an analytical tool for analyzing the relation between anti-corruption and corruption. Firstly, the paper introduces the corruption situation in China, gives the definition of the game theory and studies government anti-corruption activity through constructing the game theoretic models between anti-corruption and corruption. The relation between supervisor and the anti-corruption will be explained next. A thorough analysis of the mechanism of informant system has been made accordingly in the third part. At last, some suggestions for preventing and fight corruption are put forward.",
"title": ""
},
{
"docid": "8ca60b68f1516d63af36b7ead860686b",
"text": "The automatic patch-based exploit generation problem is: given a program P and a patched version of the program P', automatically generate an exploit for the potentially unknown vulnerability present in P but fixed in P'. In this paper, we propose techniques for automatic patch-based exploit generation, and show that our techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update. Although our techniques may not work in all cases, a fundamental tenant of security is to conservatively estimate the capabilities of attackers. Thus, our results indicate that automatic patch-based exploit generation should be considered practical. One important security implication of our results is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update, may allow attackers who receive the patch first to compromise the significant fraction of vulnerable hosts who have not yet received the patch.",
"title": ""
},
{
"docid": "40c90bf58aae856c7c72bac573069173",
"text": "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a “distilled” policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.",
"title": ""
},
{
"docid": "4c5b74544b1452ffe0004733dbeee109",
"text": "Literary genres are commonly viewed as being defined in terms of content and style. In this paper, we focus on one particular type of content feature, namely lexical expressions of emotion, and investigate the hypothesis that emotion-related information correlates with particular genres. Using genre classification as a testbed, we compare a model that computes lexiconbased emotion scores globally for complete stories with a model that tracks emotion arcs through stories on a subset of Project Gutenberg with five genres. Our main findings are: (a), the global emotion model is competitive with a largevocabulary bag-of-words genre classifier (80 % F1); (b), the emotion arc model shows a lower performance (59 % F1) but shows complementary behavior to the global model, as indicated by a very good performance of an oracle model (94 % F1) and an improved performance of an ensemble model (84 % F1); (c), genres differ in the extent to which stories follow the same emotional arcs, with particularly uniform behavior for anger (mystery) and fear (adventures, romance, humor, science fiction).",
"title": ""
},
{
"docid": "0a35370e6c99e122b8051a977029d77a",
"text": "To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"title": ""
},
{
"docid": "09c042cb8ee06de9dffc4019f781e496",
"text": "High quality rendering and physics based modeling in volume graphics have been limited because intensity based volumetric data do not represent surfaces well. High spatial frequencies due to abrupt intensity changes at object surfaces result in jagged or terraced surfaces in rendered images. The use of a distance-to-closest-surface function to encode object surfaces is proposed. This function varies smoothly across surfaces and hence can be accurately reconstructed from sampled data. The zero value iso surface of the distance map yields the object surface and the derivative of the distance map yields the surface normal. Examples of rendered images are presented along with a new method for calculating distance maps from sampled binary data.",
"title": ""
},
{
"docid": "514b802d266259087a106d5c2c03f39b",
"text": "A substantial increase of photovoltaic (PV) power generators installations has taken place in recent years, due to the increasing efficiency of solar cells as well as the improvements of manufacturing technology of solar panels. These generators are both grid-connected and stand-alone applications. We present an overview of the essential research results. The paper concentrates on the operation and modeling of stand-alone power systems with PV power generators. Systems with PV array-inverter assemblies, operating in the slave-and-master modes, are discussed, and the simulation results obtained using a renewable energy power system modular simulator are presented. These results demonstrate that simulation is an essential step in the system development process and that PV power generators constitute a valuable energy source. They have the ability to balance the energy and supply good power quality. It is demonstrated that when PV array- inverters are operating in the master mode in stand-alone applications, they well perform the task of controlling the voltage and frequency of the power system. The mechanism of switching the master function between the diesel generator and the PV array-inverter assembly in a stand-alone power system is also proposed and analyzed. Finally, some experimental results on a practical system are compared to the simulation results and confirm the usefulness of the proposed approach to the development of renewable energy systems with PV power generators.",
"title": ""
}
] |
scidocsrr
|
7cfc0de8a127e1befe37f1c7a331e531
|
Knowledge management in higher education institutions for the generation of organizational knowledge
|
[
{
"docid": "830fad5e9760ba4b9ca4dd31df965d5e",
"text": "In the new economic era, knowledge has become the primary source of wealth and consequently, the term knowledge economy or knowledge age. Rapid technological advancements and innovations have narrowed the gap between competing organizations such that the collective knowledge of employees is regarded as the key factor in producing innovative and competitive products or services. Organizations, since the early 1 0s, have been forced to rethink the way they manage their intangible assets, which are in form of knowledge resources and therefore the need for knowledge management. Many organisations use knowledge management frameworks as a model that initiates and strengthens knowledge management activities in the context of achieving organisational excellence. However, different knowledge management frameworks do not fully address knowledge management activities across the organisation, such that each of them addresses certain knowledge management elements, while leaving others unattended to. The paper examined 21 knowledge management frameworks guided by three themes as knowledge management activities, knowledge management resources and knowledge management enablers (or influences) on knowledge management. A matrix was developed to capture the individual components advanced by each author with respect to knowledge management activities, resources and influences. Based on the matrices for activities, resources and influences, the individual components were harmonised and integrated in terms of relationships in the context of knowledge management. The findings are that knowledge management activities are socially enacted activities that support individual and collective knowledge. The activities vary depending on which of the knowledge resources the organization aims at improving. Since each organization has a different focus, knowledge management activities take place in different contexts. These activities have been summarized as knowledge acquisition, creation, repository, sharing, use and evaluation. The organization should consciously choose which of these activities they intend to support in order to identify appropriate organizational variables and technology to enable them have effect. Based on findings, a new knowledge management framework has been proposed to guide practitioners and researchers when conducting knowledge management.",
"title": ""
}
] |
[
{
"docid": "167d4c17b456223e9f417ae972318415",
"text": "The current centrally controlled power grid is undergoing a drastic change in order to deal with increasingly diversified challenges, including environment and infrastructure. The next-generation power grid, known as the smart grid, will be realized with proactive usage of state-of-the-art technologies in the areas of sensing, communications, control, computing, and information technology. In a smart power grid, an efficient and reliable communication architecture plays a crucial role in improving efficiency, sustainability, and stability. In this article, we first identify the fundamental challenges in the data communications for the smart grid and introduce the ongoing standardization effort in the industry. Then we present an unprecedented cognitive radio based communications architecture for the smart grid, which is mainly motivated by the explosive data volume, diverse data traffic, and need for QoS support. The proposed architecture is decomposed into three subareas: cognitive home area network, cognitive neighborhood area network, and cognitive wide area network, depending on the service ranges and potential applications. Finally, we focus on dynamic spectrum access and sharing in each subarea. We also identify a very unique challenge in the smart grid, the necessity of joint resource management in the decomposed NAN and WAN geographic subareas in order to achieve network scale performance optimization. Illustrative results indicate that the joint NAN/WAN design is able to intelligently allocate spectra to support the communication requirements in the smart grid.",
"title": ""
},
{
"docid": "7b7571705c637f325037e9ee8d8fa1c5",
"text": "Breast cancer is one of the most widespread diseases among women in the UAE and worldwide. Correct and early diagnosis is an extremely important step in rehabilitation and treatment. However, it is not an easy one due to several uncertainties in detection using mammograms. Machine Learning (ML) techniques can be used to develop tools for physicians that can be used as an effective mechanism for early detection and diagnosis of breast cancer which will greatly enhance the survival rate of patients. This paper compares three of the most popular ML techniques commonly used for breast cancer detection and diagnosis, namely Support Vector Machine (SVM), Random Forest (RF) and Bayesian Networks (BN). The Wisconsin original breast cancer data set was used as a training set to evaluate and compare the performance of the three ML classifiers in terms of key parameters such as accuracy, recall, precision and area of ROC. The results obtained in this paper provide an overview of the state of art ML techniques for breast cancer detection.",
"title": ""
},
{
"docid": "9894c1a341863372f1cec74d38151271",
"text": "Stock market prediction is attractive and challenging. According to the efficient market hypothesis, stock prices should follow a random walk pattern and thus should not be predictable with more than about 50 percent accuracy. In this paper, we investigated the predictability of the Dow Jones Industrial Average index to show that not all periods are equally random. We used the Hurst exponent to select a period with great predictability. Parameters for generating training patterns were determined heuristically by auto-mutual information and false nearest neighbor methods. Some inductive machine-learning classifiers—artificial neural network, decision tree, and k-nearest neighbor were then trained with these generated patterns. Through appropriate collaboration of these models, we achieved prediction accuracy up to 65 percent.",
"title": ""
},
{
"docid": "8d4bf1b8b45bae6c506db5339e6d9025",
"text": "Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrixmatrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depend on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.",
"title": ""
},
{
"docid": "4289b6f73a5e402b49d1daab464d26b5",
"text": "Run-time Partial Reconfiguration (PR) speed is significant in applications especially when fast IP core switching is required. In this paper, we propose to use Direct Memory Access (DMA), Master (MST) burst, and a dedicated Block RAM (BRAM) cache respectively to reduce the reconfiguration time. Based on the Xilinx PR technology and the Internal Configuration Access Port (ICAP) primitive in the FPGA fabric, we discuss multiple design architectures and thoroughly investigate their performance with measurements for different partial bitstream sizes. Compared to the reference OPB HWICAP and XPS HWICAP designs, experimental results showthatDMA HWICAP and MST HWICAP reduce the reconfiguration time by one order of magnitude, with little resource consumption overhead. The BRAM HWICAP design can even approach the reconfiguration speed limit of the ICAP primitive at the cost of large Block RAM utilization.",
"title": ""
},
{
"docid": "aa73f61a3d2eec9f47f789123d46f4a4",
"text": "Industrial reports indicate that security incidents continue to inflict large financial losses on organizations. Researchers and industrial analysts contend that there are fundamental problems with existing security incident response process solutions. This paper presents the Security Incident Response Criteria (SIRC) which can be applied to a variety of security incident response approaches. The criteria are derived from empirical data based on in-depth interviews conducted within a Global Fortune 500 organization and supporting literature. The research contribution of this paper is twofold. First, the criteria presented in this paper can be used to evaluate existing security incident response solutions and second, as a guide, to support future security incident response improvement initiatives.",
"title": ""
},
{
"docid": "d11fc4a2a799356380354af144aafe37",
"text": "[Context and motivation] For the past several years, Cyber Physical Systems (CPS) have emerged as a new system type like embedded systems or information systems. CPS are highly context-dependent, observe the world through sensors, act upon it through actuators, and communicate with one another through powerful networks. It has been widely argued that these properties pose new challenges for the development process. [Question/problem] Yet, how these CPS properties impact the development process has thus far been subject to conjecture. An investigation of a development process from a cyber physical perspective has thus far not been undertaken. [Principal ideas/results] In this paper, we conduct initial steps into such an investigation. We present a case study involving the example of a software simulator of an airborne traffic collision avoidance system. [Contribution] The goal of the case study is to investigate which of the challenges from the literature impact the development process of CPS the most.",
"title": ""
},
{
"docid": "96ce5f57038aa836be67a5964e365ea0",
"text": "Advances in technology have led to continuous innovation in teaching and learning methods. For instance, the use of tablet PCs (TPCs) in classroom instruction has been shown to be effective in attracting and motivating students’ interest and increasing their desire to participate in learning activities. In this paper, we used a TPCs game – an iPad app called Motion Math: Hungry Fish – to help young students learn to theoretically understand and practically implement the mathematical concepts of addition and subtraction. Based on findings from a pilot study, we categorized the game’s 18 levels of difficulty into “challenging” (experimental group) and “matching” (control group) games. We aimed to investigate whether challenging games were more able than matching games to improve the students’ motivation, flow experience, self-efficacy for technology, self-efficacy for science, feelings about the TPC game, and satisfaction with the learning approach. The findings showed that the students in the experimental group achieved better flow experience, learning performance, and satisfaction.",
"title": ""
},
{
"docid": "6bf002e1a3f544ebf599940ef22c1911",
"text": "In this paper, we present a new approach for fingerprint class ification based on Discrete Fourier Transform (DFT) and nonlinear discrimina nt nalysis. Utilizing the Discrete Fourier Transform and directional filters, a relia ble and efficient directional image is constructed from each fingerprint image, and then no nlinear discriminant analysis is applied to the constructed directional images, reducing the dimension dramatically and extracting the discriminant features. The pr oposed method explores the capability of DFT and directional filtering in dealing with l ow quality images and the effectiveness of nonlinear feature extraction method in fin gerprint classification. Experimental results demonstrates competitive performance compared with other published results.",
"title": ""
},
{
"docid": "7e7f88c872d1dd49c49830b667af960f",
"text": "The influence of Artificial Intelligence (AI) and Artificial Life (ALife) technologies upon society, and their potential to fundamentally shape the future evolution of humankind, are topics very much at the forefront of current scientific, governmental and public debate. While these might seem like very modern concerns, they have a long history that is often disregarded in contemporary discourse. Insofar as current debates do acknowledge the history of these ideas, they rarely look back further than the origin of the modern digital computer age in the 1940s–50s. In this paper we explore the earlier history of these concepts. We focus in particular on the idea of self-reproducing and evolving machines, and potential implications for our own species. We show that discussion of these topics arose in the 1860s, within a decade of the publication of Darwin’s The Origin of Species, and attracted increasing interest from scientists, novelists and the general public in the early 1900s. After introducing the relevant work from this period, we categorise the various visions presented by these authors of the future implications of evolving machines for humanity. We suggest that current debates on the co-evolution of society and technology can be enriched by a proper appreciation of the long history of the ideas involved.",
"title": ""
},
{
"docid": "98388ecea031b70916cabda20edf3496",
"text": "Rim-driven thrusters have received much attention concerning the potential benefits in vibration and hydrodynamic characteristics, which are of great importance in marine transportation systems. In this sense, the rim-driven permanent magnet, brushless dc, and induction motors have been recently suggested to be employed as marine propulsion motors. On the other hand, high-temperature superconducting (HTS) synchronous motors are becoming much fascinating, particularly in transport applications, regarding some considerable advantages such as low loss, high efficiency, and compactness. However, the HTS-type rim-driven synchronous motor has not been studied yet. Therefore, this paper is devoted to a design practice of rim-driven synchronous motors with HTS field winding. A detailed design procedure is developed for the HTS rim-driven motors, and the design algorithm is validated applying the finite element (FE) method. The FE model of a three-phase 2.5-MW HTS rim-driven synchronous motor is utilized, and the electromagnetic characteristics of the motor are then evaluated. The goal is to design an HTS machine fitted in a thin duct to minimize the hydrodynamic drag force. The design problem exhibits some difficulties while considering various constraints.",
"title": ""
},
{
"docid": "d1c3dfa4700562ac533fa8fb5992c952",
"text": "This study proposes a new pallet recognition system using Kinect camera. Depth image of Kinect camera is produced from the infrared ray data of random dot type. This system was applied to an automated guided vehicle(AGV) to recognize the pallet in various conditions. A modularized hardware and software of the pallet recognition system was developed. The performance of the developed pallet recognition system was tested through experiments under various environment, and it show good performance.",
"title": ""
},
{
"docid": "72ee3bf58497eddeda11f19488fc8e55",
"text": "People can benefit from disclosing negative emotions or stigmatized facets of their identities, and psychologists have noted that imagery can be an effective medium for expressing difficult emotions. Social network sites like Instagram offer unprecedented opportunity for image-based sharing. In this paper, we investigate sensitive self-disclosures on Instagram and the responses they attract. We use visual and textual qualitative content analysis and statistical methods to analyze self-disclosures, associated comments, and relationships between them. We find that people use Instagram to engage in social exchange and story-telling about difficult experiences. We find considerable evidence of social support, a sense of community, and little aggression or support for harmful or pro-disease behaviors. Finally, we report on factors that influence engagement and the type of comments these disclosures attract. Personal narratives, food and beverage, references to illness, and self-appearance concerns are more likely to attract positive social support. Posts seeking support attract significantly more comments. CAUTION: This paper includes some detailed examples of content about eating disorders and self-injury illnesses.",
"title": ""
},
{
"docid": "b0e3249bbea278ceee2154aba5ea99d8",
"text": "Much of the current research in learning Bayesian Networks fails to eeectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data.",
"title": ""
},
{
"docid": "6d8239638a5581958071f4fb78f0596b",
"text": "This article presents the formal semantics of a large subset of the C language called Clight. Clight includes pointer arithmetic, struct and union types, C loops and structured switch statements. Clight is the source language of the CompCert verified compiler. The formal semantics of Clight is a big-step operational semantics that observes both terminating and diverging executions and produces traces of input/output events. The formal semantics of Clight is mechanized using the Coq proof assistant. In addition to the semantics of Clight, this article describes its integration in the CompCert verified compiler and several ways by which the semantics was validated.",
"title": ""
},
{
"docid": "fa19ca685177de66d0c003cf8df08b36",
"text": "Energy management in microgrids is typically formulated as a nonlinear optimization problem. Solving it in a centralized manner does not only require high computational capabilities at the microgrid central controller (MGCC), but may also infringe customer privacy. Existing distributed approaches, on the other hand, assume that all generations and loads are connected to one bus, and ignore the underlying power distribution network and the associated power flow and system operational constraints. Consequently, the schedules produced by those algorithms may violate those constraints and thus are not feasible in practice. Therefore, the focus of this paper is on the design of a distributed energy management strategy (EMS) for the optimal operation of microgrids with consideration of the distribution network and the associated constraints. Specifically, we formulate microgrid energy management as an optimal power flow problem, and propose a distributed EMS where the MGCC and the local controllers jointly compute an optimal schedule. We also provide an implementation of the proposed distributed EMS based on IEC 61850. As one demonstration, we apply the proposed distributed EMS to a real microgrid in Guangdong Province, China, consisting of photovoltaics, wind turbines, diesel generators, and a battery energy storage system. The simulation results demonstrate the effectiveness and fast convergence of the proposed distributed EMS.",
"title": ""
},
{
"docid": "cee66cf1d7d44e4a21d0aeb2e6d0ff64",
"text": "Generating images of texture mapped geometry requires projecting surfaces onto a two-dimensional screen. If this projection involves perspective, then a division must be performed at each pixel of the projected surface in order to correctly calculate texture map coordinates. We show how a simple extension to perspective-comect texture mapping can be used to create various lighting effects, These include arbitrary projection of two-dimensional images onto geometry, realistic spotlights, and generation of shadows using shadow maps[ 10]. These effects are obtained in real time using hardware that performs correct texture mapping. CR",
"title": ""
},
{
"docid": "74fd21dccc9e883349979c8292c5f450",
"text": "Stack Overflow (SO) has been a great source of natural language questions and their code solutions (i.e., question-code pairs), which are critical for many tasks including code retrieval and annotation. In most existing research, question-code pairs were collected heuristically and tend to have low quality. In this paper, we investigate a new problem of systematically mining question-code pairs from Stack Overflow (in contrast to heuristically collecting them). It is formulated as predicting whether or not a code snippet is a standalone solution to a question. We propose a novel Bi-View Hierarchical Neural Network which can capture both the programming content and the textual context of a code snippet (i.e., two views) to make a prediction. On two manually annotated datasets in Python and SQL domain, our framework substantially outperforms heuristic methods with at least 15% higher F1 and accuracy. Furthermore, we present StaQC (Stack Overflow Question-Code pairs), the largest dataset to date of ∼148K Python and ∼120K SQL question-code pairs, automatically mined from SO using our framework. Under various case studies, we demonstrate that StaQC can greatly help develop data-hungry models for associating natural language with programming language1.",
"title": ""
},
{
"docid": "b37064e74a2c88507eacb9062996a911",
"text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.",
"title": ""
},
{
"docid": "7ec81eb3119d8b26056a587397bfeff4",
"text": "A human brain can store and remember thousands of faces in a person's life time, however it is very difficult for an automated system to reproduce the same results. Faces are complex and multidimensional which makes extraction of facial features to be very challenging, yet it is imperative for our face recognition systems to be better than our brain's capabilities. The face like many physiological biometrics that include fingerprint, hand geometry, retina, iris and ear uniquely identifies each individual. In this paper we focus mainly on the face recognition techniques. This review looks at three types of recognition approaches namely holistic, feature based (geometric) and the hybrid approach. We also look at the challenges that are face by the approaches.",
"title": ""
}
] |
scidocsrr
|
da02f02c7e48b3c36758db60bfa47ce6
|
On-Device Federated Learning via Blockchain and its Latency Analysis
|
[
{
"docid": "c411fc52d40cf1f67ddad0c448c6235a",
"text": "Intel’s Software Guard Extensions (SGX) is a set of extensions to the Intel architecture that aims to provide integrity and confidentiality guarantees to securitysensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc) is potentially malicious. This paper analyzes Intel SGX, based on the 3 papers [14, 79, 139] that introduced it, on the Intel Software Developer’s Manual [101] (which supersedes the SGX manuals [95, 99]), on an ISCA 2015 tutorial [103], and on two patents [110, 138]. We use the papers, reference manuals, and tutorial as primary data sources, and only draw on the patents to fill in missing information. This paper does not reflect the information available in two papers [74, 109] that were published after the first version of this paper. This paper’s contributions are a summary of the Intel-specific architectural and micro-architectural details needed to understand SGX, a detailed and structured presentation of the publicly available information on SGX, a series of intelligent guesses about some important but undocumented aspects of SGX, and an analysis of SGX’s security properties.",
"title": ""
},
{
"docid": "244b583ff4ac48127edfce77bc39e768",
"text": "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users’ mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network — as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.",
"title": ""
},
{
"docid": "21e16f9abeb0c538b7403d264790b7a8",
"text": "In this paper, the problem of joint power and resource allocation for ultra reliable low latency communication (URLLC) in vehicular networks is studied. The key goal is to minimize the networkwide power consumption of vehicular users (VUEs) subject to high reliability in terms of probabilistic queuing delays. In particular, using extreme value theory (EVT), a new reliability measure is defined to characterize extreme events pertaining to vehicles’ queue lengths exceeding a predefined threshold with non-negligible probability. In order to learn these extreme events in a dynamic vehicular network, a novel distributed approach based on federated learning (FL) is proposed to estimate the tail distribution of the queues. Taking into account the communication delays incurred by FL over wireless links, Lyapunov optimization is used to derive the joint transmit power and resource allocation policies enabling URLLC for each VUE in a distributed manner. The proposed solution is then validated via extensive simulations using a Manhattan mobility model. Simulation results show that FL enables the proposed distributed method to estimate the tail distribution of queues with an accuracy that is very close to a centralized solution with up to 79% reductions in the amount of data that need to be exchanged. Furthermore, the proposed method yields up to 60% reductions of VUEs with large queue lengths, while reducing the average power consumption by two folds, compared to an average queue-based baseline. For the VUEs with large queue lengths, the proposed method reduces their average queue lengths and fluctuations therein by about 30% compared to the aforementioned baseline. ar X iv :1 80 7. 08 12 7v 1 [ cs .I T ] 2 1 Ju l 2 01 8",
"title": ""
}
] |
[
{
"docid": "937dec4b11b3d039c81ca258283f82e8",
"text": "Nonnegative matrix factorization (NMF) provides a lower rank approximation of a matrix by a product of two nonnegative factors. NMF has been shown to produce clustering results that are often superior to those by other methods such as K-means. In this paper, we provide further interpretation of NMF as a clustering method and study an extended formulation for graph clustering called Symmetric NMF (SymNMF). In contrast to NMF that takes a data matrix as an input, SymNMF takes a nonnegative similarity matrix as an input, and a symmetric nonnegative lower rank approximation is computed. We show that SymNMF is related to spectral clustering, justify SymNMF as a general graph clustering method, and discuss the strengths and shortcomings of SymNMF and spectral clustering. We propose two optimization algorithms for SymNMF and discuss their convergence properties and computational efficiencies. Our experiments on document clustering, image clustering, and image segmentation support SymNMF as a graph clustering method that captures latent linear and nonlinear relationships in the data.",
"title": ""
},
{
"docid": "5ad8e24875ab689ae1f8d6d63844153a",
"text": "Currently Internet of Things (IoT) and multimedia technologies have entered the healthcare field through ambient aiding living and telemedicine. However there are still several obstacles blocking in the way, the toughest ones among which are IoT interoperability, system security, streaming Quality of Service (QoS) and dynamic increasing storage. The major contribution of this paper is proposing an open, secure and flexible platform based on IoT and Cloud computing, on which several mainstream short distant ambient communication protocols for medical purpose are discussed to address interoperability; Secure Sockets Layer (SSL), authentication and auditing are taken into consideration to solve the security issue; an adaptive streaming QoS model is utilized to improve streaming quality in dynamic environment; and an open Cloud computing infrastructure is adopted to support elastic Electronic Health Record (EHR) archiving in the backend. Finally an integrated reference implementation is introduced to demonstrate feasibility.",
"title": ""
},
{
"docid": "54537c242bc89fbf15d9191be80c5073",
"text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.",
"title": ""
},
{
"docid": "682432bc24847bcca3fdeba01c08a5c6",
"text": "The effect of high K-concentration, insulin and the L-type Ca 2+ channel blocker PN 200-110 on cytosolic intracellular free calcium ([Ca2+]i) was studied in single ventricular myocytes of 10-day-old embryonic chick heart, 20-week-old human fetus and rabbit aorta (VSM) single cells using the Ca2+-sensitive fluorescent dye, Fura-2 microfluorometry and digital imaging technique. Depolarization of the cell membrane of both heart and VSM cells with continuous superfusion of 30 mM [K+]o induced a rapid transient increase of [Ca2+]j that was followed by a sustained component. The early transient increase of [Ca2+]i by high [K+]o was blocked by the L-type calcium channel antagonist nifedipine. However, the sustained component was found to be insensitive to this drug. PN 200-110 another L-type Ca 2+ blocker was found to decrease both the early transient and the sustained increase of [Ca2+]i induced by depolarization of the cell membrane with high [K+]o. Insulin at a concentration of 40 to 80 tzU/rnl only produced a sustained increase of [Ca2+]i that was blocked by PN 200-110 or by lowering the extracellular Ca 2+ concentration with EGTA. The sustained increase of [Ca2+]i induced by high [K+]o or insulin was insensitive to metabolic inhibitors such as KCN and ouabain as well to the fast Na + channel blocker, tetrodotoxin and to the increase of intracellular concentrations of cyclic nucleotides. Using the patch clamp technique, insulin did not affect the L-type Ca 2+ current and the delayed outward K + current. These results suggest that the early increase of [Ca2+]i during depolarization of the cell membrane of heart and VSM cells with high [K+]o is due to the opening and decay of an L-type Ca z+ channel. However, the sustained increase of [Ca2+]i during a sustained depolarization is due to the activation of a resting (R) Ca 2+ channel that is insensitive to lowering [ATP]i and sensitive to insulin. (Mol Cell Biochem 117: 93--106, 1992)",
"title": ""
},
{
"docid": "1c0be734eaff2b337edfd9af75a711fa",
"text": "This article is a fully referenced research review to overview progress in unraveling the details of the evolutionary Tree of Life, from life's first occurrence in the hypothetical RNA-era, to humanity's own emergence and diversification, through migration and intermarriage, using research diagrams and brief discussion of the current state of the art. The Tree of Life, in biological terms, has come to be identified with the evolutionary tree of biological diversity. It is this tree which represents the climax fruitfulness of the biosphere and the genetic foundation of our existence, embracing not just higher Eucaryotes, plants, animals and fungi, but Protista, Eubacteria and Archaea, the realm, including the extreme heat and salt-loving organisms, which appears to lie almost at the root of life itself. To a certain extent the notion of a tree based on generational evolution has become complicated by a variety of compounding factors. Gene transfer is not just vertical carried down the generations. There is also evidence for promiscuous incidences of horizontal gene transfer, genetic symbiosis, hybridization and even the formation of chimeras. This review will cover all these aspects, from the first life on Earth to Homo sapiens.",
"title": ""
},
{
"docid": "ec4b7d7e2a512b29ee2ba195706c3571",
"text": "BACKGROUND\nThe Currarino triad is a rare hereditary syndrome comprising anorectal malformation, sacral bony defect, and presacral mass. Most of the patients are diagnosed during infancy.\n\n\nCASE PRESENTATION\nA 44-year-old man was diagnosed with Currarino triad, with a huge presacral teratoma and meningocele. One-stage surgery via posterior approach was successful.\n\n\nCONCLUSIONS\nTreatment of the presacral mass in the Currarino triad, diagnosed in adulthood, is challenging. Multidisciplinary management and detailed planning before surgery are important for a satisfactory outcome.",
"title": ""
},
{
"docid": "69d68431379da12139fa4a87ccac527f",
"text": "Traditional ultra-dense wireless networks are recommended as a complement for cellular networks and are deployed in partial areas, such as hotspot and indoor scenarios. Based on the massive multiple-input multi-output antennas and the millimeter wave communication technologies, the 5G ultra-dense cellular network is proposed to deploy in overall cellular scenarios. Moreover, a distribution network architecture is presented for 5G ultra-dense cellular networks. Furthermore, the backhaul network capacity and the backhaul energy efficiency of ultra-dense cellular networks are investigated to answer an important question, that is, how much densification can be deployed for 5G ultra-dense cellular networks. Simulation results reveal that there exist densification limits for 5G ultra-dense cellular networks with backhaul network capacity and backhaul energy efficiency constraints.",
"title": ""
},
{
"docid": "78283b148e6340ef9c49e503f9f39a2e",
"text": "Blur in facial images significantly impedes the efficiency of recognition approaches. However, most existing blind deconvolution methods cannot generate satisfactory results due to their dependence on strong edges, which are sufficient in natural images but not in facial images. In this paper, we represent point spread functions (PSFs) by the linear combination of a set of pre-defined orthogonal PSFs, and similarly, an estimated intrinsic (EI) sharp face image is represented by the linear combination of a set of pre-defined orthogonal face images. In doing so, PSF and EI estimation is simplified to discovering two sets of linear combination coefficients, which are simultaneously found by our proposed coupled learning algorithm. To make our method robust to different types of blurry face images, we generate several candidate PSFs and EIs for a test image, and then, a non-blind deconvolution method is adopted to generate more EIs by those candidate PSFs. Finally, we deploy a blind image quality assessment metric to automatically select the optimal EI. Thorough experiments on the facial recognition technology database, extended Yale face database B, CMU pose, illumination, and expression (PIE) database, and face recognition grand challenge database version 2.0 demonstrate that the proposed approach effectively restores intrinsic sharp face images and, consequently, improves the performance of face recognition.",
"title": ""
},
{
"docid": "7eeb2bf2aaca786299ebc8507482e109",
"text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.",
"title": ""
},
{
"docid": "6c2095e83fd7bc3b7bd5bd259d1ae9bb",
"text": "This paper basically deals with design of an IoT Smart Home System (IoTSHS) which can provide the remote control to smart home through mobile, infrared(IR) remote control as well as with PC/Laptop. The controller used to design the IoTSHS is WiFi based microcontroller. Temperature sensor is provided to indicate the room temperature and tell the user if it's needed to turn the AC ON or OFF. The designed IoTSHS need to be interfaced through switches or relays with the items under control through the power distribution box. When a signal is sent from IoTSHS, then the switches will connect or disconnect the item under control. The designed IoT smart home system can also provide remote controlling for the people who cannot use smart phone to control their appliances Thus, the designed IoTSHS can benefits the whole parts in the society by providing advanced remote controlling for the smart home. The designed IoTSHS is controlled through remote control which uses IR and WiFi. The IoTSHS is capable to connect to WiFi and have a web browser regardless to what kind of operating system it uses, to control the appliances. No application program is needed to purchase, download, or install. In WiFi controlling, the IoTSHS will give a secured Access Point (AP) with a particular service set identifier (SSID). The user will connect the device (e.g. mobile-phone or Laptop/PC) to this SSID with providing the password and then will open the browser and go to particular fixed link. This link will open an HTML web page which will allow the user to interface between the Mobile-Phone/Laptop/PC and the appliances. In addition, the IoTSHS may connect to the home router so that the user can control the appliances with keeping connection with home router. The proposed IoTSHS was designed, programmed, fabricated and tested with excellent results.",
"title": ""
},
{
"docid": "7c8948433cf6c0d35fe29ccfac75d5b5",
"text": "The EMIB dense MCP technology is a new packaging paradigm that provides localized high density interconnects between two or more die on an organic package substrate, opening up new opportunities for heterogeneous on-package integration. This paper provides an overview of EMIB architecture and package capabilities. First, EMIB is compared with other approaches for high density interconnects. Some of the inherent advantages of the technology, such as the ability to cost effectively implement high density interconnects without requiring TSVs, and the ability to support the integration of many large die in an area much greater than the typical reticle size limit are highlighted. Next, the overall EMIB architecture envelope is discussed along with its constituent building blocks, the package construction with the embedded bridge, die to package interconnect features. Next, the EMIB assembly process is described at a high level. Finally, high bandwidth signaling between the die is discussed and the link bandwidth envelope is quantified.",
"title": ""
},
{
"docid": "e2ffac5515399469b93ed53e05d92345",
"text": "Network security is a major issue affecting SCADA systems designed and deployed in the last decade. Simulation of network attacks on a SCADA system presents certain challenges, since even a simple SCADA system is composed of models in several domains and simulation environments. Here we demonstrate the use of C2WindTunnel to simulate a plant and its controller, and the Ethernet network that connects them, in different simulation environments. We also simulate DDOS-like attacks on a few of the routers to observe and analyze the effec ts of a network attack on such a system. I. I NTRODUCTION Supervisory Control And Data Acquisition (SCADA) systems are computer-based monitoring tools that are used to manage and control critical infrastructure functions in re al time, like gas utilities, power plants, chemical plants, tr affic control systems, etc. A typical SCADA system consists of a SCADA Master which provides overall monitoring and control for the system, local process controllers called Re mot Terminal Units (RTUs), sensors and actuators and a network which provides the communication between the Master and the RTUs. A. Security of SCADA Systems SCADA systems are designed to have long life spans, usually in decades. The SCADA systems currently installed and used were designed at a time when security issues were not paramount, which is not the case today. Furthermore, SCADA systems are now connected to the Internet for remote monitoring and control making the systems susceptible to network security problems which arise through a connection to a public network. Despite these evident security risks, SCADA systems are cumbersome to upgrade for several reasons. Firstly, adding security features often implies a large downtime, which is not desirable in systems like power plants and traffic contro l. Secondly, SCADA devices with embedded codes would need to be completely replaced to add new security protocols. Lastly, the networks used in a SCADA system are usually customized for that system and cannot be generalized. Security of legacy SCADA systems and design of future systems both thus rely heavily on the assessment and rectification of security vulnerabilities of SCADA implementatio ns in realistic settings. B. Simulation of SCADA Systems In a SCADA system it is essential to model and simulate communication networks in order to study mission critical situations such as network failures or attacks. Even a simpl e SCADA system is composed of several units in various domains like dynamic systems, networks and physical environments, and each of these units can be modeled using a variety of available simulators and/or emulators. An example system could include simulating controller and plant dynamics in Simulink or Matlab, network architecture and behavior in a network simulator like OMNeT++, etc. An adequate simulation of such a system necessitates the use of an underlying software infrastructure that connects and re lates the heterogeneous simulators in a logically and temporally coherent framework.",
"title": ""
},
{
"docid": "390cb70c820d0ebefe936318f8668ac3",
"text": "BACKGROUND\nMandatory labeling of products with top allergens has improved food safety for consumers. Precautionary allergen labeling (PAL), such as \"may contain\" or \"manufactured on shared equipment,\" are voluntarily placed by the food industry.\n\n\nOBJECTIVE\nTo establish knowledge of PAL and its impact on purchasing habits by food-allergic consumers in North America.\n\n\nMETHODS\nFood Allergy Research & Education and Food Allergy Canada surveyed consumers in the United States and Canada on purchasing habits of food products featuring different types of PAL. Associations between respondents' purchasing behaviors and individual characteristics were estimated using multiple logistic regression.\n\n\nRESULTS\nOf 6684 participants, 84.3% (n = 5634) were caregivers of a food-allergic child and 22.4% had food allergy themselves. Seventy-one percent reported a history of experiencing a severe allergic reaction. Buying practices varied on the basis of PAL wording; 11% of respondents purchased food with \"may contain\" labeling, whereas 40% purchased food that used \"manufactured in a facility that also processes.\" Twenty-nine percent of respondents were unaware that the law requires labeling of priority food allergens. Forty-six percent were either unsure or incorrectly believed that PAL is required by law. Thirty-seven percent of respondents thought PAL was based on the amount of allergen present. History of a severe allergic reaction decreased the odds of purchasing foods with PAL.\n\n\nCONCLUSIONS\nAlmost half of consumers falsely believed that PAL was required by law. Up to 40% surveyed consumers purchased products with PAL. Understanding of PAL is poor, and improved awareness and guidelines are needed to help food-allergic consumers purchase food safely.",
"title": ""
},
{
"docid": "02322377d048f2469928a71290cf1566",
"text": "In order to interact with human environments, humanoid robots require safe and compliant control which can be achieved through force-controlled joints. In this paper, full body step recovery control for robots with force-controlled joints is achieved by adding model-based feed-forward controls. Push Recovery Model Predictive Control (PR-MPC) is presented as a method for generating full-body step recovery motions after a large disturbance. Results are presented from experiments on the Sarcos Primus humanoid robot that uses hydraulic actuators instrumented with force feedback control.",
"title": ""
},
{
"docid": "c6a429e06f634e1dee995d0537777b4b",
"text": "Digital image editing is usually an iterative process; users repetitively perform short sequences of operations, as well as undo and redo using history navigation tools. In our collected data, undo, redo and navigation constitute about 9 percent of the total commands and consume a significant amount of user time. Unfortunately, such activities also tend to be tedious and frustrating, especially for complex projects.\n We address this crucial issue by adaptive history, a UI mechanism that groups relevant operations together to reduce user workloads. Such grouping can occur at various history granularities. We present two that have been found to be most useful. On a fine level, we group repeating commands patterns together to facilitate smart undo. On a coarse level, we segment commands history into chunks for semantic navigation. The main advantages of our approach are that it is intuitive to use and easy to integrate into any existing tools with text-based history lists. Unlike prior methods that are predominately rule based, our approach is data driven, and thus adapts better to common editing tasks which exhibit sufficient diversity and complexity that may defy predetermined rules or procedures.\n A user study showed that our system performs quantitatively better than two other baselines, and the participants also gave positive qualitative feedbacks on the system features.",
"title": ""
},
{
"docid": "8df1395775e139c281512e4e4c1920d9",
"text": "Over the past 20 years, breakthrough discoveries of chromatin-modifying enzymes and associated mechanisms that alter chromatin in response to physiological or pathological signals have transformed our knowledge of epigenetics from a collection of curious biological phenomena to a functionally dissected research field. Here, we provide a personal perspective on the development of epigenetics, from its historical origins to what we define as 'the modern era of epigenetic research'. We primarily highlight key molecular mechanisms of and conceptual advances in epigenetic control that have changed our understanding of normal and perturbed development.",
"title": ""
},
{
"docid": "bd0b233e4f19abaf97dcb85042114155",
"text": "BACKGROUND/PURPOSE\nHair straighteners are very popular around the world, although they can cause great damage to the hair. Thus, the characterization of the mechanical properties of curly hair using advanced techniques is very important to clarify how hair straighteners act on hair fibers and to contribute to the development of effective products. On this basis, we chose two nonconventional hair straighteners (formaldehyde and glyoxylic acid) to investigate how hair straightening treatments affect the mechanical properties of curly hair.\n\n\nMETHODS\nThe mechanical properties of curly hair were evaluated using a tensile test, differential scanning calorimetry (DSC) measurements, scanning electronic microscopy (SEM), a torsion modulus, dynamic vapor sorption (DVS), and Fourier transform infrared spectroscopy (FTIR) analysis.\n\n\nRESULTS\nThe techniques used effectively helped the understanding of the influence of nonconventional hair straighteners on hair properties. For the break stress and the break extension tests, formaldehyde showed a marked decrease in these parameters, with great hair damage. Glyoxylic acid had a slight effect compared to formaldehyde treatment. Both treatments showed an increase in shear modulus, a decrease in water sorption and damage to the hair surface.\n\n\nCONCLUSIONS\nA combination of the techniques used in this study permitted a better understanding of nonconventional hair straightener treatments and also supported the choice of the better treatment, considering a good relationship between efficacy and safety. Thus, it is very important to determine the properties of hair for the development of cosmetics used to improve the beauty of curly hair.",
"title": ""
},
{
"docid": "8f212b657bc99532387d008282cc75b1",
"text": "Mindfulness training has been considered an effective mode for optimizing sport performance. The purpose of this study was to examine the impact of a twelve-session, 30-minute mindfulness meditation training session for sport (MMTS) intervention. The sample included a Division I female collegiate athletes, using quantitative comparisons based on preand post-test ratings on the Mindfulness Attention Awareness Scale (MAAS), the Positive Affect Negative Affect Scale (PANAS), the Psychological Well-Being Scale and the Life Satisfaction Scale. Paired sample t-tests highlight significant increases in mindfulness scores for the intervention group (p < .01), while the comparison group score of mindfulness remained constant. Both groups remained stable in reported positive affect however the intervention group maintained stable reports of negative affect while the comparison group experienced a significant increase in Negative Affect (p < .001). Results are discussed in relation to existing theories on mindfulness and meditation.",
"title": ""
},
{
"docid": "b5aad69e6a0f672cdaa1f81187a48d57",
"text": "In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients. Initially the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Then each of the resulted segments is described by both color and texture features and classified by a support vector machine into one of six different major food classes. Finally, a modified version of the Huang and Dom evaluation index was proposed, addressing the particular needs of the food segmentation problem. The experimental results prove the effectiveness of the proposed method achieving a segmentation accuracy of 88.5% and recognition rate equal to 87%.",
"title": ""
},
{
"docid": "ae7877dba4d843f6c6fc2f54e3ce7b9c",
"text": "Many lesion experiments have provided evidence that the hippocampus plays a time-limited role in memory, consistent with the operation of a systems-level memory consolidation process during which lasting neocortical memory traces become established [see Squire, L. R., Clark, R. E., & Knowlton, B. J. (2001). Retrograde amnesia. Hippocampus 11, 50]. However, large lesions of the hippocampus at different time intervals after acquisition of a watermaze spatial reference memory task have consistently resulted in temporally ungraded retrograde amnesia [Bolhuis, J. J., Stewart, C. A., Forrest, E. M. (1994). Retrograde amnesia and memory reactivation in rats with ibotenate lesions to the hippocampus or subiculum. Quarterly Journal of Experimental Psychology 47B, 129; Mumby, D. G., Astur, R. S., Weisend, M. P., Sutherland, R. J. (1999). Retrograde amnesia and selective damage to the hippocampal formation: memory for places and object discriminations. Behavioural Brain Research 106, 97; Sutherland, R. J., Weisend, M. P., Mumby, D., Astur, R. S., Hanlon, F. M., et al. (2001). Retrograde amnesia after hippocampal damage: recent vs. remote memories in two tasks. Hippocampus 11, 27]. It is possible that spatial memories acquired during such a task remain permanently dependent on the hippocampus, that chance performance may reflect a failure to access memory traces that are initially unexpressed but still present, or that graded retrograde amnesia for spatial information might only be observed following partial hippocampal lesions. This study examined the retrograde memory impairments of rats that received either partial or complete lesions of the hippocampus either 1-2 days, or 6 weeks after training in a watermaze reference memory task. Memory retention was assessed using a novel 'reminding' procedure consisting of a series of rewarded probe trials, allowing the measurement of both free recall and memory reactivation. Rats with complete hippocampal lesions exhibited stable, temporally ungraded retrograde amnesia, and could not be reminded of the correct location. Partially lesioned rats could be reminded of a recently learned platform location, but no recovery of remote memory was observed. These results offer no support for hippocampus-dependent consolidation of allocentric spatial information, and suggest that the hippocampus can play a long-lasting role in spatial memory. The nature of this role--in the storage, retrieval, or expression of memory--is discussed.",
"title": ""
}
] |
scidocsrr
|
700af11d69e36e5a57c0d41c1c96cead
|
Modeling Customer Lifetime Value in the Telecom Industry
|
[
{
"docid": "9b5224b94b448d5dabbd545aedd293f8",
"text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14",
"title": ""
}
] |
[
{
"docid": "dd14599e6a4d2e83a7a476471be53d13",
"text": "This paper presents the modeling, design, fabrication, and measurement of microelectromechanical systems-enabled continuously tunable evanescent-mode electromagnetic cavity resonators and filters with very high unloaded quality factors (Qu). Integrated electrostatically actuated thin diaphragms are used, for the first time, for tuning the frequency of the resonators/filters. An example tunable resonator with 2.6:1 (5.0-1.9 GHz) tuning ratio and Qu of 300-650 is presented. A continuously tunable two-pole filter from 3.04 to 4.71 GHz with 0.7% bandwidth and insertion loss of 3.55-2.38 dB is also shown as a technology demonstrator. Mechanical stability measurements show that the tunable resonators/filters exhibit very low frequency drift (less than 0.5% for 3 h) under constant bias voltage. This paper significantly expands upon previously reported tunable resonators.",
"title": ""
},
{
"docid": "8fccceb2757decb670eed84f4b2405a1",
"text": "This paper develops and evaluates search and optimization techniques for autotuning 3D stencil (nearest neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Our proposed framework takes a most concise specification of stencil behavior from the user as a single formula, autogenerates tunable code from it, systematically searches for the best configuration and generates the code with optimal parameter configurations for different GPUs. This autotuning approach guarantees adaptive performance for different generations of GPUs while greatly enhancing programmer productivity. Experimental results show that the delivered floating point performance is very close to previous handcrafted work and outperforms other autotuned stencil codes by a large margin. Furthermore, heterogeneous GPU clusters are shown to exhibit the highest performance for dissimilar tuning parameters leveraging proportional partitioning relative to single-GPU performance.",
"title": ""
},
{
"docid": "e902cdc8d2e06d7dd325f734b0a289b6",
"text": "Vaccinium arctostaphylos is a traditional medicinal plant in Iran used for the treatment of diabetes mellitus. In our search for antidiabetic compounds from natural sources, we found that the extract obtained from V. arctostaphylos berries showed an inhibitory effect on pancreatic alpha-amylase in vitro [IC50 = 1.91 (1.89-1.94) mg/mL]. The activity-guided purification of the extract led to the isolation of malvidin-3-O-beta-glucoside as an a-amylase inhibitor. The compound demonstrated a dose-dependent enzyme inihibitory activity [IC50 = 0.329 (0.316-0.342) mM].",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "773c132b708a605039d59de52a3cf308",
"text": "BACKGROUND\nAirSeal is a novel class of valve-free insufflation system that enables a stable pneumoperitoneum with continuous smoke evacuation and carbon dioxide (CO₂) recirculation during laparoscopic surgery. Comparison data to standard CO₂ pressure pneumoperitoneum insufflators is scarce. The aim of this study is to evaluate the potential advantages of AirSeal compared to a standard CO₂ insufflator.\n\n\nMETHODS/DESIGN\nThis is a single center randomized controlled trial comparing elective laparoscopic cholecystectomy, colorectal surgery and hernia repair with AirSeal (group A) versus a standard CO₂ pressure insufflator (group S). Patients are randomized using a web-based central randomization and registration system. Primary outcome measures will be operative time and level of postoperative shoulder pain by using the visual analog score (VAS). Secondary outcomes include the evaluation of immunological values through blood tests, anesthesiological parameters, surgical side effects and length of hospital stay. Taking into account an expected dropout rate of 5%, the total number of patients is 182 (n = 91 per group). All tests will be two-sided with a confidence level of 95% (P <0.05).\n\n\nDISCUSSION\nThe duration of an operation is an important factor in reducing the patient's exposure to CO₂ pneumoperitoneum and its adverse consequences. This trial will help to evaluate if the announced advantages of AirSeal, such as clear sight of the operative site and an exceptionally stable working environment, will facilitate the course of selected procedures and influence operation time and patients clinical outcome.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT01740011, registered 23 November 2012.",
"title": ""
},
{
"docid": "fea8bf3ca00b3440c2b34188876917a2",
"text": "Digitalization has been identified as one of the major trends changing society and business. Digitalization causes changes for companies due to the adoption of digital technologies in the organization or in the operation environment. This paper discusses digitalization from the viewpoint of diverse case studies carried out to collect data from several companies, and a literature study to complement the data. This paper describes the first version of the digital transformation model, derived from synthesis of these industrial cases, explaining a starting point for a systematic approach to tackle digital transformation. The model is aimed to help companies systematically handle the changes associated with digitalization. The model consists of four main steps, starting with positioning the company in digitalization and defining goals for the company, and then analyzing the company’s current state with respect to digitalization goals. Next, a roadmap for reaching the goals is defined and implemented in the company. These steps are iterative and can be repeated several times. Although company situations vary, these steps will help to systematically approach digitalization and to take the steps necessary to benefit from it.",
"title": ""
},
{
"docid": "f2f2b48cd35d42d7abc6936a56aa580d",
"text": "Complete enumeration of all the sequences to establish global optimality is not feasible as the search space, for a general job-shop scheduling problem, ΠG has an upper bound of (n!). Since the early fifties a great deal of research attention has been focused on solving ΠG, resulting in a wide variety of approaches such as Branch and Bound, Simulated Annealing, Tabu Search, etc. However limited success has been achieved by these methods due to the shear intractability of this generic scheduling problem. Recently, much effort has been concentrated on using neural networks to solve ΠG as they are capable of adapting to new environments with little human intervention and can mimic thought processes. Major contributions in solving ΠG using a Hopfield neural network, as well as applications of back-error propagation to general scheduling problems are presented. To overcome the deficiencies in these applications a modified back-error propagation model, a simple yet powerful parallel architecture which can be successfully simulated on a personal computer, is applied to solve ΠG.",
"title": ""
},
{
"docid": "4d3ed5dd5d4f08c9ddd6c9b8032a77fd",
"text": "The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of the anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, Stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnosis of ATFL injury with stress X-P, US, MR imaging were made with an accuracy of 67, 91 and 97%. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63 and 93%. We have clarified the diagnostic value of stress X-P, US, and MR imaging in diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging.",
"title": ""
},
{
"docid": "5499d3f75391ec2a28dcc84d3a3c4410",
"text": "DRAM latency continues to be a critical bottleneck for system performance. In this work, we develop a low-cost mechanism, called ChargeCache, that enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips. Our mechanism is based on the key observation that a recently-accessed row has more charge and thus the following access to the same row can be performed faster. To exploit this observation, we propose to track the addresses of recently-accessed rows in a table in the memory controller. If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency. Row addresses are removed from the table after a specified duration to ensure rows that have leaked too much charge are not accessed with lower latency. We evaluate ChargeCache on a wide variety of workloads and show that it provides significant performance and energy benefits for both single-core and multi-core systems.",
"title": ""
},
{
"docid": "49dd14500296da55b7ed34d96af30b13",
"text": "Deadly infections from opportunistic fungi have risen in frequency, largely because of the at-risk immunocompromised population created by advances in modern medicine and the HIV/AIDS pandemic. This review focuses on dynamics of the fungal polysaccharide cell wall, which plays an outsized role in fungal pathogenesis and therapy because it acts as both an environmental barrier and as the major interface with the host immune system. Human fungal pathogens use architectural strategies to mask epitopes from the host and prevent immune surveillance, and recent work elucidates how biotic and abiotic stresses present during infection can either block or enhance masking. The signaling components implicated in regulating fungal immune recognition can teach us how cell wall dynamics are controlled, and represent potential targets for interventions designed to boost or dampen immunity.",
"title": ""
},
{
"docid": "d0b2999de796ec3215513536023cc2be",
"text": "Recently proposed machine comprehension (MC) application is an effort to deal with natural language understanding problem. However, the small size of machine comprehension labeled data confines the application of deep neural networks architectures that have shown advantage in semantic inference tasks. Previous methods use a lot of NLP tools to extract linguistic features but only gain little improvement over simple baseline. In this paper, we build an attention-based recurrent neural network model, train it with the help of external knowledge which is semantically relevant to machine comprehension, and achieves a new state-of-the-art result.",
"title": ""
},
{
"docid": "a40e71e130f31450ce1e60d9cd4a96be",
"text": "Progering® is the only intravaginal ring intended for contraception therapies during lactation. It is made of silicone and releases progesterone through the vaginal walls. However, some drawbacks have been reported in the use of silicone. Therefore, ethylene vinyl acetate copolymer (EVA) was tested in order to replace it. EVA rings were produced by a hot-melt extrusion procedure. Swelling and degradation assays of these matrices were conducted in different mixtures of ethanol/water. Solubility and partition coefficient of progesterone were measured, together with the initial hormone load and characteristic dimensions. A mathematical model was used to design an EVA ring that releases the hormone at specific rate. An EVA ring releasing progesterone in vitro at about 12.05 ± 8.91 mg day−1 was successfully designed. This rate of release is similar to that observed for Progering®. In addition, it was observed that as the initial hormone load or ring dimension increases, the rate of release also increases. Also, the device lifetime was extended with a rise in the initial amount of hormone load. EVA rings could be designed to release progesterone in vitro at a rate of 12.05 ± 8.91 mg day−1. This ring would be used in contraception therapies during lactation. The use of EVA in this field could have initially several advantages: less initial and residual hormone content in rings, no need for additional steps of curing or crosslinking, less manufacturing time and costs, and the possibility to recycle the used rings.",
"title": ""
},
{
"docid": "6b1dd01c57f967e3caf83af9343099c5",
"text": "We have devised and implemented a novel computational strategy for de novo design of molecules with desired properties termed ReLeaSE (Reinforcement Learning for Structural Evolution). On the basis of deep and reinforcement learning (RL) approaches, ReLeaSE integrates two deep neural networks—generative and predictive—that are trained separately but are used jointly to generate novel targeted chemical libraries. ReLeaSE uses simple representation of molecules by their simplified molecular-input line-entry system (SMILES) strings only. Generative models are trained with a stack-augmented memory network to produce chemically feasible SMILES strings, and predictive models are derived to forecast the desired properties of the de novo–generated compounds. In the first phase of the method, generative and predictive models are trained separately with a supervised learning algorithm. In the second phase, both models are trained jointly with the RL approach to bias the generation of new chemical structures toward those with the desired physical and/or biological properties. In the proof-of-concept study, we have used the ReLeaSE method to design chemical libraries with a bias toward structural complexity or toward compounds with maximal, minimal, or specific range of physical properties, such as melting point or hydrophobicity, or toward compounds with inhibitory activity against Janus protein kinase 2. The approach proposed herein can find a general use for generating targeted chemical libraries of novel compounds optimized for either a single desired property or multiple properties.",
"title": ""
},
{
"docid": "f31a8b627e6a0143e70cf1526bf827fa",
"text": "D-amino acid oxidase (DAO) has been reported to be associated with schizophrenia. This study aimed to search for genetic variants associated with this gene. The genomic regions of all exons, highly conserved regions of introns, and promoters of this gene were sequenced. Potentially meaningful single-nucleotide polymorphisms (SNPs) obtained from direct sequencing were selected for genotyping in 600 controls and 912 patients with schizophrenia and in a replicated sample consisting of 388 patients with schizophrenia. Genetic associations were examined using single-locus and haplotype association analyses. In single-locus analyses, the frequency of the C allele of a novel SNP rs55944529 located at intron 8 was found to be significantly higher in the original large patient sample (p = 0.016). This allele was associated with a higher level of DAO mRNA expression in the Epstein-Barr virus-transformed lymphocytes. The haplotype distribution of a haplotype block composed of rs11114083-rs2070586-rs2070587-rs55944529 across intron 1 and intron 8 was significantly different between the patients and controls and the haplotype frequencies of AAGC were significantly higher in patients, in both the original (corrected p < 0.0001) and replicated samples (corrected p = 0.0003). The CGTC haplotype was specifically associated with the subgroup with deficits in sustained attention and executive function and the AAGC haplotype was associated with the subgroup without such deficits. The DAO gene was a susceptibility gene for schizophrenia and the genomic region between intron 1 and intron 8 may harbor functional genetic variants, which may influence the mRNA expression of DAO and neurocognitive functions in schizophrenia.",
"title": ""
},
{
"docid": "ca544972e6fe3c051f72d04608ff36c1",
"text": "The prefrontal cortex (PFC) plays a key role in controlling goal-directed behavior. Although a variety of task-related signals have been observed in the PFC, whether they are differentially encoded by various cell types remains unclear. Here we performed cellular-resolution microendoscopic Ca(2+) imaging from genetically defined cell types in the dorsomedial PFC of mice performing a PFC-dependent sensory discrimination task. We found that inhibitory interneurons of the same subtype were similar to each other, but different subtypes preferentially signaled different task-related events: somatostatin-positive neurons primarily signaled motor action (licking), vasoactive intestinal peptide-positive neurons responded strongly to action outcomes, whereas parvalbumin-positive neurons were less selective, responding to sensory cues, motor action, and trial outcomes. Compared to each interneuron subtype, pyramidal neurons showed much greater functional heterogeneity, and their responses varied across cortical layers. Such cell-type and laminar differences in neuronal functional properties may be crucial for local computation within the PFC microcircuit.",
"title": ""
},
{
"docid": "a941e1fb5a21fafa8e78269c4bd90637",
"text": "The penis is the male organ of copulation and is composed of erectile tissue that encases the extrapelvic portion of the urethra (Fig. 66-1). The penis of the horse is musculocavernous and can be divided into three parts: the root, the body or shaft, and the glans penis. The penis originates caudally at the root, which is fixed to the lateral aspects of the ischial arch by two crura (leg-like parts) that converge to form the shaft of the penis. The shaft constitutes the major portion of the penis and begins at the junction of the crura. It is attached caudally to the symphysis ischii of the pelvis by two short suspensory ligaments that merge with the origin of the gracilis muscles (Fig. 66-2). The glans penis is the conical enlargement that caps the shaft. The urethra passes over the ischial arch between the crura and curves cranioventrally to become incorporated within erectile tissue of the penis. The mobile shaft and glans penis extend cranioventrally to the umbilical region of the abdominal wall. The body is cylindrical but compressed laterally. When quiescent, the penis is soft, compressible, and about 50 cm long. Fifteen to 20 cm lie free in the prepuce. When maximally erect, the penis is up to three times longer than when it is in a quiescent state. Erectile Bodies",
"title": ""
},
{
"docid": "3b85d3eef49825e67f77769950b80800",
"text": "The phishing is a technique used by cyber-criminals to impersonate legitimate websites in order to obtain personal information. This paper presents a novel lightweight phishing detection approach completely based on the URL (uniform resource locator). The mentioned system produces a very satisfying recognition rate which is 95.80%. This system, is an SVM (support vector machine) tested on a 2000 records data-set consisting of 1000 legitimate and 1000 phishing URLs records. In the literature, several works tackled the phishing attack. However those systems are not optimal to smartphones and other embed devices because of their complex computing and their high battery usage. The proposed system uses only six URL features to perform the recognition. The mentioned features are the URL size, the number of hyphens, the number of dots, the number of numeric characters plus a discrete variable that correspond to the presence of an IP address in the URL and finally the similarity index. Proven by the results of this study the similarity index, the feature we introduce for the first time as input to the phishing detection systems improves the overall recognition rate by 21.8%.",
"title": ""
},
{
"docid": "13642d5d73a58a1336790f74a3f0eac7",
"text": "Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method for treatment during revision total hip arthroplasty but should be used in select cases only.",
"title": ""
},
{
"docid": "9fa53682b83e925409ea115569494f70",
"text": "Circuit techniques for enabling a sub-0.9 V logic-compatible embedded DRAM (eDRAM) are presented. A boosted 3T gain cell utilizes Read Word-line (RWL) preferential boosting to increase read margin and improve data retention time. Read speed is enhanced with a hybrid current/voltage sense amplifier that allows the Read Bit-line (RBL) to remain close to VDD. A regulated bit-line write scheme for driving the Write Bit-line (WBL) is equipped with a steady-state storage node voltage monitor to overcome the data `1' write disturbance problem of the PMOS gain cell without introducing another boosted supply for the Write Word-line (WWL) over-drive. An adaptive and die-to-die adjustable read reference bias generator is proposed to cope with PVT variations. Monte Carlo simulations compare the 6-sigma read and write performance of proposed eDRAM against conventional designs. Measurement results from a 64 kb eDRAM test chip implemented in a 65 nm low-leakage CMOS process show a 1.25 ms data retention time with a 2 ns random cycle time at 0.9 V, 85°C, and a 91.3 μW per Mb static power dissipation at 1.0 V, 85°C.",
"title": ""
},
{
"docid": "c9d137a71c140337b3f8345efdac17ab",
"text": "For more than 30 years, many authors have attempted to synthesize the knowledge about how an enterprise should structure its business processes, the people that execute them, the Information Systems that support both of these and the IT layer on which such systems operate, in such a way that they will be aligned with the business strategy. This is the challenge of Enterprise Architecture design, which is the theme of this paper. We will provide a brief review of the literature on this subject, with an emphasis on more recent proposals and methods that have been applied in practice. We also select approaches that propose some sort of framework that provides a general Enterprise Architecture in a given domain that can be reused as a basis for specific designs in such a domain. Then we present our proposal for Enterprise Architecture design, which is based on general domain models that we call Enterprise Architecture Patterns.",
"title": ""
}
] |
scidocsrr
|
1578f521994cdea00141a1737327d677
|
ABC and 3D: opportunities and obstacles to 3D printing in special education environments
|
[
{
"docid": "20fa84c01c29609825302e4cc2bf4094",
"text": "In this paper, we introduce the origins and applications of digital fabrication and \"making\" in education, and discuss how they can be implemented, researched, and developed in schools. Our discussion is based on several papers and posters that we summarize into three categories: research, technology development, and experiences in formal and informal education.",
"title": ""
},
{
"docid": "073cd7c54b038dcf69ae400f97a54337",
"text": "Interventions to support children with autism often include the use of visual supports, which are cognitive tools to enable learning and the production of language. Although visual supports are effective in helping to diminish many of the challenges of autism, they are difficult and time-consuming to create, distribute, and use. In this paper, we present the results of a qualitative study focused on uncovering design guidelines for interactive visual supports that would address the many challenges inherent to current tools and practices. We present three prototype systems that address these design challenges with the use of large group displays, mobile personal devices, and personal recording technologies. We also describe the interventions associated with these prototypes along with the results from two focus group discussions around the interventions. We present further design guidance for visual supports and discuss tensions inherent to their design.",
"title": ""
}
] |
[
{
"docid": "9fa20791d2e847dbd2c7204d00eec965",
"text": "As neurobiological evidence points to the neocortex as the brain region mainly involved in high-level cognitive functions, an innovative model of neocortical information processing has been recently proposed. Based on a simplified model of a neocortical neuron, and inspired by experimental evidence of neocortical organisation, the Hierarchical Temporal Memory (HTM) model attempts at understanding intelligence, but also at building learning machines. This paper focuses on analysing HTM's ability for online, adaptive learning of sequences. In particular, we seek to determine whether the approach is robust to noise in its inputs, and to compare and contrast its performance and attributes to an alternative Hidden Markov Model (HMM) approach. We reproduce a version of a HTM network and apply it to a visual pattern recognition task under various learning conditions. Our first set of experiments explore the HTM network's capability to learn repetitive patterns and sequences of patterns within random data streams. Further experimentation involves assessing the network's learning performance in terms of inference and prediction under different noise conditions. HTM results are compared with those of a HMM trained at the same tasks. Online learning performance results demonstrate the HTM's capacity to make use of context in order to generate stronger predictions, whereas results on robustness to noise reveal an ability to deal with noisy environments. Our comparisons also, however, emphasise a manner in which HTM differs significantly from HMM, which is that HTM generates predicted observations rather than hidden states, and each observation is a sparse distributed representation.",
"title": ""
},
{
"docid": "f4cc2848713439b162dc5fc255c336d2",
"text": "We consider the problem of waveform design for multiple input/multiple output (MIMO) radars, where the transmit waveforms are adjusted based on target and clutter statistics. A model for the radar returns which incorporates the transmit waveforms is developed. The target detection problem is formulated for that model. Optimal and suboptimal algorithms are derived for designing the transmit waveforms under different assumptions regarding the statistical information available to the detector. The performance of these algorithms is illustrated by computer simulation.",
"title": ""
},
{
"docid": "1f218afceb60fe63ea0e137207f6faf7",
"text": "To present the prevalence, clinical relevance, and ultrasound (US) and magnetic resonance imaging (MRI) appearances of the accessory coracobrachialis (ACB) muscle. We present an US prospective study of the ACB muscle over a 2-year period. Five of the eight patients with suspected ACB on US were subsequently examined by MRI. An ACB muscle was demonstrated by US in eight patients (eight shoulders), including seven females, one male, with mean age 39 years, over 770 (664 patients) consecutive shoulder US examinations referred to our institution yielding a prevalence of 1.04 %. In dynamic US assessment, one case of subcoracoid impingement secondary to a bulky ACB was diagnosed. No thoracic outlet syndrome was encountered in the remaining cases. MRI confirmed the presence of the accessory muscle in five cases. ACB muscle is a rarely reported yet not uncommon anatomic variation of the shoulder musculature encountered only in eight of 664 patients referred for shoulder US study. Its US and MRI appearance is described. One of our patients presented with subcoracoid impingement related to the presence of an ACB.",
"title": ""
},
{
"docid": "f2026d9d827c088711875acc56b12b70",
"text": "The goal of the study is to formalize the concept of viral marketing (VM) as a close derivative of contagion models from epidemiology. The study examines in detail the two common mathematical models of epidemic spread and their marketing implications. The SIR and SEIAR models of infectious disease spread are examined in detail. From this analysis of the epidemiological foundations along with a review of relevant marketing literature, a marketing model of VM is developed. This study demonstrates the key elements that define viral marketing as a formal marketing concept and the distinctive mechanical features that differ from conventional marketing.",
"title": ""
},
{
"docid": "808a6c959eb79deb6ac5278805f5b855",
"text": "Recently there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc) and retain only the top ranked filters. Once the low scoring filters are pruned away the remainder of the network is fine tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counter-intuitive results wherein by randomly pruning 25-50% filters from deep CNNs we are able to obtain the same performance as obtained by using state of the art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class specific pruning and show that even here a random pruning strategy gives close to state of the art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection. We show that using a simple random pruning strategy we can achieve significant speed up in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster RCNN model.",
"title": ""
},
{
"docid": "435925ecebc5a13f0a0547961f12fd27",
"text": "Feature subset selection is one of the key problems in the area of pattern recognition and machine learning. Feature subset selection refers to the problem of selecting only those features that are useful in predicting a target concept i. e. class. Most of the data acquired through different sources are not particularly screened for any specific task e. g. classification, clustering, anomaly detection, etc. When this data is fed to a learning algorithm, its results deteriorate. The proposed method is a pure filter based feature subset selection technique that incurs less computational cost and highly efficient in terms of classification accuracy. Moreover, along with high accuracy the proposed method requires less number of features in most of the cases. In the proposed method the issue of feature ranking and threshold value selection is addressed. The proposed method adaptively selects number of features as per the worth of an individual feature in the dataset. An extensive experimentation is performed, comprised of a number of benchmark datasets over three well known classification algorithms. Empirical results endorse efficiency and effectiveness of the proposed method.",
"title": ""
},
{
"docid": "c187a6ad17503d269fe4c3a03fc4fd89",
"text": "Despite the widespread support for live migration of Virtual Machines (VMs) in current hypervisors, these have significant shortcomings when it comes to migration of certain types of VMs. More specifically, with existing algorithms, there is a high risk of service interruption when migrating VMs with high workloads and/or over low-bandwidth networks. In these cases, VM memory pages are dirtied faster than they can be transferred over the network, which leads to extended migration downtime. In this contribution, we study the application of delta compression during the transfer of memory pages in order to increase migration throughput and thus reduce downtime. The delta compression live migration algorithm is implemented as a modification to the KVM hypervisor. Its performance is evaluated by migrating VMs running different type of workloads and the evaluation demonstrates a significant decrease in migration downtime in all test cases. In a benchmark scenario the downtime is reduced by a factor of 100. In another scenario a streaming video server is live migrated with no perceivable downtime to the clients while the picture is frozen for eight seconds using standard approaches. Finally, in an enterprise application scenario, the delta compression algorithm successfully live migrates a very large system that fails after migration using the standard algorithm. Finally, we discuss some general effects of delta compression on live migration and analyze when it is beneficial to use this technique.",
"title": ""
},
{
"docid": "8a243d17a61f75ef9a881af120014963",
"text": "This paper presents a Deep Mayo Predictor model for predicting the outcomes of the matches in IPL 9 being played in April – May, 2016. The model has three components which are based on multifarious considerations emerging out of a deeper analysis of T20 cricket. The models are created using Data Analytics methods from machine learning domain. The prediction accuracy obtained is high as the Mayo Predictor Model is able to correctly predict the outcomes of 39 matches out of the 56 matches played in the league stage of the IPL IX tournament. Further improvement in the model can be attempted by using a larger training data set than the one that has been utilized in this work. No such effort at creating predictor models for cricket matches has been reported in the literature.",
"title": ""
},
{
"docid": "cf0a4f12c23b42c08b6404fe897ed646",
"text": "By performing computation at the location of data, non-Von Neumann (VN) computing should provide power and speed benefits over conventional (e.g., VN-based) approaches to data-centric workloads such as deep learning. For the on-chip training of largescale deep neural networks using nonvolatile memory (NVM) based synapses, success will require performance levels (e.g., deep neural network classification accuracies) that are competitive with conventional approaches despite the inherent imperfections of such NVM devices, and will also require massively parallel yet low-power read and write access. In this paper, we focus on the latter requirement, and outline the engineering tradeoffs in performing parallel reads and writes to large arrays of NVM devices to implement this acceleration through what is, at least locally, analog computing. We address how the circuit requirements for this new neuromorphic computing approach are somewhat reminiscent of, yet significantly different from, the well-known requirements found in conventional memory applications. We discuss tradeoffs that can influence both the effective acceleration factor (“speed”) and power requirements of such on-chip learning accelerators. P. Narayanan A. Fumarola L. L. Sanches K. Hosokawa S. C. Lewis R. M. Shelby G. W. Burr",
"title": ""
},
{
"docid": "57c9170c8cbf4dda16538e8af5eb59e5",
"text": "Companies that offer loyalty reward programs believe that their programs have a long-run positive effect on customer evaluations and behavior. However, if loyalty rewards programs increase relationship durations and usage levels, customers will be increasingly exposed to the complete spectrum of service experiences, including experiences that may cause customers to switch to another service provider. Using cross-sectional, time-series data from a worldwide financial services company that offers a loyalty reward program, this article investigates the conditions under which a loyalty rewards program will have a positive effect on customer evaluations, behavior, and repeat purchase intentions. The results show that members in the loyalty reward program overlook or discount negative evaluations of the company vis-à-vis competition. One possible reason could be that members of the loyalty rewards program perceive that they are getting better quality and service for their price or, in other words, “good value.”",
"title": ""
},
{
"docid": "2ad6b17fcb0ea20283e318a3fed2939f",
"text": "A fundamental problem of time series is k nearest neighbor (k-NN) query processing. However, existing methods are not fast enough for large dataset. In this paper, we propose a novel approach, STS3, to process k-NN queries by transforming time series to sets and measure the similarity under Jaccard metric. Our approach is more accurate than Dynamic Time Warping(DTW) in our suitable scenarios and it is faster than most of the existing methods, due to the efficient similarity search for sets. Besides, we also developed an index, a pruning and an approximation technique to improve the k-NN query procedure. As shown in the experimental results, all of them could accelerate the query processing effectively.",
"title": ""
},
{
"docid": "a8edc02eb78637f18fc948d81397fc75",
"text": "When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object $$o$$ o in a multidimensional numeric data set $$O$$ O , in which subspace is $$o$$ o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.",
"title": ""
},
{
"docid": "3b9af99b33c15188a8ec50c7decd3b28",
"text": "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting.",
"title": ""
},
{
"docid": "30941e0bc8575047d1adc8c20983823b",
"text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. These data are highly complex and typically require expert knowledge to interpret correctly resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.",
"title": ""
},
{
"docid": "4bce887df71f59085938c8030e7b0c1c",
"text": "Context plays an important role in human language understanding, thus it may also be useful for machines learning vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required. After that, we designed a model which still keeps an RNN as the encoder, while using a non-autoregressive convolutional decoder. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabelled corpora, and in both cases the transferability is evaluated on a set of downstream NLP tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.",
"title": ""
},
{
"docid": "7527cfe075027c9356645419c4fd1094",
"text": "ive Multi-Document Summarization via Phrase Selection and Merging∗ Lidong Bing§ Piji Li Yi Liao Wai Lam Weiwei Guo† Rebecca J. Passonneau‡ §Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA USA Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong †Yahoo Labs, Sunnyvale, CA, USA ‡Center for Computational Learning Systems, Columbia University, New York, NY, USA §[email protected], {pjli, yliao, wlam}@se.cuhk.edu.hk †[email protected], ‡[email protected]",
"title": ""
},
{
"docid": "f24c9f07945572ed467f397e4274060e",
"text": "Scholarly digital libraries have become an important source of bibliographic records for scientific communities. Author name search is one of the most common query exercised in digital libraries. The name ambiguity problem in the context of author search in digital libraries, arising from multiple authors sharing the same name, poses many challenges. A number of name disambiguation methods have been proposed in the literature so far. A variety of bibliographic attributes have been considered in these methods. However, hardly any effort has been made to assess the potential contribution of these attributes. We, for the first time, evaluate the potential strength and/or weaknesses of these attributes by a rigorous course of experiments on a large data set. We also explore the potential utility of some attributes from different perspective. A close look reveals that most of the earlier work require one or more attributes which are difficult to obtain in practical applications. Based on this empirical study, we identify three very common and easy to access attributes and propose a two-step hierarchical clustering technique to solve name ambiguity using these attributes only. Experimental results on data set extracted from a popular digital library show that the proposed method achieves significantly high level of accuracy (> 90%) for most of the instances.",
"title": ""
},
{
"docid": "fa75c21227d8e9e417c54552f8dbe2f9",
"text": "Autonomous intelligent cruise control (AICC) is a technology for driver convenience, increased safety, and smoother traffic flow. AICC also has been proposed for increasing traffic flow by allowing shorter intervehicle headways. Because an AI CC-equipped vehicle operates using only information available from its own sensors, there is no requirement for communication and cooperation between vehicles. This format allows gradual market penetration of AICC systems, which makes the technology attractive from a systems implementation standpoint. The potential flow increases when only a proportion of vehicles on a highway are equipped with AICC were examined, and theoretical upper limits on flows as a function of pertinent variables were derived. Because of the limitations of the theoretical models, a simulator was used that models interactions between vehicles to give detailed information on achievable capacity and traffic stream stability. Results showed that AICC can lead to potentially large gains in capacity only if certain highly unrealistic assumptions hold. In reality, the capacity gains from AICC are likely to be small.",
"title": ""
},
{
"docid": "58e3444f3d35d0ad45e5637e7c53efb5",
"text": "An efficient method for text localization and recognition in real-world images is proposed. Thanks to effective pruning, it is able to exhaustively search the space of all character sequences in real time (200ms on a 640x480 image). The method exploits higher-order properties of text such as word text lines. We demonstrate that the grouping stage plays a key role in the text localization performance and that a robust and precise grouping stage is able to compensate errors of the character detector. The method includes a novel selector of Maximally Stable Extremal Regions (MSER) which exploits region topology. Experimental validation shows that 95.7% characters in the ICDAR dataset are detected using the novel selector of MSERs with a low sensitivity threshold. The proposed method was evaluated on the standard ICDAR 2003 dataset where it achieved state-of-the-art results in both text localization and recognition.",
"title": ""
}
] |
scidocsrr
|
e5007e7be2bbcdccdca180e672cc82ff
|
The Role of Lactic Acid Bacteria in Milk Fermentation
|
[
{
"docid": "1007cd10c262718fe108c9ddb0df1091",
"text": "Shalgam juice, hardaliye, boza, ayran (yoghurt drink) and kefir are the most known traditional Turkish fermented non-alcoholic beverages. The first three are obtained from vegetables, fruits and cereals, and the last two ones are made of milk. Shalgam juice, hardaliye and ayran are produced by lactic acid fermentation. Their microbiota is mainly composed of lactic acid bacteria (LAB). Lactobacillus plantarum, Lactobacillus brevis and Lactobacillus paracasei subsp. paracasei in shalgam fermentation and L. paracasei subsp. paracasei and Lactobacillus casei subsp. pseudoplantarum in hardaliye fermentation are predominant. Ayran is traditionally prepared by mixing yoghurt with water and salt. Yoghurt starter cultures are used in industrial ayran production. On the other hand, both alcohol and lactic acid fermentation occur in boza and kefir. Boza is prepared by using a mixture of maize, wheat and rice or their flours and water. Generally previously produced boza or sourdough/yoghurt are used as starter culture which is rich in Lactobacillus spp. and yeasts. Kefir is prepared by inoculation of raw milk with kefir grains which consists of different species of yeasts, LAB, acetic acid bacteria in a protein and polysaccharide matrix. The microbiota of boza and kefir is affected from raw materials, the origin and the production methods. In this review, physicochemical properties, manufacturing technologies, microbiota and shelf life and spoilage of traditional fermented beverages were summarized along with how fermentation conditions could affect rheological properties of end product which are important during processing and storage.",
"title": ""
}
] |
[
{
"docid": "b9da5b905cfe701303b627f359c30624",
"text": "Parametric embedding methods such as parametric t-distributed Stochastic Neighbor Embedding (pt-SNE) enables out-of-sample data visualization without further computationally expensive optimization or approximation. However, pt-SNE favors small mini-batches to train a deep neural network but large minibatches to approximate its cost function involving all pairwise data point comparisons, and thus has difficulty in finding a balance. To resolve the conflicts, we present parametric t-distributed stochastic exemplar-centered embedding. Our strategy learns embedding parameters by comparing training data only with precomputed exemplars to indirectly preserve local neighborhoods, resulting in a cost function with significantly reduced computational and memory complexity. Moreover, we propose a shallow embedding network with high-order feature interactions for data visualization, which is much easier to tune but produces comparable performance in contrast to a deep feedforward neural network employed by pt-SNE. We empirically demonstrate, using several benchmark datasets, that our proposed method significantly outperforms pt-SNE in terms of robustness, visual effects, and quantitative evaluations.",
"title": ""
},
{
"docid": "285a1c073ec4712ac735ab84cbcd1fac",
"text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically not seen.",
"title": ""
},
{
"docid": "85d9b0ed2e9838811bf3b07bb31dbeb6",
"text": "In recent years, the medium which has negative index of refraction is widely researched. The medium has both the negative permittivity and the negative permeability. In this paper, we have researched the frequency range widening of negative permeability using split ring resonators.",
"title": ""
},
{
"docid": "65d938eee5da61f27510b334312afe41",
"text": "This paper reviews the actual and potential use of social media in emergency, disaster and crisis situations. This is a field that has generated intense interest. It is characterised by a burgeoning but small and very recent literature. In the emergencies field, social media (blogs, messaging, sites such as Facebook, wikis and so on) are used in seven different ways: listening to public debate, monitoring situations, extending emergency response and management, crowd-sourcing and collaborative development, creating social cohesion, furthering causes (including charitable donation) and enhancing research. Appreciation of the positive side of social media is balanced by their potential for negative developments, such as disseminating rumours, undermining authority and promoting terrorist acts. This leads to an examination of the ethics of social media usage in crisis situations. Despite some clearly identifiable risks, for example regarding the violation of privacy, it appears that public consensus on ethics will tend to override unscrupulous attempts to subvert the media. Moreover, social media are a robust means of exposing corruption and malpractice. In synthesis, the widespread adoption and use of social media by members of the public throughout the world heralds a new age in which it is imperative that emergency managers adapt their working practices to the challenge and potential of this development. At the same time, they must heed the ethical warnings and ensure that social media are not abused or misused when crises and emergencies occur.",
"title": ""
},
{
"docid": "bdf81fccbfa77dadcad43699f815475e",
"text": "The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set).\n Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%.",
"title": ""
},
{
"docid": "da540860f3ecb9ca15148a7315b74a45",
"text": "Learning mathematics is one of the most important aspects that determine the future of learners. However, mathematics as one of the subjects is often perceived as being complicated and not liked by the learners. Therefore, we need an application with the use of appropriate technology to create visualization effects which can attract more attention from learners. The application of Augmented Reality technology in digital game is a series of efforts made to create a better visualization effect. In addition, the system is also connected to a leaderboard web service in order to improve the learning motivation through competitive process. Implementation of Augmented Reality is proven to improve student's learning motivation moreover implementation of Augmented Reality in this game is highly preferred by students.",
"title": ""
},
{
"docid": "b3e32f77fde76eba0adfccdc6878a0f3",
"text": "The paper describes a work in progress on humorous response generation for short-text conversation using information retrieval approach. We gathered a large collection of funny tweets and implemented three baseline retrieval models: BM25, the query term reweighting model based on syntactic parsing and named entity recognition, and the doc2vec similarity model. We evaluated these models in two ways: in situ on a popular community question answering platform and in laboratory settings. The approach proved to be promising: even simple search techniques demonstrated satisfactory performance. The collection, test questions, evaluation protocol, and assessors’ judgments create a ground for future research towards more sophisticated models.",
"title": ""
},
{
"docid": "785c716d4f127a5a5fee02bc29aeb352",
"text": "In this paper we propose a novel, improved, phase generated carrier (PGC) demodulation algorithm based on the PGC-differential-cross-multiplying approach (PGC-DCM). The influence of phase modulation amplitude variation and light intensity disturbance (LID) on traditional PGC demodulation algorithms is analyzed theoretically and experimentally. An experimental system for remote no-contact microvibration measurement is set up to confirm the stability of the improved PGC algorithm with LID. In the experiment, when the LID with a frequency of 50 Hz and the depth of 0.3 is applied, the signal-to-noise and distortion ratio (SINAD) of the improved PGC algorithm is 19 dB, higher than the SINAD of the PGC-DCM algorithm, which is 8.7 dB.",
"title": ""
},
{
"docid": "3f8b8ef850aa838289265d175dfa7f1d",
"text": "If competitive equilibrium is defined as a situation in which prices are such that all arbitrage profits are eliminated, is it possible that a competitive economy always be in equilibrium? Clearly not, for then those who arbitrage make no (private) return from their (privately) costly activity. Hence the assumptions that all markets, including that for information, are always in equilibrium and always perfectly arbitraged are inconsistent when arbitrage is costly. We propose here a model in which there is an equilibrium degree of disequilibrium: prices reflect the information of informed individuals (arbitrageurs) but only partially, so that those who expend resources to obtain information do receive compensation. How informative the price system is depends on the number of individuals who are informed; but the number of individuals who are informed is itself an endogenous variable in the model. The model is the simplest one in which prices perform a well-articulated role in conveying information from the informed to the uninformed. When informed individuals observe information that the return to a security is going to be high, they bid its price up, and conversely when they observe information that the return is going to be low. Thus the price system makes publicly available the information obtained by informed individuals to the uniformed. In general, however, it does this imperfectly; this is perhaps lucky, for were it to do it perfectly, an equilibrium would not exist. In the introduction, we shall discuss the general methodology and present some conjectures concerning certain properties of the equilibrium. The remaining analytic sections of the paper are devoted to analyzing in detail an important example of our general model, in which our conjectures concerning the nature of the equilibrium can be shown to be correct. We conclude with a discussion of the implications of our approach and results, with particular emphasis on the relationship of our results to the literature on \"efficient capital markets.\"",
"title": ""
},
{
"docid": "ca8d686b7e0fb3e59508a3b397e8f85e",
"text": "TWIK-related acid-sensitive K(+)-1 (TASK-1 [KCNK3]) and TASK-3 (KCNK9) are tandem pore (K(2P)) potassium (K) channel subunits expressed in carotid bodies and the brainstem. Acidic pH values and hypoxia inhibit TASK-1 and TASK-3 channel function, and halothane enhances this function. These channels have putative roles in ventilatory regulation and volatile anesthetic mechanisms. Doxapram stimulates ventilation through an effect on carotid bodies, and we hypothesized that stimulation might result from inhibition of TASK-1 or TASK-3 K channel function. To address this, we expressed TASK-1, TASK-3, TASK-1/TASK-3 heterodimeric, and TASK-1/TASK-3 chimeric K channels in Xenopus oocytes and studied the effects of doxapram on their function. Doxapram inhibited TASK-1 (half-maximal effective concentration [EC50], 410 nM), TASK-3 (EC50, 37 microM), and TASK-1/TASK-3 heterodimeric channel function (EC50, 9 microM). Chimera studies suggested that the carboxy terminus of TASK-1 is important for doxapram inhibition. Other K2P channels required significantly larger concentrations for inhibition. To test the role of TASK-1 and TASK-3 in halothane-induced immobility, the minimum alveolar anesthetic concentration for halothane was determined and found unchanged in rats receiving doxapram by IV infusion. Our data indicate that TASK-1 and TASK-3 do not play a role in mediating the immobility produced by halothane, although they are plausible molecular targets for the ventilatory effects of doxapram.",
"title": ""
},
{
"docid": "0837c9af9b69367a5a6e32b2f72cef0a",
"text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.",
"title": ""
},
{
"docid": "e2ed500ce298ea175554af97bd0f2f98",
"text": "The Climate CoLab is a system to help thousands of people around the world collectively develop plans for what humans should do about global climate change. This paper shows how the system combines three design elements (model-based planning, on-line debates, and electronic voting) in a synergistic way. The paper also reports early usage experience showing that: (a) the system is attracting a continuing stream of new and returning visitors from all over the world, and (b) the nascent community can use the platform to generate interesting and high quality plans to address climate change. These initial results indicate significant progress towards an important goal in developing a collective intelligence system—the formation of a large and diverse community collectively engaged in solving a single problem.",
"title": ""
},
{
"docid": "39fe1618fad28ec6ad72d326a1d00f24",
"text": "Popular real-time public events often cause upsurge of traffic in Twitter while the event is taking place. These posts range from real-time update of the event's occurrences highlights of important moments thus far, personal comments and so on. A large user group has evolved who seeks these live updates to get a brief summary of the important moments of the event so far. However, major social search engines including Twitter still present the tweets satisfying the Boolean query in reverse chronological order, resulting in thousands of low quality matches agglomerated in a prosaic manner. To get an overview of the happenings of the event, a user is forced to read scores of uninformative tweets causing frustration. In this paper, we propose a method for multi-tweet summarization of an event. It allows the search users to quickly get an overview about the important moments of the event. We have proposed a graph-based retrieval algorithm that identifies tweets with popular discussion points among the set of tweets returned by Twitter search engine in response to a query comprising the event related keywords. To ensure maximum coverage of topical diversity, we perform topical clustering of the tweets before applying the retrieval algorithm. Evaluation performed by summarizing the important moments of a real-world event revealed that the proposed method could summarize the proceeding of different segments of the event with up to 81.6% precision and up to 80% recall.",
"title": ""
},
{
"docid": "9c1beecda61e50dd278e73c55ca703c8",
"text": "Power MOSFET designs have been moving to higher performance particularly in the medium voltage area. (60V to 300V) New designs require lower Specific On-resistance while not sacrificing Unclamped Inductive Switching (UIS) capability or increasing turn-off losses. Two charge balance technologies currently address these needs, the PN junction and the Shielded Gate Charge Balance device topologies. This paper will study the impact of drift region as well as other design parameters that influence the shielded gate class of charge balance devices. The optimum design for maximizing UIS capability and minimizing the impact on other design parameters such as RDSON and switching performance are addressed. It will be shown through TCAD simulation one can design devices to have a stable avalanche point that is not influenced by small variations within a die or die-to-die that result from normal processing. Finally, measured and simulated data will be presented showing a fabricated device with near theoretical UIS capability.",
"title": ""
},
{
"docid": "5c111a5a30f011e4f47fb9e2041644f9",
"text": "Since the audio recapture can be used to assist audio splicing, it is important to identify whether a suspected audio recording is recaptured or not. However, few works on such detection have been reported. In this paper, we propose an method to detect the recaptured audio based on deep learning and we investigate two deep learning techniques, i.e., neural network with dropout method and stack auto-encoders (SAE). The waveform samples of audio frame is directly used as the input for the deep neural network. The experimental results show that error rate around 7.5% can be achieved, which indicates that our proposed method can successfully discriminate recaptured audio and original audio.",
"title": ""
},
{
"docid": "b38529e74442de80822204b63d061e3e",
"text": "Factors other than age and genetics may increase the risk of developing Alzheimer disease (AD). Accumulation of the amyloid-β (Aβ) peptide in the brain seems to initiate a cascade of key events in the pathogenesis of AD. Moreover, evidence is emerging that the sleep–wake cycle directly influences levels of Aβ in the brain. In experimental models, sleep deprivation increases the concentration of soluble Aβ and results in chronic accumulation of Aβ, whereas sleep extension has the opposite effect. Furthermore, once Aβ accumulates, increased wakefulness and altered sleep patterns develop. Individuals with early Aβ deposition who still have normal cognitive function report sleep abnormalities, as do individuals with very mild dementia due to AD. Thus, sleep and neurodegenerative disease may influence each other in many ways that have important implications for the diagnosis and treatment of AD.",
"title": ""
},
{
"docid": "fd97b7130c7d1828566422f49c857db5",
"text": "The phase noise of phase/frequency detectors can significantly raise the in-band phase noise of frequency synthesizers, corrupting the modulated signal. This paper analyzes the phase noise mechanisms in CMOS phase/frequency detectors and applies the results to two different topologies. It is shown that an octave increase in the input frequency raises the phase noise by 6 dB if flicker noise is dominant and by 3 dB if white noise is dominant. An optimization methodology is also proposed that lowers the phase noise by 4 to 8 dB for a given power consumption. Simulation and analytical results agree to within 3.1 dB for the two topologies at different frequencies.",
"title": ""
},
{
"docid": "47c5f3a7230ac19b8889ced2d8f4318a",
"text": "This paper deals with the setting parameter optimization procedure for a multi-phase induction heating system considering transverse flux heating. This system is able to achieve uniform static heating of different thin/size metal pieces without movable inductor parts, yokes or magnetic screens. The goal is reached by the predetermination of the induced power density distribution using an optimization procedure that leads to the required inductor supplying currents. The purpose of the paper is to describe the optimization program with the different solution obtained and to show that some compromise must be done between the accuracy of the temperature profile and the energy consumption.",
"title": ""
},
{
"docid": "2baa441b3daf9736154dd19864ec2497",
"text": "In some stochastic environments the well-known reinforcement learning algorithm Q-learning performs very poorly. This poor performance is caused by large overestimations of action values. These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value. We introduce an alternative way to approximate the maximum expected value for any set of random variables. The obtained double estimator method is shown to sometimes underestimate rather than overestimate the maximum expected value. We apply the double estimator to Q-learning to construct Double Q-learning, a new off-policy reinforcement learning algorithm. We show the new algorithm converges to the optimal policy and that it performs well in some settings in which Q-learning performs poorly due to its overestimation.",
"title": ""
},
{
"docid": "7b314cd0c326cb977b92f4907a0ed737",
"text": "This is the third part of a series of papers that provide a comprehensive survey of the techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I [1] and Part II [2] deal with general target motion models and ballistic target motion models, respectively. This part surveys measurement models, including measurement model-based techniques, used in target tracking. Models in Cartesian, sensor measurement, their mixed, and other coordinates are covered. The stress is on more recent advances — topics that have received more attention recently are discussed in greater details.",
"title": ""
}
] |
scidocsrr
|
0478ffe45c254325d183ea8c35c15b15
|
A Unified Knowledge Representation and Context-aware Recommender System in Internet of Things
|
[
{
"docid": "376943ca96470be14dd8ee821a59e0ee",
"text": "Interoperability in the Internet of Things is critical for emerging services and applications. In this paper we advocate the use of IoT `hubs' to aggregate things using web protocols, and suggest a staged approach to interoperability. In the context of a UK government funded project involving 8 IoT sub-projects to address cross-domain IoT interoperability, we introduce the HyperCat IoT catalogue specification. We then describe the tools and techniques we developed to adapt an existing data portal and IoT platform to this specification, and provide an IoT hub focused on the highways industry called `Smart Streets'. Based on our experience developing this large scale IoT hub, we outline lessons learned which we hope will contribute to ongoing efforts to create an interoperable global IoT ecosystem.",
"title": ""
},
{
"docid": "9379cad59abab5e12c97a9b92f4aeb93",
"text": "SigTur/E-Destination is a Web-based system that provides personalized recommendations of touristic activities in the region of Tarragona. The activities are properly classified and labeled according to a specific ontology, which guides the reasoning process. The recommender takes into account many different kinds of data: demographic information, travel motivations, the actions of the user on the system, the ratings provided by the user, the opinions of users with similar demographic characteristics or similar tastes, etc. The system has been fully designed and implemented in the Science and Technology Park of Tourism and Leisure. The paper presents a numerical evaluation of the correlation between the recommendations and the user’s motivations, and a qualitative evaluation performed by end users. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "8726aa3deb7177bcc343b9c9073b9e0b",
"text": "This work addresses the task of multilabel image classification. Inspired by the great success from deep convolutional neural networks (CNNs) for single-label visualsemantic embedding, we exploit extending these models for multilabel images. Specifically, we propose an imagedependent ranking model, which returns a ranked list of labels according to its relevance to the input image. In contrast to conventional CNN models that learn an image representation (i.e. the image embedding vector), the developed model learns a mapping (i.e. a transformation matrix) from an image in an attempt to differentiate between its relevant and irrelevant labels. Despite the conceptual simplicity of our approach, experimental results on a public benchmark dataset demonstrate that the proposed model achieves state-of-the-art performance while using fewer training images than other multilabel classification methods.",
"title": ""
},
{
"docid": "2f2be97ad06ded172333c29b32fd3f0d",
"text": "Measurement uncertainty is traditionally represented in the form of expanded uncertainty as defined through the Guide to the Expression of Uncertainty in Measurement (GUM). The International Organization for Standardization GUM represents uncertainty through confidence intervals based on the variances and means derived from probability density functions. A new approach to the evaluation of measurement uncertainty based on the polynomial chaos theory is presented and compared with the traditional GUM method",
"title": ""
},
{
"docid": "b3994e9545b573eb30038e5c75ead9df",
"text": "In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.",
"title": ""
},
{
"docid": "0de95645a74d401ad0d0d608faaa0d1d",
"text": "This contribution describes the research activity on the development of different smart pixel topologies aimed at three-dimensional (3D) vision applications exploiting the multiple-pulse indirect time-of-flight (TOF) and standard direct TOF techniques. The proposed approaches allow for the realization of scannerless laser ranging systems capable of fast collection of 3D data sets, as required in a growing number of applications like, automotive, security, surveillance and robotic guidance. Single channel approach, as well as matrix-organized sensors, will be described, facing the demanding constraints of specific applications, like the high dynamic range capability and the background immunity. Real time range (3D) and intensity (2D) imaging of non-cooperative targets, also in presence of strong background illumination, has been successfully performed in the 2m-9m range with a precision better than 5% and an accuracy of about 1%.",
"title": ""
},
{
"docid": "68f3b3521b426b696419a58e6d389aae",
"text": "A new scan that matches an aided Inertial Navigation System (INS) with a low-cost LiDAR is proposed as an alternative to GNSS-based navigation systems in GNSS-degraded or -denied environments such as indoor areas, dense forests, or urban canyons. In these areas, INS-based Dead Reckoning (DR) and Simultaneous Localization and Mapping (SLAM) technologies are normally used to estimate positions as separate tools. However, there are critical implementation problems with each standalone system. The drift errors of velocity, position, and heading angles in an INS will accumulate over time, and on-line calibration is a must for sustaining positioning accuracy. SLAM performance is poor in featureless environments where the matching errors can significantly increase. Each standalone positioning method cannot offer a sustainable navigation solution with acceptable accuracy. This paper integrates two complementary technologies-INS and LiDAR SLAM-into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to use the advantages and overcome the drawbacks of each system to establish a stable long-term navigation process. Static and dynamic field tests were carried out with a self-developed Unmanned Ground Vehicle (UGV) platform-NAVIS. The results prove that the proposed approach can provide positioning accuracy at the centimetre level for long-term operations, even in a featureless indoor environment.",
"title": ""
},
{
"docid": "ef7d2afe9206e56479a4098b6255aa4b",
"text": "Cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can beat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defense is resource competition between defenders and attackers. A cloud usually possesses profound resources and has full control and dynamic allocation capability of its resources. Therefore, cloud offers us the potential to overcome DDoS attacks. However, individual cloud hosted servers are still vulnerable to DDoS attacks if they still run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim in order to quickly filter out attack packets and guarantee the quality of the service for benign users simultaneously. We establish a mathematical model to approximate the needs of our resource investment based on queueing theory. Through careful system analysis and real-world data set experiments, we conclude that we can defeat DDoS attacks in a cloud environment.",
"title": ""
},
{
"docid": "1993b540ff91922d381128e9c8592163",
"text": "The use of the WWW as a venue for voicing opinions, complaints and recommendations on products and firms has been widely reported in the popular media. However little is known how consumers use these reviews and if they subsequently have any influence on evaluations and purchase intentions of products and retailers. This study examines the effect of negative reviews on retailer evaluation and patronage intention given that the consumer has already made a product/brand decision. Our results indicate that the extent of WOM search depends on the consumer’s reasons for choosing an online retailer. Further the influence of negative WOM information on perceived reliability and purchase intentions is determined largely by familiarity with the retailer and differs based on whether the retailer is a pure-Internet or clicks-and-mortar firm. Managerial implications for positioning strategies to minimize the effect of negative word-ofmouth have been discussed.",
"title": ""
},
{
"docid": "08e121203b159b7d59f17d65a33580f4",
"text": "Coded structured light is an optical technique based on active stereovision that obtains the shape of objects. One shot techniques are based on projecting a unique light pattern with an LCD projector so that grabbing an image with a camera, a large number of correspondences can be obtained. Then, a 3D reconstruction of the illuminated object can be recovered by means of triangulation. The most used strategy to encode one-shot patterns is based on De Bruijn sequences. In This work a new way to design patterns using this type of sequences is presented. The new coding strategy minimises the number of required colours and maximises both the resolution and the accuracy.",
"title": ""
},
{
"docid": "44bd234a8999260420bb2a07934887af",
"text": "T e purpose of this review is to assess the nature and magnitudes of the dominant forces in protein folding. Since proteins are only marginally stable at room temperature,’ no type of molecular interaction is unimportant, and even small interactions can contribute significantly (positively or negatively) to stability (Alber, 1989a,b; Matthews, 1987a,b). However, the present review aims to identify only the largest forces that lead to the structural features of globular proteins: their extraordinary compactness, their core of nonpolar residues, and their considerable amounts of internal architecture. This review explores contributions to the free energy of folding arising from electrostatics (classical charge repulsions and ion pairing), hydrogen-bonding and van der Waals interactions, intrinsic propensities, and hydrophobic interactions. An earlier review by Kauzmann (1959) introduced the importance of hydrophobic interactions. His insights were particularly remarkable considering that he did not have the benefit of known protein structures, model studies, high-resolution calorimetry, mutational methods, or force-field or statistical mechanical results. The present review aims to provide a reassessment of the factors important for folding in light of current knowledge. Also considered here are the opposing forces, conformational entropy and electrostatics. The process of protein folding has been known for about 60 years. In 1902, Emil Fischer and Franz Hofmeister independently concluded that proteins were chains of covalently linked amino acids (Haschemeyer & Haschemeyer, 1973) but deeper understanding of protein structure and conformational change was hindered because of the difficulty in finding conditions for solubilization. Chick and Martin (191 1) were the first to discover the process of denaturation and to distinguish it from the process of aggregation. By 1925, the denaturation process was considered to be either hydrolysis of the peptide bond (Wu & Wu, 1925; Anson & Mirsky, 1925) or dehydration of the protein (Robertson, 1918). The view that protein denaturation was an unfolding process was",
"title": ""
},
{
"docid": "1ee6aa26c98e59ede131207ce8382c7e",
"text": "Fusing data from multiple sensors on-board a mobile platform can significantly augment its state estimation abilities and enable autonomous traversals of different domains by adapting to changing signal availabilities. However, due to the need for accurate calibration and initialization of the sensor ensemble as well as coping with erroneous measurements that are acquired at different rates with various delays, multi-sensor fusion still remains a challenge. In this paper, we introduce a novel multi-sensor fusion approach for agile aerial vehicles that allows for measurement validation and seamless switching between sensors based on statistical signal quality analysis. Moreover, it is capable of self-initialization of its extrinsic sensor states. These initialized states are maintained in the framework such that the system can continuously self-calibrate. We implement this framework on-board a small aerial vehicle and demonstrate the effectiveness of the above capabilities on real data. As an example, we fuse GPS data, ultra-wideband (UWB) range measurements, visual pose estimates, and IMU data. Our experiments demonstrate that our system is able to seamlessly filter and switch between different sensors modalities during run time.",
"title": ""
},
{
"docid": "42d15f1d4eefe97938719a2372289f8d",
"text": "With the flourishing of multi-functional wearable devices and the widespread use of smartphones, MHN becomes a promising paradigm of ubiquitous healthcare to continuously monitor our health conditions, remotely diagnose phenomena, and share health information in real time. However, MHNs raise critical security and privacy issues, since highly sensitive health information is collected, and users have diverse security and privacy requirements about such information. In this article, we investigate security and privacy protection in MHNs from the perspective of QoP, which offers users adjustable security protections at fine-grained levels. Specifically, we first introduce the architecture of MHN, and point out the security and privacy challenges from the perspective of QoP. We then present some countermeasures for security and privacy protection in MHNs, including privacy- preserving health data aggregation, secure health data processing, and misbehavior detection. Finally, we discuss some open problems and pose future research directions in MHNs.",
"title": ""
},
{
"docid": "eb7b55c89ddbada0e186b3ff49769b5d",
"text": "By comparing the existing types of transformer bushings, this paper reviews distinctive features of RIF™ (Resin Impregnated Fiberglass) paperless condenser bushings; and, in more detail, it introduces principles, construction, characteristics and applications of this type of bushing when used with a new, safer and reliable built-in insulation monitoring function. As the construction of RIF™ insulation would delay the propagation of a core insulation breakdown after the onset of an initial insulation defect, this type of real time monitoring of core insulation condition provides a novel tool to manage bushing defects without any sense of urgency. It offers, for the first time, a very early field detection tool for transformer bushing insulation faults and by way of consequence, a much improved protection of power transformers over their operating life.",
"title": ""
},
{
"docid": "ea5b41179508151987a1f6e6d154d7a6",
"text": "Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components have not been developed. Menu systems provide a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Marking menus are a promising alternative, but have yet to be investigated or adapted for use within multitouch systems. In this paper, we first investigate the human capabilities for performing directional chording gestures, to assess the feasibility of multitouch marking menus. Based on the positive results collected from this study, and in particular, high angular accuracy, we discuss our new multitouch marking menu design, which can increase the number of items in a menu, and eliminate a level of depth. A second experiment showed that multitouch marking menus perform significantly faster than traditional hierarchal marking menus, reducing acquisition times in both novice and expert usage modalities.",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
},
{
"docid": "1ec1fc8aabb8f7880bfa970ccbc45913",
"text": "Several isolates of Gram-positive, acidophilic, moderately thermophilic, ferrous-iron- and mineral-sulphide-oxidizing bacteria were examined to establish unequivocally the characteristics of Sulfobacillus-like bacteria. Two species were evident: Sulfobacillus thermosulfidooxidans with 48-50 mol% G+C and Sulfobacillus acidophilus sp. nov. with 55-57 mol% G+C. Both species grew autotrophically and mixotrophically on ferrous iron, on elemental sulphur in the presence of yeast extract, and heterotrophically on yeast extract. Autotrophic growth on sulphur was consistently obtained only with S. acidophilus.",
"title": ""
},
{
"docid": "a283639ea8830be287650e6fc24ed082",
"text": "Telephone networks first appeared more than a hundred years ago, long beforetransistors were invented. They, therefore, form the oldest large scale networkthat has grown to touch over 7 billion people. Telephony is now merging manycomplex technologies and because numerous services enabled by these technologiescan be monetized, telephony attracts a lot of fraud. In 2015, a telecom fraudassociation study estimated that the loss of revenue due to global telecom fraudwas worth 38 billion US dollars per year. Because of the convergence oftelephony with the Internet, fraud in telephony networks can also have anegative impact on security of online services. However, there is littleacademic work on this topic, in part because of the complexity of such networksand their closed nature. This paper aims to systematically explorefraud in telephony networks. Our taxonomy differentiates the root causes, thevulnerabilities, the exploitation techniques, the fraud types and finally theway fraud benefits fraudsters. We present an overview of eachof these and use CAller NAMe (CNAM) revenue share fraud as aconcrete example to illustrate how our taxonomy helps in better understandingthis fraud and to mitigate it.",
"title": ""
},
{
"docid": "f82a0a2d6742494e2884f82a909b422b",
"text": "This paper describes DLEJena, a practical reasoner for the OWL 2 RL profile that combines the forward-chaining rule engine of Jena and the Pellet DL reasoner. This combination is based on rule templates, instantiating at run-time a set of ABox OWL 2 RL/RDF Jena rules dedicated to a particular TBox that is handled by Pellet. The goal of DLEJena is to handle efficiently, through instantiated rules, the OWL 2 RL ontologies under direct semantics, where classes and properties cannot be at the same time individuals. The TBox semantics are treated by Pellet, reusing in that way efficient and sophisticated TBox DL reasoning algorithms. The experimental evaluation shows that DLEJena achieves more scalable ABox reasoning than the direct implementation of the OWL 2 RL/RDF rule set in the Jena’s production rule engine, which is the main target of the system. DLEJena can be also used as a generic framework for applying an arbitrary number of entailments beyond the OWL 2 RL profile.",
"title": ""
},
{
"docid": "5184b25a4d056b861f5dbae34300344a",
"text": "AFFILIATIONS: asHouri, Hsu, soroosHian, and braitHwaite— Center for Hydrometeorology and Remote Sensing, Henry Samueli School of Engineering, Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California; Knapp and neLson—NOAA/National Climatic Data Center, Asheville, North Carolina; CeCiL—Global Science & Technology, Inc., Asheville, North Carolina; prat—Cooperative Institute for Climate and Satellites, North Carolina State University, and NOAA/National Climatic Data Center, Asheville, North Carolina CORRESPONDING AUTHOR: Hamed Ashouri, Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 E-mail: [email protected]",
"title": ""
},
{
"docid": "f438c1b133441cd46039922c8a7d5a7d",
"text": "This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, network architecture and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst case violation of the specification being verified. Our approach is anytime i.e. it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.",
"title": ""
}
] |
scidocsrr
|
aa5b2ef46839c758932a01d215f2a377
|
Visual analytics in healthcare - opportunities and research challenges
|
[
{
"docid": "4457aa3443d756a4afeb76f0571d3e25",
"text": "THE AMOUNT OF DATA BEING DIGITALLY COLLECTED AND stored is vast and expanding rapidly. As a result, the science of data management and analysis is also advancing to enable organizations to convert this vast resource into information and knowledge that helps them achieve their objectives. Computer scientists have invented the term big data to describe this evolving technology. Big data has been successfully used in astronomy (eg, the Sloan Digital Sky Survey of telescopic information), retail sales (eg, Walmart’s expansive number of transactions), search engines (eg, Google’s customization of individual searches based on previous web data), and politics (eg, a campaign’s focus of political advertisements on people most likely to support their candidate based on web searches). In this Viewpoint, we discuss the application of big data to health care, using an economic framework to highlight the opportunities it will offer and the roadblocks to implementation. We suggest that leveraging the collection of patient and practitioner data could be an important way to improve quality and efficiency of health care delivery. Widespread uptake of electronic health records (EHRs) has generated massive data sets. A survey by the American Hospital Association showed that adoption of EHRs has doubled from 2009 to 2011, partly a result of funding provided by the Health Information Technology for Economic and Clinical Health Act of 2009. Most EHRs now contain quantitative data (eg, laboratory values), qualitative data (eg, text-based documents and demographics), and transactional data (eg, a record of medication delivery). However, much of this rich data set is currently perceived as a byproduct of health care delivery, rather than a central asset to improve its efficiency. The transition of data from refuse to riches has been key in the big data revolution of other industries. Advances in analytic techniques in the computer sciences, especially in machine learning, have been a major catalyst for dealing with these large information sets. These analytic techniques are in contrast to traditional statistical methods (derived from the social and physical sciences), which are largely not useful for analysis of unstructured data such as text-based documents that do not fit into relational tables. One estimate suggests that 80% of business-related data exist in an unstructured format. The same could probably be said for health care data, a large proportion of which is text-based. In contrast to most consumer service industries, medicine adopted a practice of generating evidence from experimental (randomized trials) and quasi-experimental studies to inform patients and clinicians. The evidence-based movement is founded on the belief that scientific inquiry is superior to expert opinion and testimonials. In this way, medicine was ahead of many other industries in terms of recognizing the value of data and information guiding rational decision making. However, health care has lagged in uptake of newer techniques to leverage the rich information contained in EHRs. There are 4 ways big data may advance the economic mission of health care delivery by improving quality and efficiency. First, big data may greatly expand the capacity to generate new knowledge. The cost of answering many clinical questions prospectively, and even retrospectively, by collecting structured data is prohibitive. 
Analyzing the unstructured data contained within EHRs using computational techniques (eg, natural language processing to extract medical concepts from free-text documents) permits finer data acquisition in an automated fashion. For instance, automated identification within EHRs using natural language processing was superior in detecting postoperative complications compared with patient safety indicators based on discharge coding. Big data offers the potential to create an observational evidence base for clinical questions that would otherwise not be possible and may be especially helpful with issues of generalizability. The latter issue limits the application of conclusions derived from randomized trials performed on a narrow spectrum of participants to patients who exhibit very different characteristics. Second, big data may help with knowledge dissemination. Most physicians struggle to stay current with the latest evidence guiding clinical practice. The digitization of medical literature has greatly improved access; however, the sheer",
"title": ""
},
{
"docid": "3d4f6ba4239854a91cee61bded978057",
"text": "OBJECTIVE\nThe aim of this study is to analyze and visualize the polymorbidity associated with chronic kidney disease (CKD). The study shows diseases associated with CKD before and after CKD diagnosis in a time-evolutionary type visualization.\n\n\nMATERIALS AND METHODS\nOur sample data came from a population of one million individuals randomly selected from the Taiwan National Health Insurance Database, 1998 to 2011. From this group, those patients diagnosed with CKD were included in the analysis. We selected 11 of the most common diseases associated with CKD before its diagnosis and followed them until their death or up to 2011. We used a Sankey-style diagram, which quantifies and visualizes the transition between pre- and post-CKD states with various lines and widths. The line represents groups and the width of a line represents the number of patients transferred from one state to another.\n\n\nRESULTS\nThe patients were grouped according to their states: that is, diagnoses, hemodialysis/transplantation procedures, and events such as death. A Sankey diagram with basic zooming and planning functions was developed that temporally and qualitatively depicts they had amid change of comorbidities occurred in pre- and post-CKD states.\n\n\nDISCUSSION\nThis represents a novel visualization approach for temporal patterns of polymorbidities associated with any complex disease and its outcomes. The Sankey diagram is a promising method for visualizing complex diseases and exploring the effect of comorbidities on outcomes in a time-evolution style.\n\n\nCONCLUSIONS\nThis type of visualization may help clinicians foresee possible outcomes of complex diseases by considering comorbidities that the patients have developed.",
"title": ""
}
] |
[
{
"docid": "4b988535edefeb3ff7df89bcb900dd1c",
"text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. (5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the",
"title": ""
},
{
"docid": "ca8aa3e930fd36a16ac36546a25a1fde",
"text": "Accurate State-of-Charge (SOC) estimation of Li-ion batteries is essential for effective battery control and energy management of electric and hybrid electric vehicles. To this end, first, the battery is modelled by an OCV-R-RC equivalent circuit. Then, a dual Bayesian estimation scheme is developed-The battery model parameters are identified online and fed to the SOC estimator, the output of which is then fed back to the parameter identifier. Both parameter identification and SOC estimation are treated in a Bayesian framework. The square-root recursive least-squares estimator and the extended Kalman-Bucy filter are systematically paired up for the first time in the battery management literature to tackle the SOC estimation problem. The proposed method is finally compared with the convectional Coulomb counting method. The results indicate that the proposed method significantly outperforms the Coulomb counting method in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "083f43f1cc8fe2ad186567f243ee04de",
"text": "We consider the task of recognition of Australian vehicle number plates (also called license plates or registration plates in other countries). A system for Australian number plate recognition must cope with wide variations in the appearance of the plates. Each state uses its own range of designs with font variations between the designs. There are special designs issued for significant events such as the Sydney 2000 Olympic Games. Also, vehicle owners may place the plates inside glass covered frames or use plates made of non-standard materials. These issues compound the complexity of automatic number plate recognition, making existing approaches inadequate. We have developed a system that incorporates a novel combination of image processing and artificial neural network technologies to successfully locate and read Australian vehicle number plates in digital images. Commercial application of the system is envisaged.",
"title": ""
},
{
"docid": "10ebda480df1157da5581b6219a9464a",
"text": "Our goal is to create a convenient natural language interface for performing wellspecified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to “naturalize” the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.",
"title": ""
},
{
"docid": "df5778fce3318029d249de1ff37b0715",
"text": "The Switched Reluctance Machine (SRM) is a robust machine and is a candidate for ultra high speed applications. Until now the area of ultra high speed machines has been dominated by permanent magnet machines (PM). The PM machine has a higher torque density and some other advantages compared to SRMs. However, the soaring prices of the rare earth materials are driving the efforts to find an alternative to PM machines without significantly impacting the performance. At the same time significant progress has been made in the design and control of the SRM. This paper reviews the progress of the SRM as a high speed machine and proposes a novel rotor structure design to resolve the challenge of high windage losses at ultra high speed. It then elaborates on the path of modifying the design to achieve optimal performance. The simulation result of the final design is verified on FEA software. Finally, a prototype machine with similar design is built and tested to verify the simulation model. The experimental waveform indicates good agreement with the simulation result. Therefore, the performance of the prototype machine is analyzed and presented at the end of this paper.",
"title": ""
},
{
"docid": "3ad0b3baa7d9f55d4d2f2b8d8c54b86d",
"text": "In this work we solve the uncalibrated photometric stereo problem with lights placed near the scene. Although the devised model is more complex than its far-light counterpart, we show that under a global linear ambiguity the reconstruction is possible up to a rotation and scaling, which can be easily fixed. We also propose a solution for reconstructing the normal map, the albedo, the light positions and the light intensities of a scene given only a sequence of near-light images. This is done in an alternating minimization framework which first estimates both the normals and the albedo, and then the light positions and intensities. We validate our method on real world experiments and show that a near-light model leads to a significant improvement in the surface reconstruction compared to the classic distant illumination case.",
"title": ""
},
{
"docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4",
"text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.",
"title": ""
},
{
"docid": "9ae0078ef9dcc3bccca9efd87ac43f26",
"text": "Delusions are the false and often incorrigible beliefs that can cause severe suffering in mental illness. We cannot yet explain them in terms of underlying neurobiological abnormalities. However, by drawing on recent advances in the biological, computational and psychological processes of reinforcement learning, memory, and perception it may be feasible to account for delusions in terms of cognition and brain function. The account focuses on a particular parameter, prediction error--the mismatch between expectation and experience--that provides a computational mechanism common to cortical hierarchies, fronto-striatal circuits and the amygdala as well as parietal cortices. We suggest that delusions result from aberrations in how brain circuits specify hierarchical predictions, and how they compute and respond to prediction errors. Defects in these fundamental brain mechanisms can vitiate perception, memory, bodily agency and social learning such that individuals with delusions experience an internal and external world that healthy individuals would find difficult to comprehend. The present model attempts to provide a framework through which we can build a mechanistic and translational understanding of these puzzling symptoms.",
"title": ""
},
{
"docid": "0e521af53f9faf4fee38843a22ec2185",
"text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.",
"title": ""
},
{
"docid": "4a52f4c8f08cefac9d81296dbb853d6e",
"text": "Echo chambers, i.e., situations where one is exposed only to opinions that agree with their own, are an increasing concern for the political discourse in many democratic countries. This paper studies the phenomenon of political echo chambers on social media. We identify the two components in the phenomenon: the opinion that is shared (“echo”), and the place that allows its exposure (“chamber” — the social network), and examine closely at how these two components interact. We de ne a production and consumption measure for social-media users, which captures the political leaning of the content shared and received by them. By comparing the two, we nd that Twitter users are, to a large degree, exposed to political opinions that agree with their own. We also nd that users who try to bridge the echo chambers, by sharing content with diverse leaning, have to pay a “price of bipartisanship” in terms of their network centrality and content appreciation. In addition, we study the role of “gatekeepers,” users who consume content with diverse leaning but produce partisan content (with a single-sided leaning), in the formation of echo chambers. Finally, we apply these ndings to the task of predicting partisans and gatekeepers from social and content features. While partisan users turn out relatively easy to identify, gatekeepers prove to be more challenging. ACM Reference format: Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. In Proceedings of WWW ’18, Lyon, France, April 23–27, 2018, 10 pages. DOI: 10.1145/nnnnnnn.nnnnnnn",
"title": ""
},
{
"docid": "80d4f6a622edea6530ffc7e29590af74",
"text": "Data protection is the process of backing up data in case of a data loss event. It is one of the most critical routine activities for every organization. Detecting abnormal backup jobs is important to prevent data protection failures and ensure the service quality. Given the large scale backup endpoints and the variety of backup jobs, from a backup-as-a-service provider viewpoint, we need a scalable and flexible outlier detection method that can model a huge number of objects and well capture their diverse patterns. In this paper, we introduce H2O, a novel hybrid and hierarchical method to detect outliers from millions of backup jobs for large scale data protection. Our method automatically selects an ensemble of outlier detection models for each multivariate time series composed by the backup metrics collected for each backup endpoint by learning their exhibited characteristics. Interactions among multiple variables are considered to better detect true outliers and reduce false positives. In particular, a new seasonal-trend decomposition based outlier detection method is developed, considering the interactions among variables in the form of common trends, which is robust to the presence of outliers in the training data. The model selection process is hierarchical, following a global to local fashion. The final outlier is determined through an ensemble learning by multiple models. Built on top of Apache Spark, H2O has been deployed to detect outliers in a large and complex data protection environment with more than 600,000 backup endpoints and 3 million daily backup jobs. To the best of our knowledge, this is the first work that selects and constructs large scale outlier detection models for multivariate time series on big data platforms.",
"title": ""
},
{
"docid": "508ffcdbc7d059ad8b7ee64d562d14b5",
"text": "A young manager faces an impasse in his career. He goes to see his mentor at the company, who closes the office door, offers the young man a chair, recounts a few war stories, and serves up a few specific pointers about the problem at hand. Then, just as the young manager is getting up to leave, the elder executive adds one small kernel of avuncular wisdom--which the junior manager carries with him through the rest of his career. Such is the nature of business advice. Or is it? The six essays in this article suggest otherwise. Few of the leaders who tell their stories here got their best advice in stereotypical form, as an aphorism or a platitude. For Ogilvy & Mather chief Shelly Lazarus, profound insight came from a remark aimed at relieving the tension of the moment. For Novartis CEO Daniel Vasella, it was an apt comment, made on a snowy day, back when he was a medical resident. For publishing magnate Earl Graves and Starwood Hotels' Barry Sternlicht, advice they received about trust from early bosses took on ever deeper and more practical meaning as their careers progressed. For Goldman Sachs chairman Henry Paulson, Jr., it was as much his father's example as it was a specific piece of advice his father handed down to him. And fashion designer Liz Lange rejects the very notion that there's inherent wisdom in accepting other people's advice. As these stories demonstrate, people find wisdom when they least expect to, and they never really know what piece of advice will transcend the moment, profoundly affecting how they later make decisions, evaluate people, and examine--and reexamine--their own actions.",
"title": ""
},
{
"docid": "36347412c7d30ae6fde3742bbc4f21b9",
"text": "iii",
"title": ""
},
{
"docid": "a3fe3b92fe53109888b26bb03c200180",
"text": "Using Artificial Neural Networh (A\".) in critical applications can be challenging due to the often experimental nature of A\" construction and the \"black box\" label that is fiequently attached to A\".. Wellaccepted process models exist for algorithmic sofhyare development which facilitate software validation and acceptance. The sojiware development process model presented herein is targeted specifically toward artificial neural networks in crik-al appliicationr. 7% model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use AMVs and need to maintain or achieve a Capability Maturity Model (CM&?I or IS0 sofhyare development rating. Further, while this model is aimed directly at neural network development, with minor moda&ations, the model could be applied to any technique wherein knowledge is extractedfiom existing &ka, such as other numeric approaches or knowledge-based systems.",
"title": ""
},
{
"docid": "260b39661df5cb7ddb9c4cf7ab8a36ba",
"text": "Deblurring camera-based document image is an important task in digital document processing, since it can improve both the accuracy of optical character recognition systems and the visual quality of document images. Traditional deblurring algorithms have been proposed to work for natural-scene images. However the natural-scene images are not consistent with document images. In this paper, the distinct characteristics of document images are investigated. We propose a content-aware prior for document image deblurring. It is based on document image foreground segmentation. Besides, an upper-bound constraint combined with total variation based method is proposed to suppress the rings in the deblurred image. Comparing with the traditional general purpose deblurring methods, the proposed deblurring algorithm can produce more pleasing results on document images. Encouraging experimental results demonstrate the efficacy of the proposed method.",
"title": ""
},
{
"docid": "1448b02c9c14e086a438d76afa1b2fde",
"text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.",
"title": ""
},
{
"docid": "d84179bb22103150f3eae95e6ea7b3ab",
"text": "Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the \"multiple segment Viterbi\" (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call \"sparse rescaling\". These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches.",
"title": ""
},
{
"docid": "3af28edbed06ef6db9fdb27a73e784de",
"text": "The study aimed to investigate factors influencing older adults' physical activity engagement over time. The authors analyzed 3 waves of data from a sample of Israelis age 75-94 (Wave 1 n = 1,369, Wave 2 n = 687, Wave 3 n = 154). Findings indicated that physical activity engagement declined longitudinally. Logistic regressions showed that female gender, older age, and taking more medications were significant risk factors for stopping exercise at Wave 2 in those physically active at Wave 1. In addition, higher functional and cognitive status predicted initiating exercise at Wave 2 in those who did not exercise at Wave 1. By clarifying the influence of personal characteristics on physical activity engagement in the Israeli old-old, this study sets the stage for future investigation and intervention, stressing the importance of targeting at-risk populations, accommodating risk factors, and addressing both the initiation and the maintenance of exercise in the face of barriers.",
"title": ""
},
{
"docid": "a25fa0c0889b62b70bf95c16f9966cc4",
"text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia. Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.",
"title": ""
},
{
"docid": "9d2b3aaf57e31a2c0aa517d642f39506",
"text": "3.1. URINARY TRACT INFECTION Urinary tract infection is one of the important causes of morbidity and mortality in Indian population, affecting all age groups across the life span. Anatomically, urinary tract is divided into an upper portion composed of kidneys, renal pelvis, and ureters and a lower portion made up of urinary bladder and urethra. UTI is an inflammatory response of the urothelium to bacterial invasion that is usually associated with bacteriuria and pyuria. UTI may involve only the lower urinary tract or both the upper and lower tract [19].",
"title": ""
}
] |
scidocsrr
|
6ad5201c31f61f26b196a9d147a81a89
|
A survey of intrusion detection systems in wireless sensor networks
|
[
{
"docid": "66b337e0b6b2d28f7414cf5f88a724a0",
"text": "Sensor networks are currently an active research area mainly due to the potential of their applications. In this paper we investigate the use of Wireless Sensor Networks (WSN) for air pollution monitoring in Mauritius. With the fast growing industrial activities on the island, the problem of air pollution is becoming a major concern for the health of the population. We proposed an innovative system named Wireless Sensor Network Air Pollution Monitoring System (WAPMS) to monitor air pollution in Mauritius through the use of wireless sensors deployed in huge numbers around the island. The proposed system makes use of an Air Quality Index (AQI) which is presently not available in Mauritius. In order to improve the efficiency of WAPMS, we have designed and implemented a new data aggregation algorithm named Recursive Converging Quartiles (RCQ). The algorithm is used to merge data to eliminate duplicates, filter out invalid readings and summarise them into a simpler form which significantly reduce the amount of data to be transmitted to the sink and thus saving energy. For better power management we used a hierarchical routing protocol in WAPMS and caused the motes to sleep during idle time.",
"title": ""
}
] |
[
{
"docid": "227786365219fe1efab6414bae0d8cdb",
"text": "Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open.\n We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function.\n Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.",
"title": ""
},
{
"docid": "16987d81cd90db3c0abe2631de9e737c",
"text": "Docker containers are becoming an attractive implementation choice for next-generation microservices-based applications. When provisioning such an application, container (microservice) instances need to be created from individual container images. Starting a container on a node, where images are locally available, is fast but it may not guarantee the quality of service due to insufficient resources. When a collection of nodes are available, one can select a node with sufficient resources. However, if the selected node does not have the required image, downloading the image from a different registry increases the provisioning time. Motivated by these observations, in this paper, we present CoMICon, a system for co-operative management of Docker images among a set of nodes. The key features of CoMICon are: (1) it enables a co-operative registry among a set of nodes, (2) it can store or delete images partially in the form of layers, (3) it facilitates the transfer of image layers between registries, and (4) it enables distributed pull of an image while starting a container. Using these features, we describe—(i) high availability management of images and (ii) provisioning management of distributed microservices based applications. We extensively evaluate the performance of CoMICon using 142 real, publicly available images from Docker hub. In contrast to state-of-the-art full image based approach, CoMICon can increase the number of highly available images up to 3x while reducing the application provisioning time by 28% on average.",
"title": ""
},
{
"docid": "56c30ddf0aedfb0f13885d90e22e6537",
"text": "A single-pole double-throw novel switch device in0.18¹m SOI complementary metal-oxide semiconductor(CMOS) process is developed for 0.9 Ghz wireless GSMsystems. The layout of the device is optimized keeping inmind the parameters of interest for the RF switch. A subcircuitmodel, with the standard surface potential (PSP) modelas the intrinsic FET model along with the parasitic elementsis built to predict the Ron and Coff of the switch. Themeasured data agrees well with the model. The eight FETstacked switch achieved an Ron of 2.5 ohms and an Coff of180 fF.",
"title": ""
},
{
"docid": "3fa8b8a93716a85f8573bd1cb8d215f2",
"text": "Vision-based research for intelligent vehicles have traditionally focused on specific regions around a vehicle, such as a front looking camera for, e.g., lane estimation. Traffic scenes are complex and vital information could be lost in unobserved regions. This paper proposes a framework that uses four visual sensors for a full surround view of a vehicle in order to achieve an understanding of surrounding vehicle behaviors. The framework will assist the analysis of naturalistic driving studies by automating the task of data reduction of the observed trajectories. To this end, trajectories are estimated using a vehicle detector together with a multiperspective optimized tracker in each view. The trajectories are transformed to a common ground plane, where they are associated between perspectives and analyzed to reveal tendencies around the ego-vehicle. The system is tested on sequences from 2.5 h of drive on US highways. The multiperspective tracker is tested in each view as well as for the ability to associate vehicles bet-ween views with a 92% recall score. A case study of vehicles approaching from the rear shows certain patterns in behavior that could potentially influence the ego-vehicle.",
"title": ""
},
{
"docid": "e393cf414910dbf50ac18d2ad0f2cd15",
"text": "Training relation extractors for the purpose of automated knowledge base population requires the availability of sufficient training data. The amount of manual labeling can be significantly reduced by applying distant supervision, which generates training data by aligning large text corpora with existing knowledge bases. This typically results in a highly noisy training set, where many training sentences do not express the intended relation. In this paper, we propose to combine distant supervision with minimal human supervision by annotating features (in particular shortest dependency paths) rather than complete relation instances. Such feature labeling eliminates noise from the initial training set, resulting in a significant increase of precision at the expense of recall. We further improve on this approach by introducing the Semantic Label Propagation (SLP) method, which uses the similarity between low-dimensional representations of candidate training instances to again extend the (filtered) training set in order to increase recall while maintaining high precision. Our strategy is evaluated on an established test collection designed for knowledge base population (KBP) from the TAC KBP English slot filling task. The experimental results show that SLP leads to substantial performance gains when compared to existing approaches while requiring an almost negligible human annotation effort.",
"title": ""
},
{
"docid": "747319dc1492cf26e9b9112e040cbba7",
"text": "Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detectionguided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.",
"title": ""
},
{
"docid": "d8484cc7973882777f65a28fcdbb37be",
"text": "The reported power analysis attacks on hardware implementations of the MICKEY family of streams ciphers require a large number of power traces. The primary motivation of our work is to break an implementation of the cipher when only a limited number of power traces can be acquired by an adversary. In this paper, we propose a novel approach to mount a Template attack (TA) on MICKEY-128 2.0 stream cipher using Particle Swarm Optimization (PSO) generated initialization vectors (IVs). In addition, we report the results of power analysis against a MICKEY-128 2.0 implementation on a SASEBO-GII board to demonstrate our proposed attack strategy. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) learning algorithm based binary classifiers to segregate the power traces into the respective Hamming distance (HD) classes. The outcomes of the experiments reveal that our proposed power analysis attack strategy requires a much lesser number of IVs compared to a standard Correlation Power Analysis (CPA) attack on MICKEY-128 2.0 during the key loading phase of the cipher.",
"title": ""
},
{
"docid": "d212f981eb8cc6054b2651009179b722",
"text": "A sixth-order 10.7-MHz bandpass switched-capacitor filter based on a double terminated ladder filter is presented. The filter uses a multipath operational transconductance amplifier (OTA) that presents both better accuracy and higher slew rate than previously reported Class-A OTA topologies. Design techniques based on charge cancellation and slower clocks are used to reduce the overall capacitance from 782 down to 219 unity capacitors. The filter's center frequency and bandwidth are 10.7 MHz and 400 kHz, respectively, and a passband ripple of 1 dB in the entire passband. The quality factor of the resonators used as filter terminations is around 32. The measured (filter + buffer) third-intermodulation (IM3) distortion is less than -40 dB for a two-tone input signal of +3-dBm power level each. The signal-to-noise ratio is roughly 58 dB while the IM3 is -45 dB; the power consumption for the standalone filter is 42 mW. The chip was fabricated in a 0.35-mum CMOS process; filter's area is 0.84 mm2",
"title": ""
},
{
"docid": "c861009ed309b208218182e60b126228",
"text": "We present a novel beam-search decoder for grammatical error correction. The decoder iteratively generates new hypothesis corrections from current hypotheses and scores them based on features of grammatical correctness and fluency. These features include scores from discriminative classifiers for specific error categories, such as articles and prepositions. Unlike all previous approaches, our method is able to perform correction of whole sentences with multiple and interacting errors while still taking advantage of powerful existing classifier approaches. Our decoder achieves an F1 correction score significantly higher than all previous published scores on the Helping Our Own (HOO) shared task data set.",
"title": ""
},
{
"docid": "4927fee47112be3d859733c498fbf594",
"text": "To design effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache. We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.",
"title": ""
},
{
"docid": "caa35f58e9e217fd45daa2e49c4a4cde",
"text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 2 Morphological processingn 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “twolevel” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. 
Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. “.o.” represents composition of FSTs, “+” concatenation of FSTs. to lexical output) to one that performs generation (lexical input to surface output). This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a “guesser” analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants. For example, the Amharic verb root sbr ‘break’ can combine with roughly 50 different templates to form stems in words such as y ̃b•l y1-sEbr-al ‘he breaks’, ° ̃¤ tEsEbbEr-E ‘it was broken’, ‰ ̃bw l-assEbb1r-Ew , ‘let me cause him to break something’, ̃§§” sEbabar-i ‘broken into many pieces’. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup’s approach, each of the arcs in a transducer may be “weighted” with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word. 
In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.",
"title": ""
},
{
"docid": "2b00f2b02fa07cdd270f9f7a308c52c5",
"text": "A noninvasive and easy-operation measurement of the heart rate has great potential in home healthcare. We present a simple and high running efficiency method for measuring heart rate from a video. By only tracking one feature point which is selected from a small ROI (Region of Interest) in the head area, we extract trajectories of this point in both X-axis and Y-axis. After a series of processes including signal filtering, interpolation, the Independent Component Analysis (ICA) is used to obtain a periodic signal, and then the heart rate can be calculated. We evaluated on 10 subjects and compared to a commercial heart rate measuring instrument (YUYUE YE680B) and achieved high degree of agreement. A running time comparison experiment to the previous proposed motion-based method is carried out and the result shows that the time cost is greatly reduced in our method.",
"title": ""
},
{
"docid": "e4a3dfe53a66d0affd73234761e7e0e2",
"text": "BACKGROUND\nWhether cannabis can cause psychotic or affective symptoms that persist beyond transient intoxication is unclear. We systematically reviewed the evidence pertaining to cannabis use and occurrence of psychotic or affective mental health outcomes.\n\n\nMETHODS\nWe searched Medline, Embase, CINAHL, PsycINFO, ISI Web of Knowledge, ISI Proceedings, ZETOC, BIOSIS, LILACS, and MEDCARIB from their inception to September, 2006, searched reference lists of studies selected for inclusion, and contacted experts. Studies were included if longitudinal and population based. 35 studies from 4804 references were included. Data extraction and quality assessment were done independently and in duplicate.\n\n\nFINDINGS\nThere was an increased risk of any psychotic outcome in individuals who had ever used cannabis (pooled adjusted odds ratio=1.41, 95% CI 1.20-1.65). Findings were consistent with a dose-response effect, with greater risk in people who used cannabis most frequently (2.09, 1.54-2.84). Results of analyses restricted to studies of more clinically relevant psychotic disorders were similar. Depression, suicidal thoughts, and anxiety outcomes were examined separately. Findings for these outcomes were less consistent, and fewer attempts were made to address non-causal explanations, than for psychosis. A substantial confounding effect was present for both psychotic and affective outcomes.\n\n\nINTERPRETATION\nThe evidence is consistent with the view that cannabis increases risk of psychotic outcomes independently of confounding and transient intoxication effects, although evidence for affective outcomes is less strong. The uncertainty about whether cannabis causes psychosis is unlikely to be resolved by further longitudinal studies such as those reviewed here. However, we conclude that there is now sufficient evidence to warn young people that using cannabis could increase their risk of developing a psychotic illness later in life.",
"title": ""
},
{
"docid": "9db0e9b90db4d7fd9c0f268b5ee9b843",
"text": "Traditionally, the evaluation of surgical procedures in virtual reality (VR) simulators has been restricted to their individual technical aspects disregarding the procedures carried out by teams. However, some decision models have been proposed to support the collaborative training evaluation process of surgical teams in collaborative virtual environments. The main objective of this article is to present a collaborative simulator based on VR, named SimCEC, as a potential solution for education, training, and evaluation in basic surgical routines for teams of undergraduate students. The simulator considers both tasks performed individually and those carried in a collaborative manner. The main contribution of this work is to improve the discussion about VR simulators requirements (design and implementation) to provide team training in relevant topics, such as users’ feedback in real time, collaborative training in networks, interdisciplinary integration of curricula, and continuous evaluation.",
"title": ""
},
{
"docid": "0e54be77f69c6afbc83dfabc0b8b4178",
"text": "Spinal muscular atrophy (SMA) is a neurodegenerative disease characterized by loss of motor neurons in the anterior horn of the spinal cord and resultant weakness. The most common form of SMA, accounting for 95% of cases, is autosomal recessive proximal SMA associated with mutations in the survival of motor neurons (SMN1) gene. Relentless progress during the past 15 years in the understanding of the molecular genetics and pathophysiology of SMA has resulted in a unique opportunity for rational, effective therapeutic trials. The goal of SMA therapy is to increase the expression levels of the SMN protein in the correct cells at the right time. With this target in sight, investigators can now effectively screen potential therapies in vitro, test them in accurate, reliable animal models, move promising agents forward to clinical trials, and accurately diagnose patients at an early or presymptomatic stage of disease. A major challenge for the SMA community will be to prioritize and develop the most promising therapies in an efficient, timely, and safe manner with the guidance of the appropriate regulatory agencies. This review will take a historical perspective to highlight important milestones on the road to developing effective therapies for SMA.",
"title": ""
},
{
"docid": "a8b8f36f7093c79759806559fb0f0cf4",
"text": "Cooperative adaptive cruise control (CACC) is an extension of ACC. In addition to measuring the distance to a predecessor, a vehicle can also exchange information with a predecessor by wireless communication. This enables a vehicle to follow its predecessor at a closer distance under tighter control. This paper focuses on the impact of CACC on traffic-flow characteristics. It uses the traffic-flow simulation model MIXIC that was specially designed to study the impact of intelligent vehicles on traffic flow. The authors study the impacts of CACC for a highway-merging scenario from four to three lanes. The results show an improvement of traffic-flow stability and a slight increase in traffic-flow efficiency compared with the merging scenario without equipped vehicles",
"title": ""
},
{
"docid": "a0c9d3c2b14395a6d476b12c5e8b28b0",
"text": "Undergraduate research experiences enhance learning and professional development, but providing effective and scalable research training is often limited by practical implementation and orchestration challenges. We demonstrate Agile Research Studios (ARS)---a socio-technical system that expands research training opportunities by supporting research communities of practice without increasing faculty mentoring resources.",
"title": ""
},
{
"docid": "2ca54e2e53027eb2ff441f0e2724d68f",
"text": "Thanks to rapid advances in technologies like GPS and Wi-Fi positioning, smartphone users are able to determine their location almost everywhere they go. This is not true, however, of people who are traveling in underground public transportation networks, one of the few types of high-traffic areas where smartphones do not have access to accurate position information. In this paper, we introduce the problem of underground transport positioning on smartphones and present SubwayPS, an accelerometer-based positioning technique that allows smartphones to determine their location substantially better than baseline approaches, even deep beneath city streets. We highlight several immediate applications of positioning in subway networks in domains ranging from mobile advertising to mobile maps and present MetroNavigator, a proof-of-concept smartphone and smartwatch app that notifies users of upcoming points-of-interest and alerts them when it is time to get ready to exit the train.",
"title": ""
},
{
"docid": "cc3d14ebbba039241634d45dad8bfb03",
"text": "Digital humanities scholars strongly need a corpus exploration method that provides topics easier to interpret than standard LDA topic models. To move towards this goal, here we propose a combination of two techniques, called Entity Linking and Labeled LDA. Our method identifies in an ontology a series of descriptive labels for each document in a corpus. Then it generates a specific topic for each label. Having a direct relation between topics and labels makes interpretation easier; using an ontology as background knowledge limits label ambiguity. As our topics are described with a limited number of clear-cut labels, they promote interpretability and support the quantitative evaluation of the obtained results. We illustrate the potential of the approach by applying it to three datasets, namely the transcription of speeches from the European Parliament fifth mandate, the Enron Corpus and the Hillary Clinton Email Dataset. While some of these resources have already been adopted by the natural language processing community, they still hold a large potential for humanities scholars, part of which could be exploited in studies that will adopt the fine-grained exploration method presented in this paper.",
"title": ""
}
] |
scidocsrr
|
334964f1a2956ea37f7d8a28d93ab9cf
|
Insider Threat Prediction Tool: Evaluating the probability of IT misuse
|
[
{
"docid": "f1cb2ce5a32d09383745284cfa838e90",
"text": "In the information age, as we have become increasingly dependent upon complex information systems, there has been a focus on the vulnerability of these systems to computer crime and security attacks, exemplified by the work of the President's Commission on Critical Infrastructure Protection. Because of the high-tech nature of these systems and the technological expertise required to develop and maintain them, it is not surprising that overwhelming attention has been devoted by computer security experts to technological vulnerabilities and solutions. Yet, as captured in the title of a 1993 conference sponsored by the Defense Personnel Security Research Center, 2 Computer Crime: A Peopleware Problem, it is people who designed the systems, people who attack the systems, and understanding the psychology of information systems criminals is crucial to protecting those systems. s A Management Information Systems (MIS) professional at a military facility learns she is going to be downsized. She decides to encrypt large parts of the organization's database and hold it hostage. She contacts the systems administrator responsible for the database and offers to decode the data for $10,000 in \" severance pay \" and a promise of no prosecution. He agrees to her terms before consulting with proper authorities. Prosecutors reviewing the case determine that the administrator's deal precludes them from pursuing charges. s A postcard written by an enlisted man is discovered during the arrest of several members of a well-known hacker organization by the FBI. Writing from his military base where he serves as a computer specialist, he has inquired about establishing a relationship with the group. Investigation reveals the enlisted man to be a convicted hacker and former group member who had been offered a choice between prison and enlistment. While performing computer duties for the military, he is caught breaking into local phone systems. s An engineer at an energy processing plant becomes angry with his new supervisor, a non-technical administrator. The engineer's wife is terminally ill, and he is on probation after a series of angry and disruptive episodes at work. After he is sent home, the engineering staff discovers that he has made a series of idiosyncratic modifications to plant controls and safety systems. In response to being confronted about these changes, the engineer decides to withhold the password, threatening the productivity and safety of the plant. s At the regional headquarters of an international energy company, an MIS contractor effectively \" captures …",
"title": ""
}
] |
[
{
"docid": "bfd94756f73fc7f9eb81437f5d192ac3",
"text": "Technological advances in upper-limb prosthetic design offer dramatically increased possibilities for powered movement. The DEKA Arm system allows users 10 powered degrees of movement. Learning to control these movements by utilizing a set of motions that, in most instances, differ from those used to obtain the desired action prior to amputation is a challenge for users. In the Department of Veterans Affairs \"Study to Optimize the DEKA Arm,\" we attempted to facilitate motor learning by using a virtual reality environment (VRE) program. This VRE program allows users to practice controlling an avatar using the controls designed to operate the DEKA Arm in the real world. In this article, we provide highlights from our experiences implementing VRE in training amputees to use the full DEKA Arm. This article discusses the use of VRE in amputee rehabilitation, describes the VRE system used with the DEKA Arm, describes VRE training, provides qualitative data from a case study of a subject, and provides recommendations for future research and implementation of VRE in amputee rehabilitation. Our experience has led us to believe that training with VRE is particularly valuable for upper-limb amputees who must master a large number of controls and for those amputees who need a structured learning environment because of cognitive deficits.",
"title": ""
},
{
"docid": "700c016add5f44c3fbd560d84b83b290",
"text": "This paper describes a novel framework, called I<scp>n</scp>T<scp>ens</scp>L<scp>i</scp> (\"intensely\"), for producing fast single-node implementations of dense tensor-times-matrix multiply (T<scp>tm</scp>) of arbitrary dimension. Whereas conventional implementations of T<scp>tm</scp> rely on explicitly converting the input tensor operand into a matrix---in order to be able to use any available and fast general matrix-matrix multiply (G<scp>emm</scp>) implementation---our framework's strategy is to carry out the T<scp>tm</scp> <i>in-place</i>, avoiding this copy. As the resulting implementations expose tuning parameters, this paper also describes a heuristic empirical model for selecting an optimal configuration based on the T<scp>tm</scp>'s inputs. When compared to widely used single-node T<scp>tm</scp> implementations that are available in the Tensor Toolbox and Cyclops Tensor Framework (C<scp>tf</scp>), In-TensLi's in-place and input-adaptive T<scp>tm</scp> implementations achieve 4× and 13× speedups, showing Gemm-like performance on a variety of input sizes.",
"title": ""
},
{
"docid": "7e99c34beafefdfcf11750e5acfc8ac0",
"text": "Emerging technologies offer exciting new ways of using entertainment technology to create fantastic play experiences and foster interactions between players. Evaluating entertainment technology is challenging because success isn’ t defined in terms of productivity and performance, but in terms of enjoyment and interaction. Current subjective methods of evaluating entertainment technology aren’ t sufficiently robust. This paper describes two experiments designed to test the efficacy of physiological measures as evaluators of user experience with entertainment technologies. We found evidence that there is a different physiological response in the body when playing against a computer versus playing against a friend. These physiological results are mirrored in the subjective reports provided by the participants. In addition, we provide guidelines for collecting physiological data for user experience analysis, which were informed by our empirical investigations. This research provides an initial step towards using physiological responses to objectively evaluate a user’s experience with entertainment technology.",
"title": ""
},
{
"docid": "4ac083b7e2900eb5cc80efd6022c76c1",
"text": "We investigate the problem of reconstructing normals, albedo and lights of Lambertian surfaces in uncalibrated photometric stereo under the perspective projection model. Our analysis is based on establishing the integrability constraint. In the orthographic projection case, it is well-known that when such constraint is imposed, a solution can be identified only up to 3 parameters, the so-called generalized bas-relief (GBR) ambiguity. We show that in the perspective projection case the solution is unique. We also propose a closed-form solution which is simple, efficient and robust. We test our algorithm on synthetic data and publicly available real data. Our quantitative tests show that our method outperforms all prior work of uncalibrated photometric stereo under orthographic projection.",
"title": ""
},
{
"docid": "d495f9ae71492df9225249147563a3d9",
"text": "The control of a PWM rectifier with LCL-filter using a minimum number of sensors is analyzed. In addition to the DC-link voltage either the converter or line current is measured. Two different ways of current control are shown, analyzed and compared by simulations as well as experimental investigations. Main focus is spent on active damping of the LCL filter resonance and on robustness against line inductance variations.",
"title": ""
},
{
"docid": "509731f3ae004c797c25add85faf6939",
"text": "Based on the real data of a Chinese commercial bank’s credit card, in this paper, we classify the credit card customers into four classifications by K-means. Then we built forecasting models separately based on four data mining methods such as C5.0, neural network, chi-squared automatic interaction detector, and classification and regression tree according to the background information of the credit cards holders. Conclusively, we obtain some useful information of decision tree regulation by the best model among the four. The information is not only helpful for the bank to understand related characteristics of different customers, but also marketing representatives to find potential customers and to implement target marketing.",
"title": ""
},
{
"docid": "6b0b505c9ec2686c775b9af353d3287b",
"text": "OBJECTIVE\nTo determine the prevalence of additional injuries or bleeding disorders in a large population of young infants evaluated for abuse because of apparently isolated bruising.\n\n\nSTUDY DESIGN\nThis was a prospectively planned secondary analysis of an observational study of children<10 years (120 months) of age evaluated for possible physical abuse by 20 US child abuse teams. This analysis included infants<6 months of age with apparently isolated bruising who underwent diagnostic testing for additional injuries or bleeding disorders.\n\n\nRESULTS\nAmong 2890 children, 33.9% (980/2890) were <6 months old, and 25.9% (254/980) of these had bruises identified. Within this group, 57.5% (146/254) had apparently isolated bruises at presentation. Skeletal surveys identified new injury in 23.3% (34/146), neuroimaging identified new injury in 27.4% (40/146), and abdominal injury was identified in 2.7% (4/146). Overall, 50% (73/146) had at least one additional serious injury. Although testing for bleeding disorders was performed in 70.5% (103/146), no bleeding disorders were identified. Ultimately, 50% (73/146) had a high perceived likelihood of abuse.\n\n\nCONCLUSIONS\nInfants younger than 6 months of age with bruising prompting subspecialty consultation for abuse have a high risk of additional serious injuries. Routine medical evaluation for young infants with bruises and concern for physical abuse should include physical examination, skeletal survey, neuroimaging, and abdominal injury screening.",
"title": ""
},
{
"docid": "65192c3b3e3bfe96e187bf391df049b4",
"text": "This paper presents a new single-stage singleswitch (S4) high power factor correction (PFC) AC/DC converter suitable for low power applications (< 150 W) with a universal input voltage range (90–265 Vrms). The proposed topology integrates a buck-boost input current shaper followed by a buck and a buck-boost converter, respectively. As a result, the proposed converter can operate with larger duty cycles compared to the exiting S4 topologies; hence, making them suitable for extreme step-down voltage conversion applications. Several desirable features are gained when the three integrated converter cells operate in discontinuous conduction mode (DCM). These features include low semiconductor voltage stress, zero-current switch at turn-on, and simple control with a fast well-regulated output voltage. A detailed circuit analysis is performed to derive the design equations. The theoretical analysis and effectiveness of the proposed approach are confirmed by experimental results obtained from a 35-W/12-Vdc laboratory prototype.",
"title": ""
},
{
"docid": "e46943cc1c73a56093d4194330d52d52",
"text": "This paper deals with the compact modeling of an emerging technology: the carbon nanotube field-effect transistor (CNTFET). The paper proposed two design-oriented compact models, the first one for CNTFET with a classical behavior (MOSFET-like CNTFET), and the second one for CNTFET with an ambipolar behavior (Schottky-barrier CNTFET). Both models have been compared with exact numerical simulations and then implemented in VHDL-AMS",
"title": ""
},
{
"docid": "cbe70e9372d1588f075d2037164b3077",
"text": "Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.",
"title": ""
},
{
"docid": "09fdc74a146a876e44bec1eca1bf7231",
"text": "With more and more people around the world learning Chinese as a second language, the need of Chinese error correction tools is increasing. In the HSK dynamic composition corpus, word usage error (WUE) is the most common error type. In this paper, we build a neural network model that considers both target erroneous token and context to generate a correction vector and compare it against a candidate vocabulary to propose suitable corrections. To deal with potential alternative corrections, the top five proposed candidates are judged by native Chinese speakers. For more than 91% of the cases, our system can propose at least one acceptable correction within a list of five candidates. To the best of our knowledge, this is the first research addressing general-type Chinese WUE correction. Our system can help non-native Chinese learners revise their sentences by themselves. Title and Abstract in Chinese",
"title": ""
},
{
"docid": "9807eace5f1f89f395fb8dff9dda13ab",
"text": "This article provides a new, more comprehensive view of event-related brain dynamics founded on an information-based approach to modeling electroencephalographic (EEG) dynamics. Most EEG research focuses either on peaks 'evoked' in average event-related potentials (ERPs) or on changes 'induced' in the EEG power spectrum by experimental events. Although these measures are nearly complementary, they do not fully model the event-related dynamics in the data, and cannot isolate the signals of the contributing cortical areas. We propose that many ERPs and other EEG features are better viewed as time/frequency perturbations of underlying field potential processes. The new approach combines independent component analysis (ICA), time/frequency analysis, and trial-by-trial visualization that measures EEG source dynamics without requiring an explicit head model.",
"title": ""
},
{
"docid": "63efc2ce1756f64a0328ecb64cb9200b",
"text": "Memory analysis has gained popularity in recent years proving to be an effective technique for uncovering malware in compromised computer systems. The process of memory acquisition presents unique evidentiary challenges since many acquisition techniques require code to be run on a potential compromised system, presenting an avenue for anti-forensic subversion. In this paper, we examine a number of simple anti-forensic techniques and test a representative sample of current commercial and free memory acquisition tools. We find that current tools are not resilient to very simple anti-forensic measures. We present a novel memory acquisition technique, based on direct page table manipulation and PCI hardware introspection, without relying on operating system facilities making it more difficult to subvert. We then evaluate this technique’s further vulnerability to subversion by considering more advanced anti-forensic attacks. a 2013 Johannes Stüttgen and Michael Cohen. Published by Elsevier Ltd. All rights",
"title": ""
},
{
"docid": "b9a84b723f946ab8c3dd17ae98b5868a",
"text": "For many NLP applications such as Information Extraction and Sentiment Detection, it is of vital importance to distinguish between synonyms and antonyms. While the general assumption is that distributional models are not suitable for this task, we demonstrate that using suitable features, differences in the contexts of synonymous and antonymous German adjective pairs can be identified with a simple word space model. Experimenting with two context settings (a simple windowbased model and a ‘co-disambiguation model’ to approximate adjective sense disambiguation), our best model significantly outperforms the 50% baseline and achieves 70.6% accuracy in a synonym/antonym classification task.",
"title": ""
},
{
"docid": "9581483f301b3522b88f6690b2668217",
"text": "AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. However, the use of the scientific method – specifically hypothesis testing – in AI is typically conducted in service of engineering objectives. Growing interest in topics such as fairness and algorithmic bias show that engineering-focused questions only comprise a subset of the important questions about AI systems. This results in the AI Knowledge Gap: the number of unique AI systems grows faster than the number of studies that characterize these systems’ behavior. To close this gap, we argue that the study of AI could benefit from the greater inclusion of researchers who are well positioned to formulate and test hypotheses about the behavior of AI systems. We examine the barriers preventing social and behavioral scientists from conducting such studies. Our diagnosis suggests that accelerating the scientific study of AI systems requires new incentives for academia and industry, mediated by new tools and institutions. To address these needs, we propose a two-sided marketplace called TuringBox. On one side, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks designed to evaluate and characterize algorithmic behavior. We discuss this market’s potential to democratize the scientific study of AI behavior, and thus narrow the AI Knowledge Gap. 1 The Many Facets of AI Research Although AI is a sub-discipline of computer science, AI researchers do not exclusively use the scientific method in their work. For example, the methods used by early AI researchers often drew from logic, a subfield of mathematics, and are distinct from the scientific method we think of today. Indeed AI has adopted many techniques and approaches over time. In this section, we distinguish and explore the history of these ∗Equal contribution. methodologies with a particular emphasis on characterizing the evolving science of AI.",
"title": ""
},
{
"docid": "5d15118fcb25368fc662deeb80d4ef28",
"text": "A5-GMR-1 is a synchronous stream cipher used to provide confidentiality for communications between satellite phones and satellites. The keystream generator may be considered as a finite state machine, with an internal state of 81 bits. The design is based on four linear feedback shift registers, three of which are irregularly clocked. The keystream generator takes a 64-bit secret key and 19-bit frame number as inputs, and produces an output keystream of length berween 28 and 210 bits.\n Analysis of the initialisation process for the keystream generator reveals serious flaws which significantly reduce the number of distinct keystreams that the generator can produce. Multiple (key, frame number) pairs produce the same keystream, and the relationship between the various pairs is easy to determine. Additionally, many of the keystream sequences produced are phase shifted versions of each other, for very small phase shifts. These features increase the effectiveness of generic time-memory tradeoff attacks on the cipher, making such attacks feasible.",
"title": ""
},
{
"docid": "15e866c21b0739b7a2e24dc8ee5f1833",
"text": "Plastics have outgrown most man-made materials and have long been under environmental scrutiny. However, robust global information, particularly about their end-of-life fate, is lacking. By identifying and synthesizing dispersed data on production, use, and end-of-life management of polymer resins, synthetic fibers, and additives, we present the first global analysis of all mass-produced plastics ever manufactured. We estimate that 8300 million metric tons (Mt) as of virgin plastics have been produced to date. As of 2015, approximately 6300 Mt of plastic waste had been generated, around 9% of which had been recycled, 12% was incinerated, and 79% was accumulated in landfills or the natural environment. If current production and waste management trends continue, roughly 12,000 Mt of plastic waste will be in landfills or in the natural environment by 2050.",
"title": ""
},
{
"docid": "b5831795da97befd3241b9d7d085a20f",
"text": "Want to learn more about the background and concepts of Internet congestion control? This indispensable text draws a sketch of the future in an easily comprehensible fashion. Special attention is placed on explaining the how and why of congestion control mechanisms complex issues so far hardly understood outside the congestion control research community. A chapter on Internet Traffic Management from the perspective of an Internet Service Provider demonstrates how the theory of congestion control impacts on the practicalities of service delivery.",
"title": ""
},
{
"docid": "3357bcf236fdb8077a6848423a334b45",
"text": "According to the latest investigation, there are 1.7 million active social network users in Taiwan. Previous researches indicated social network posts have a great impact on users, and mostly, the negative impact is from the rising demands of social support, which further lead to heavier social overload. In this study, we propose social overloaded posts detection model (SODM) by deploying the latest text mining and deep learning techniques to detect the social overloaded posts and, then with the developed social overload prevention system (SOS), the social overload posts and non-social overload ones are rearranged with different sorting methods to prevent readers from excessive demands of social support or social overload. The empirical results show that our SOS helps readers to alleviate social overload when reading via social media.",
"title": ""
},
{
"docid": "58b825902e652cc2ae0bfd867bd4f5d9",
"text": "Considers present and future practical applications of cross-reality. From tools to build new 3D virtual worlds to the products of those tools, cross-reality is becoming a staple of our everyday reality. Practical applications of cross-reality include the ability to virtually visit a factory to manage and maintain resources from the comfort of your laptop or desktop PC as well as sentient visors that augment reality with additional information so that users can make more informed choices. Tools and projects considered are:Project Wonderland for multiuser mixed reality;ClearWorlds: mixed- reality presence through virtual clearboards; VICI (Visualization of Immersive and Contextual Information) for ubiquitous augmented reality based on a tangible user interface; Mirror World Chocolate Factory; and sentient visors for browsing the world.",
"title": ""
}
] |
scidocsrr
|
a0d22a863b254dccd516fa63ae9be5e2
|
Electronic word-of-mouth: Challenges and opportunities
|
[
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
}
] |
[
{
"docid": "8214191a507f7eb2d9c3315e8959c08d",
"text": "This paper addresses issues about the rejection of false jammer targets in the presence of digital radio frequency memory (DRFM) repeat jammer. An anti-jamming filtering technique is proposed that it can eliminate this type of jamming signal. By using a stretch processing with a particular selected reference signal, the presented method can fully separate the echoes being reflected from the true targets and the signals being re-transmitted by a jammer in frequency domain. Therefore, utilizing the nonoverlapping properties of the received signals, filters or suchlike techniques can be used to reject the undesired jamming signals. Particularly, this method does not require estimation of jamming signal parameters and does not involve a great computation burden. Simulations are given to show the validity of the introduced approach.",
"title": ""
},
{
"docid": "f9076f4dbc5789e89ed758d0ad2c6f18",
"text": "This paper presents an innovative manner of obtaining discriminative texture signatures by using the LBP approach to extract additional sources of information from an input image and by using fractal dimension to calculate features from these sources. Four strategies, called Min, Max, Diff Min and Diff Max , were tested, and the best success rates were obtained when all of them were employed together, resulting in an accuracy of 99.25%, 72.50% and 86.52% for the Brodatz, UIUC and USPTex databases, respectively, using Linear Discriminant Analysis. These results surpassed all the compared methods in almost all the tests and, therefore, confirm that the proposed approach is an effective tool for texture analysis. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ab02c4ebc5449a4371e7ebd22fd0db48",
"text": "A number of marketing phenomena are too complex for conventional analytical or empirical approaches. This makes marketing a costly process of trial and error: proposing, imagining, trying in the real world, and seeing results. Alternatively, Agent-based Social Simulation (ABSS) is becoming the most popular approach to model and study these phenomena. This research paradigm allows modeling a virtual market to: design, understand, and evaluate marketing hypotheses before taking them to the real world. However, there are shortcomings in the specialized literature such as the lack of methods, data, and implemented tools to deploy a realistic virtual market with ABSS. To advance the state of the art in this complex and interesting problem, this paper is a seven-fold contribution based on a (1) method to design and validate viral marketing strategies in Twitter by ABSS. The method is illustrated with the widely studied problem of rumor diffusion in social networks. After (2) an extensive review of the related works for this problem, (3) an innovative spread model is proposed which rests on the exploratory data analysis of two different rumor datasets in Twitter. Besides, (4) new strategies are proposed to control malicious gossips. (5) The experimental results validate the realism of this new propagation model with the datasets and (6) the strategies performance is evaluated over this model. (7) Finally, the article is complemented by a free and open-source simulator.",
"title": ""
},
{
"docid": "6761bd757cdd672f60c980b081d4dbc8",
"text": "Real-time eye and iris tracking is important for handsoff gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "fe360177f5a13e4b50489a6a96bead01",
"text": "Previous work on automatic summarization does not thoroughly consider coherence while generating the summary. We introduce a graph-based approach to summarize scientific articles. We employ coherence patterns to ensure that the generated summaries are coherent. The novelty of our model is twofold: we mine coherence patterns in a corpus of abstracts, and we propose a method to combine coherence, importance and non-redundancy to generate the summary. We optimize these factors simultaneously using Mixed Integer Programming. Our approach significantly outperforms baseline and state-of-the-art systems in terms of coherence (summary coherence assessment) and relevance (ROUGE scores).",
"title": ""
},
{
"docid": "f4a2e2cc920e28ae3d7539ba8b822fb7",
"text": "Neurologic injuries, such as stroke, spinal cord injuries, and weaknesses of skeletal muscles with elderly people, may considerably limit the ability of this population to achieve the main daily living activities. Recently, there has been an increasing interest in the development of wearable devices, the so-called exoskeletons, to assist elderly as well as patients with limb pathologies, for movement assistance and rehabilitation. In this paper, we review and discuss the state of the art of the lower limb exoskeletons that are mainly used for physical movement assistance and rehabilitation. An overview of the commonly used actuation systems is presented. According to different case studies, a classification and comparison between different types of actuators is conducted, such as hydraulic actuators, electrical motors, series elastic actuators, and artificial pneumatic muscles. Additionally, the mainly used control strategies in lower limb exoskeletons are classified and reviewed, based on three types of human-robot interfaces: the signals collected from the human body, the interaction forces between the exoskeleton and the wearer, and the signals collected from exoskeletons. Furthermore, the performances of several typical lower limb exoskeletons are discussed, and some assessment methods and performance criteria are reviewed. Finally, a discussion of the major advances that have been made, some research directions, and future challenges are presented.",
"title": ""
},
{
"docid": "8e6be29997001367542283e94c7d8f05",
"text": "Character recognition has been widely used since its inception in applications involved processing of scanned or camera-captured documents. There exist multiple scripts in which the languages are written. The scripts could broadly be divided into cursive and non-cursive scripts. The recurrent neural networks have been proved to obtain state-of-the-art results for optical character recognition. We present a thorough investigation of the performance of recurrent neural network (RNN) for cursive and non-cursive scripts. We employ bidirectional long short-term memory (BLSTM) networks, which is a variant of the standard RNN. The output layer of the architecture used to carry out our investigation is a special layer called connectionist temporal classification (CTC) which does the sequence alignment. The CTC layer takes as an input the activations of LSTM and aligns the target labels with the inputs. The results were obtained at the character level for both cursive Urdu and non-cursive English scripts are significant and suggest that the BLSTM technique is potentially more useful than the existing OCR algorithms.",
"title": ""
},
{
"docid": "05bcc85ca42945987a6f0c6c2839fa0a",
"text": "Abstract. Blockchain has many benefits including decentralization, availability, persistency, consistency, anonymity, auditability and accountability, and it also covers a wide spectrum of applications ranging from cryptocurrency, financial services, reputation system, Internet of Things, sharing economy to public and social services. Not only may blockchain be regarded as a by-product of Bitcoin cryptocurrency systems, but also it is a type of distributed ledger technology through using a trustworthy, decentralized log of totally ordered transactions. By summarizing the literature of blockchain, it is found that more papers focus on engineering implementation and realization, while little work has been done on basic theory, for example, mathematical models (Markov processes, queueing theory and game models), performance analysis and optimization of blockchain systems. In this paper, we develop queueing theory of blockchain systems and provide system performance evaluation. To do this, we design a Markovian batch-service queueing system with two different service stages, while the two stages are suitable to well express the mining process in the miners pool and the building of a new blockchain. By using the matrix-geometric solution, we obtain a system stable condition and express three key performance measures: (a) The number of transactions in the queue, (b) the number of transactions in a block, and (c) the transaction-confirmation time. Finally, We use numerical examples to verify computability of our theoretical results. Although our queueing model is simple under exponential or Poisson assumptions, our analytic method will open a series of potentially promising research in queueing theory of blockchain systems.",
"title": ""
},
{
"docid": "1b7048c328414573f55cc4aed2744496",
"text": "Structural Health Monitoring (SHM) can be understood as the integration of sensing and intelligence to enable the structure loading and damage-provoking conditions to be recorded, analyzed, localized, and predicted in such a way that nondestructive testing becomes an integral part of them. In addition, SHM systems can include actuation devices to take proper reaction or correction actions. SHM sensing requirements are very well suited for the application of optical fiber sensors (OFS), in particular, to provide integrated, quasi-distributed or fully distributed technologies. In this tutorial, after a brief introduction of the basic SHM concepts, the main fiber optic techniques available for this application are reviewed, emphasizing the four most successful ones. Then, several examples of the use of OFS in real structures are also addressed, including those from the renewable energy, transportation, civil engineering and the oil and gas industry sectors. Finally, the most relevant current technical challenges and the key sector markets are identified. This paper provides a tutorial introduction, a comprehensive background on this subject and also a forecast of the future of OFS for SHM. In addition, some of the challenges to be faced in the near future are addressed.",
"title": ""
},
{
"docid": "c736258623c7f977ebc00f5555d13e02",
"text": "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.",
"title": ""
},
{
"docid": "f709802a6da7db7c71dfa67930111b04",
"text": "Generative adversarial networks (GANs) are a class of unsupervised machine learning algorithms that can produce realistic images from randomly-sampled vectors in a multi-dimensional space. Until recently, it was not possible to generate realistic high-resolution images using GANs, which has limited their applicability to medical images that contain biomarkers only detectable at native resolution. Progressive growing of GANs is an approach wherein an image generator is trained to initially synthesize low resolution synthetic images (8x8 pixels), which are then fed to a discriminator that distinguishes these synthetic images from real downsampled images. Additional convolutional layers are then iteratively introduced to produce images at twice the previous resolution until the desired resolution is reached. In this work, we demonstrate that this approach can produce realistic medical images in two different domains; fundus photographs exhibiting vascular pathology associated with retinopathy of prematurity (ROP), and multi-modal magnetic resonance images of glioma. We also show that fine-grained details associated with pathology, such as retinal vessels or tumor heterogeneity, can be preserved and enhanced by including segmentation maps as additional channels. We envisage several applications of the approach, including image augmentation and unsupervised classification of pathology.",
"title": ""
},
{
"docid": "29505dcb2a40123c6ff700bf1017b5ce",
"text": "The development of algorithms for hierarchical clustering has been hampered by a shortage of precise objective functions. To help address this situation, we introduce a simple cost function on hierarchies over a set of points, given pairwise similarities between those points. We show that this criterion behaves sensibly in canonical instances and that it admits a top-down construction procedure with a provably good approximation ratio.",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
},
{
"docid": "74da516d4a74403ac5df760b0b656b1f",
"text": "In this paper a novel and effective approach for automated audio classification is presented that is based on the fusion of different sets of features, both visual and acoustic. A number of different acoustic and visual features of sounds are evaluated and compared. These features are then fused in an ensemble that produces better classification accuracy than other state-of-the-art approaches. The visual features of sounds are built starting from the audio file and are taken from images constructed from different spectrograms, a gammatonegram, and a rhythm image. These images are divided into subwindows from which a set of texture descriptors are extracted. For each feature descriptor a different Support Vector Machine (SVM) is trained. The SVMs outputs are summed for a final decision. The proposed ensemble is evaluated on three well-known databases of music genre classification (the Latin Music Database, the ISMIR 2004 database, and the GTZAN genre collection), a dataset of Bird vocalization aiming specie recognition, and a dataset of right whale calls aiming whale detection. The MATLAB code for the ensemble of classifiers and for the extraction of the features will be publicly available (https://www.dei.unipd.it/node/2357 +Pattern Recognition and Ensemble Classifiers).",
"title": ""
},
{
"docid": "8bdd02547be77f4c825c9aed8016ddf8",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "a552f0ee9fafe273859a11f29cf7670d",
"text": "A majority of the existing stereo matching algorithms assume that the corresponding color values are similar to each other. However, it is not so in practice as image color values are often affected by various radiometric factors such as illumination direction, illuminant color, and imaging device changes. For this reason, the raw color recorded by a camera should not be relied on completely, and the assumption of color consistency does not hold good between stereo images in real scenes. Therefore, the performance of most conventional stereo matching algorithms can be severely degraded under the radiometric variations. In this paper, we present a new stereo matching measure that is insensitive to radiometric variations between left and right images. Unlike most stereo matching measures, we use the color formation model explicitly in our framework and propose a new measure, called the Adaptive Normalized Cross-Correlation (ANCC), for a robust and accurate correspondence measure. The advantage of our method is that it is robust to lighting geometry, illuminant color, and camera parameter changes between left and right images, and does not suffer from the fattening effect unlike conventional Normalized Cross-Correlation (NCC). Experimental results show that our method outperforms other state-of-the-art stereo methods under severely different radiometric conditions between stereo images.",
"title": ""
},
{
"docid": "5e2536588d34ab0067af1bd716489531",
"text": "Recommender systems support user decision-making, and explanations of recommendations further facilitate their usefulness. Previous explanation styles are based on similar users, similar items, demographics of users, and contents of items. Contexts, such as usage scenarios and accompanying persons, have not been used for explanations, although they influence user decisions. In this paper, we propose a context style explanation method, presenting contexts suitable for consuming recommended items. The expected impacts of context style explanations are 1) persuasiveness: recognition of suitable context for usage motivates users to consume items, and 2) usefulness: envisioning context helps users to make right choices because the values of items depend on contexts. We evaluate context style persuasiveness and usefulness by a crowdsourcing-based user study in a restaurant recommendation setting. The context style explanation is compared to demographic and content style explanations. We also combine context style and other explanation styles, confirming that hybrid styles improve persuasiveness and usefulness of explanation.",
"title": ""
},
{
"docid": "291ee9114488b7b8e20e9568fbf85afe",
"text": "Today, data availability has gone from scarce to superabundant. Technologies like IoT, trends in social media and the capabilities of smart-phones are producing and digitizing lots of data that was previously unavailable. This massive increase of data creates opportunities to gain new business models, but also demands new techniques and methods of data quality in knowledge discovery, especially when the data comes from different sources (e.g., sensors, social networks, cameras, etc.). The data quality process of the data set proposes conclusions about the information they contain. This is increasingly done with the aid of data cleaning approaches. Therefore, guaranteeing a high data quality is considered as the primary goal of the data scientist. In this paper, we propose a process for data cleaning in regression models (DC-RM). The proposed data cleaning process is evaluated through a real datasets coming from the UCI Repository of Machine Learning Databases. With the aim of assessing the data cleaning process, the dataset that is cleaned by DC-RM was used to train the same regression models proposed by the authors of UCI datasets. The results achieved by the trained models with the dataset produced by DC-RM are better than or equal to that presented by the datasets’ authors.",
"title": ""
},
{
"docid": "9c15e5ef720d42e1cc6d757391946146",
"text": "Verifying robustness of neural network classifiers has attracted great interests and attention due to the success of deep neural networks and their unexpected vulnerability to adversarial perturbations. Although finding minimum adversarial distortion of neural networks (with ReLU activations) has been shown to be an NP-complete problem, obtaining a non-trivial lower bound of minimum distortion as a provable robustness guarantee is possible. However, most previous works only focused on simple fully-connected layers (multilayer perceptrons) and were limited to ReLU activations. This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks. Our framework is general – we can handle various architectures including convolutional layers, max-pooling layers, batch normalization layer, residual blocks, as well as general activation functions; our approach is efficient – by exploiting the special structure of convolutional layers, we achieve up to 17 and 11 times of speed-up compared to the state-of-the-art certification algorithms (e.g. Fast-Lin, CROWN) and 366 times of speed-up compared to the dual-LP approach while our algorithm obtains similar or even better verification bounds. In addition, CNN-Cert generalizes state-of-the-art algorithms e.g. Fast-Lin and CROWN. We demonstrate by extensive experiments that our method outperforms state-of-the-art lowerbound-based certification algorithms in terms of both bound quality and speed.",
"title": ""
}
] |
scidocsrr
|
70efe5abbfaba4e4e37050dc906b7a85
|
Maximum battery life routing to support ubiquitous mobile computing in wireless ad hoc networks
|
[
{
"docid": "bbdb676a2a813d29cd78facebc38a9b8",
"text": "In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the adition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.",
"title": ""
},
{
"docid": "b5da410382e8ad27f012f3adac17592e",
"text": "In this paper, we propose a new routing protocol, the Zone Routing Protocol (ZRP), for the Reconfigurable Wireless Networks, a large scale, highly mobile ad-hoc networking environment. The novelty of the ZRP protocol is that it is applicable to large flat-routed networks. Furthermore, through the use of the zone radius parameter, the scheme exhibits adjustable hybrid behavior of proactive and reactive routing schemes. We evaluate the performance of the protocol, showing the reduction in the number of control messages, as compared with other reactive schemes, such as flooding. INTRODUCTION Recently, there has been an increased interest in ad-hoc networking [1]. In general, ad-hoc networks are network architecture that can be rapidly deployed, without preexistence of any fixed infrastructure. A special case of ad-hoc networks, the Reconfigurable Wireless Networks (RWN), was previously introduced [2,3] to emphasize a number of special characteristics of the RWN communication environment: 3⁄4 large network coverage; large network radius, net r , 3⁄4 large number of network nodes, and 3⁄4 large range of nodal velocities (from stationary to highly mobile). In particular, the topology of the RWN is quite frequently changing, while self-adapting to the connectivity and propagation conditions and to the traffic and mobility patterns. Examples of the use of the RWNs are: • military (tactical) communication for fast establishment of communication infrastructure during deployment of forces in a foreign (hostile) terrain • rescue missions for communication in areas without adequate wireless coverage • national security for communication in times of national crisis, when the existing communication infrastructure is non-operational due to a natural disasters or a global war • law enforcement similar to tactical communication 1 For example, the maximal nodal velocity is such that the lifetime of a link can be between hundreds of milliseconds to few seconds only. • commercial use for setting up communication in exhibitions, conferences, or sale presentations • education for operation of virtual classrooms • sensor networks for communication between intelligent sensors (e.g., MEMS) mounted on mobile platforms. Basically, there are two approaching in providing ad-hoc network connectivity: flat-routed or hierarchical network architectures. An example of a flat-routed network is shown in Figure 1 and of a two-tiered hierarFigure 1: A flat-routed ad-hoc network chical network in Figure 2. In flat-routed networks, all the nodes are “equal” and the packet routing is done based on peer-to-peer connections, restricted only by the propagation conditions. In hierarchical networks, there are at least two tiers; on the lower tier, nodes in geographical proximity create peer-to-peer networks. In each one of these lower-tier networks, at least one node is designated to serve as a \"gateway” to the higher tier. These “gateway” nodes create the highertier network, which usually requires more powerful transmitters/receivers. Although routing between nodes that belong to the same lower-tier network is based on peer-to-peer routing, routing between nodes that belong to different lower-tier networks is through the gateway nodes. Figure 2: A two-tiered ad-hoc network tier-1 network tier-2 network tier-1 network tier-1 network tier-1 network cluster cluster head We will omit here the comparison of the two architectures. 
Nevertheless, we note that the flat-routed networks are more suitable for the highly versatile communication environment as the RWN-s. The reason is that the maintenance of the hierarchies (and the associated cluster heads) is too costly in network resources when the lifetime of the links is quite short. Thus, we chose to concentrate on the flat-routed network architecture in our study of the routing protocols for the RWN. PREVIOUS AND RELATED WORK The currently available routing protocols are inadequate for the RWN. The main problem is that they do not support either fast-changeable network architecture or that they do not scale well with the size of the network (number of nodes). Surprisingly, these shortcomings are present even in some routing protocols that were proposed for ad-hoc networks. More specifically, the challenge stems from the fact that, on one hand, in-order to route packets in a network, the network topology needs to be known to the traversed nodes. On the other hand, in a RWN, this topology may change quite often. Also, the number of nodes may be very large. Thus, the cost of updates is quite high, in contradiction with the fact that updates are expensive in the wireless communication environment. Furthermore, as the number of network nodes may be large, the potential number of destinations is also large, requiring large and frequent exchange of data (e.g., routes, routes updates, or routing tables) between network nodes. The wired Internet uses routing protocols based on topological broadcast, such as the OSPF [4]. These protocols are not suitable for the RWN due to the relatively large bandwidth required for update messages. In the past, routing in multi-hop packet radio networks was based on shortest-path routing algorithms [5], such as Distributed Bellman-Ford (DBF) algorithm. These algorithms suffer from very slow convergence (the “counting to infinity” problem). Besides, DBF-like algorithms incur large update message penalty. Protocols that attempted to cure some of the shortcoming of DFB, such as DestinationSequenced Distance-Vector Routing (DSDV) [6], were proposed and studied. Nevertheless, synchronization problems and extra processing overhead are common in these protocols. Other protocols that rely on the information from the predecessor of the shortest path solve the slow convergence problem of DBF (e.g., [7]). However, the processing requirements of these protocols may be quite high, because of the way they process the update messages. Use of dynamic source routing protocol, which utilizes flooding to discover a route to a destination, is described in [8]. A number of optimization techniques, such as route caching are also presented that reduce the route determination/maintenance overhead. In a highly dynamic environment, such as the RWN is, this type of protocols lead to a large delay and the techniques to reduce overhead may not perform well. A query-reply based routing protocol has been introduced recently in [9]. Practical implementation of this protocol in the RWN-s can lead, however, to high communication requirements. A new distance-vector routing protocol for packet radio networks (WRP) is presented in [10]. Upon change in the network topology, WRP relies on communicating the change to its neighbors, which effectively propagates throughout the whole network. The salient advantage of WRP is the considerable reduction in the probability of loops in the calculated routes. 
The main disadvantage of WRP for the RWN is in the fact that routing nodes constantly maintain full routing information in each network node, which was obtained at relatively high cost in wireless resources In [11], routing is based on temporary addresses assigned to nodes. These addresses are concatenation of the node’s addresses on a physical and a virtual networks. However, routing requires full connectivity among all the physical network nodes. Furthermore, the routing may not be optimal, as it is based on addresses, which may not be related to the geographical locations, producing a long path for communication between two close-by nodes. The above routing protocols can be classified either as proactive or as reactive. Proactive protocols attempt to continuously evaluate the routes within the network, so that when a packet needs to be forwarded, the route is already known and can be immediately used. Reactive protocols, on the other hand, invoke the route determination procedures on demand only. Thus, when a route is needed, some sort of global search procedure is employed. The advantage of the proactive schemes is that, once a route is requested, there is little delay until route is determined. In reactive protocols, because route information may not be available at the time a routing request is received, the delay to determine a route can be quite significant. Because of this long delay, pure reactive routing protocols may not be applicable to realtime communication. However, pure proactive schemes are likewise not appropriate for the RWN environment, as they continuously use large portion of the network capacity to keep the routing information current. Since in an RWN nodes move quite fast, and as the changes may be more frequent than the routing requests, most of this routing information is never used! This results in an excessive waste of the network capacity. What is needed is a protocol that, on one hand, initiates the route-determination procedure on-demand, but with limited cost of the global search. The introduced here routing protocol, which is based on the notion of routing zones, incurs very low overhead in route determination. It requires maintaining a small amount of routing information in each node. There is no overhead of wireless resources to maintain routing information of inactive routes. Moreover, it identifies multiple routes with no looping problems. The ZONE ROUTING PROTOCOL (ZRP) Our approach to routing in the RWN is based on the notion of a routing zone, which is defined for each node and includes the nodes whose distance (e.g., in hops) is at most some predefined number. This distance is referred to here as the zone radius, zone r . Each node is required to know the topology of the network within its routing zone only and nodes are updated about topological changes only within their routing zone. Thus, even though a network can be quite large, the updates are only locally propagated. Since for radius greater than 1 the routing zones heavily overlap, the routing tends to be extremely robust. The rout",
"title": ""
}
] |
[
{
"docid": "31a198040fed8ce96dae2968a4060e4d",
"text": "Recent research has indicated that the degree of strategic planning in organisations is likely to have a direct impact on business performance and business evaluation. However, these findings leave small and medium-sized businesses (SMEs) in particular, with the challenge of matching the requirement for an improved strategic planning processes with the competitive advantage associated with being a “simple” and highly responsive organisation. In response to that challenge this paper discusses the potential benefits to SMEs in adopting the Balanced Scorecard methodology and the underlying management processes most relevant to SMEs. It also makes observations about how use and value may differ between Balanced Scorecard application in large and smaller enterprises.",
"title": ""
},
{
"docid": "abbb08ccfac8a7fb3bfe92e950bd4186",
"text": "This paper presents how text summarization can be influenced by textual entailment. We show that if we use textual entailment recognition together with text summarization approach, we achieve good results for final summaries, obtaining an improvement of 6.78% with respect to the summarization approach only. We also compare the performance of this combined approach to two baselines (the one provided in DUC 2002 and ours based on word-frequency technique) and we discuss the preliminary results obtained in order to infer conclusions that can be useful for future research.",
"title": ""
},
{
"docid": "6f6ebcdc15339df87b9499c0760936ce",
"text": "This paper outlines the design, implementation and evaluation of CAPTURE - a novel automated, continuously working cyber attack forecast system. It uses a broad range of unconventional signals from various public and private data sources and a set of signals forecasted via the Auto-Regressive Integrated Moving Average (ARIMA) model. While generating signals, auto cross correlation is used to find out the optimum signal aggregation and lead times. Generated signals are used to train a Bayesian classifier against the ground truth of each attack type. We show that it is possible to forecast future cyber incidents using CAPTURE and the consideration of the lead time could improve forecast performance.",
"title": ""
},
{
"docid": "0070d6e21bdb8bac260178603cfbf67d",
"text": "Sound is a medium that conveys functional and emotional information in a form of multilayered streams. With the use of such advantage, robot sound design can open a way for being more efficient communication in human-robot interaction. As the first step of research, we examined how individuals perceived the functional and emotional intention of robot sounds and whether the perceived information from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.",
"title": ""
},
{
"docid": "e84b6bbb2eaee0edb6ac65d585056448",
"text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.",
"title": ""
},
{
"docid": "51be236c79d1af7a2aff62a8049fba34",
"text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.",
"title": ""
},
{
"docid": "9c79105367f92ee1d6ac604af2105bf2",
"text": "Vector controlled motor drives are widely used in industry application areas, usually they contain two current sensors and a speed sensor. A fault diagnosis and reconfiguration structure is proposed in this paper including current sensor measurement errors and sensors open-circuit fault. Sliding windows and special features are designed to real-time detect the measurement errors, compensations are made according to detected offset and scaling values. When open-circuit faults occur, sensor outputs are constant-zero, the residuals between the Extended Kalman Filter (EKF) outputs and the sensors outputs are larger than pre-defined close-to-zero thresholds, under healthy condition, the residuals are equal to zero, as a result, the residuals can be used for open circuit fault detection. In this situation, the feedback signals immediately switch to EKF outputs to realize reconfiguration. Fair robustness are evaluated under disturbance such as load torque changes and variable speed. Simulation results show the effectiveness and merits of the proposed methods in this paper.",
"title": ""
},
{
"docid": "35a85bb270f1140d4dbb1090fd1e26cc",
"text": "English. The Citation Contexts of a cited entity can be seen as little tesserae that, fit together, can be exploited to follow the opinion of the scientific community towards that entity as well as to summarize its most important contents. This mosaic is an excellent resource of information also for identifying topic specific synonyms, indexing terms and citers’ motivations, i.e. the reasons why authors cite other works. Is a paper cited for comparison, as a source of data or just for additional info? What is the polarity of a citation? Different reasons for citing reveal also different weights of the citations and different impacts of the cited authors that go beyond the mere citation count metrics. Identifying the appropriate Citation Context is the first step toward a multitude of possible analysis and researches. So far, Citation Context have been defined in several ways in literature, related to different purposes, domains and applications. In this paper we present different dimensions of Citation Context investigated by researchers through the years in order to provide an introductory review of the topic to anyone approaching this subject. Italiano. Possiamo pensare ai Contesti Citazionali come tante tessere che, unite, possono essere sfruttate per seguire l’opinione della comunità scientifica riguardo ad un determinato lavoro o per riassumerne i contenuti più importanti. Questo mosaico di informazioni può essere utilizzato per identificare sinonimi specifici e Index Terms nonchè per individuare i motivi degli autori dietro le citazioni. Identificare il Contesto Citazionale ottimale è il primo passo per numerose analisi e ricerche. Il Contesto Citazionale è stato definito in diversi modi in letteratura, in relazione a differenti scopi, domini e applicazioni. In questo paper presentiamo le principali dimensioni testuali di Contesto Citazionale investigate dai ricercatori nel corso degli",
"title": ""
},
{
"docid": "5b7106a23930af7ccaeac561837c5154",
"text": "Recent years the number of vehicles increases tremendously. Because of that to identify the vehicle is significant task. Vehicle color and number plate recognition are various ways to identify the vehicle. So Vehicle color recognition essential part of an intelligent transportation system. There are several methods for recognizing the color of the vehicle like feature extract, template matching, convolutional neural network (CNN), etc. CNN is emerging technique within the field of Deep learning. The survey concludes that compared to other techniques CNN gives more accurate results with less training time even for large dataset. The images taken from roads or hill areas aren't visible because of haze. Consequently, removing haze may improve the color recognition. The proposed system combines both techniques and it adopts the dark channel prior technique to remove the haze, followed by feature learning using CNN. After feature learning, classification can be performed by effective classification technique like SVM.",
"title": ""
},
{
"docid": "6e97021a746cf7134d194f0ec58c3212",
"text": "Recently, medium-chain triglycerides (MCTs) containing a large fraction of lauric acid (LA) (C12)-about 30%-have been introduced commercially for use in salad oils and in cooking applications. As compared to the long-chain fatty acids found in other cooking oils, the medium-chain fats in MCTs are far less likely to be stored in adipose tissue, do not give rise to 'ectopic fat' metabolites that promote insulin resistance and inflammation, and may be less likely to activate macrophages. When ingested, medium-chain fatty acids are rapidly oxidised in hepatic mitochondria; the resulting glut of acetyl-coenzyme A drives ketone body production and also provokes a thermogenic response. Hence, studies in animals and humans indicate that MCT ingestion is less obesogenic than comparable intakes of longer chain oils. Although LA tends to raise serum cholesterol, it has a more substantial impact on high density lipoprotein (HDL) than low density lipoprotein (LDL) in this regard, such that the ratio of total cholesterol to HDL cholesterol decreases. LA constitutes about 50% of the fatty acid content of coconut oil; south Asian and Oceanic societies which use coconut oil as their primary source of dietary fat tend to be at low cardiovascular risk. Since ketone bodies can exert neuroprotective effects, the moderate ketosis induced by regular MCT ingestion may have neuroprotective potential. As compared to traditional MCTs featuring C6-C10, laurate-rich MCTs are more feasible for use in moderate-temperature frying and tend to produce a lower but more sustained pattern of blood ketone elevation owing to the more gradual hepatic oxidation of ingested laurate.",
"title": ""
},
{
"docid": "245371dccf75c8982f77c4d48d84d370",
"text": "This paper addresses the problem of streaming packetized media over a lossy packet network in a rate-distortion optimized way. We show that although the data units in a media presentation generally depend on each other according to a directed acyclic graph, the problem of rate-distortion optimized streaming of an entire presentation can be reduced to the problem of error-cost optimized transmission of an isolated data unit. We show how to solve the latter problem in a variety of scenarios, including the important common scenario of sender-driven streaming with feedback over a best-effort network, which we couch in the framework of Markov decision processes. We derive a fast practical algorithm for nearly optimal streaming in this scenario, and we derive a general purpose iterative descent algorithm for locally optimal streaming in arbitrary scenarios. Experimental results show that systems based on our algorithms have steady-state gains of 2-6 dB or more over systems that are not rate-distortion optimized. Furthermore, our systems essentially achieve the best possible performance: the operational distortion-rate function of the source at the capacity of the packet erasure channel.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "a448b5e4e4bd017049226f06ce32fa9d",
"text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.",
"title": ""
},
{
"docid": "896fa229bd0ffe9ef6da9fbe0e0866e6",
"text": "In this paper, a cascaded current-voltage control strategy is proposed for inverters to simultaneously improve the power quality of the inverter local load voltage and the current exchanged with the grid. It also enables seamless transfer of the operation mode from stand-alone to grid-connected or vice versa. The control scheme includes an inner voltage loop and an outer current loop, with both controllers designed using the H∞ repetitive control strategy. This leads to a very low total harmonic distortion in both the inverter local load voltage and the current exchanged with the grid at the same time. The proposed control strategy can be used to single-phase inverters and three-phase four-wire inverters. It enables grid-connected inverters to inject balanced clean currents to the grid even when the local loads (if any) are unbalanced and/or nonlinear. Experiments under different scenarios, with comparisons made to the current repetitive controller replaced with a current proportional-resonant controller, are presented to demonstrate the excellent performance of the proposed strategy.",
"title": ""
},
{
"docid": "767d0ad795eedc0109d3afe738dc9ce7",
"text": "We do not know a-priori what the normalised patch looks like. But we may know the transformation Tgt between the patches: Tgt = Tʹ T = Ψ(xʹ) Ψ(x) Learning Formulation Using synthetic translations of real patches: ǁTgt Ψ(xʹ) + Ψ(x)ǁ ≅ 0 General Local Feature Detectors A general covariant framework Generalization to Affine Covariant Detectors Features are oriented ellipses and transformations are affinities.",
"title": ""
},
{
"docid": "5228454ef59c012b079885b2cce0c012",
"text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.",
"title": ""
},
{
"docid": "740666c9391668a1e4763a612776ad75",
"text": "Building user empathy in a tech organization is crucial to ensure that products are designed with an eye toward user needs and experiences. The Pokerface program is a Google internal user empathy campaign with 26 researchers that helped more than 1500 employees-including engineers, product managers, designers, analysts, and program managers across more than 15 sites-have first-hand experiences with their users. Here, we discuss the goals of the Pokerface program, some challenges that we have faced during execution, and the impact we have measured thus far.",
"title": ""
},
{
"docid": "95411969fcf7e2ba1eb506edf30d7c3e",
"text": "The increasing implementation of various platforms and technology of information systems also increases the complexity of integration when the integration system is needed. This is just like what has happened in most of government areas. As we have done from a case study in Sleman, a regency of Yogyakarta Indonesia, it has many departments that use different platform and technology on implementing information system. Integration services using point-to-point method is considered to be irrelevant whereas the number of services are growing up rapidly and more complex. So, in this paper we have proposed a service orchestration mechanism using enterprise service bus (ESB) to integrate many services from many departments which used their owned platform and technology of information system. ESB can be the solution of n-to-n integration problem and it strongly supports the implementation of service oriented architecture (SOA). This paper covers the analysis, design and implementation of integration system in government area. Then the result of this integration has been deployed as a single real time executive dashboard system that can be useful for the governance in order to support them on making decision or policy. Functional and performance testing are used to ensure that the implementation of integration does not disrupt other transaction processes.",
"title": ""
},
{
"docid": "d9ce8f84bfac52a9d7d8a2924cec7e3d",
"text": "Urban water quality is of great importance to our daily lives. Prediction of urban water quality help control water pollution and protect human health. In this work, we forecast the water quality of a station over the next few hours, using a multitask multi-view learning method to fuse multiple datasets from different domains. In particular, our learning model comprises two alignments. The first alignment is the spaio-temporal view alignment, which combines local spatial and temporal information of each station. The second alignment is the prediction alignment among stations, which captures their spatial correlations and performs copredictions by incorporating these correlations. Extensive experiments on real-world datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "de96b6b43f68972faac8eec246e34c25",
"text": "The idea that chemotherapy can be used in combination with immunotherapy may seem somewhat counterproductive, as it can theoretically eliminate the immune cells needed for antitumour immunity. However, much preclinical work has now demonstrated that in addition to direct cytotoxic effects on cancer cells, a proportion of DNA damaging agents may actually promote immunogenic cell death, alter the inflammatory milieu of the tumour microenvironment and/or stimulate neoantigen production, thereby activating an antitumour immune response. Some notable combinations have now moved forward into the clinic, showing promise in phase I–III trials, whereas others have proven toxic, and challenging to deliver. In this review, we discuss the emerging data of how DNA damaging agents can enhance the immunogenic properties of malignant cells, focussing especially on immunogenic cell death, and the expansion of neoantigen repertoires. We discuss how best to strategically combine DNA damaging therapeutics with immunotherapy, and the challenges of successfully delivering these combination regimens to patients. With an overwhelming number of chemotherapy/immunotherapy combination trials in process, clear hypothesis-driven trials are needed to refine the choice of combinations, and determine the timing and sequencing of agents in order to stimulate antitumour immunological memory and improve maintained durable response rates, with minimal toxicity.",
"title": ""
}
] |
scidocsrr
|
b81daa4462a1c7345b66ff8c13391434
|
Better Alternatives to OSPF Routing
|
[
{
"docid": "9775092feda3a71c1563475bae464541",
"text": "Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, sptitting flow at nodes where several outgoing tinks are on shortest paths to the destination. The weights of the tinks, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to their physical distances, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic rec. ommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimiz@ the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic. Surprisingly it turned out that for the proposed AT&T WorldNet backbone, we found weight settiis that performed within a few percent from that of the optimal general routing where the flow for each demand is optimalty distributed over all paths between source and destination. This contrasts the common belief that OSPF routing leads to congestion and it shows that for the network and demand matrix studied we cannot get a substantially better load balancing by switching to the proposed more flexible Multi-protocol Label Switching (MPLS) technologies. Our techniques were atso tested on synthetic internetworks, based on a model of Zegura et al. (INFOCOM’96), for which we dld not always get quite as close to the optimal general routing. However, we compared witIs standard heuristics, such as weights inversely proportional to the capac.. ity or proportioml to the physical distances, and found that, for the same network and capacities, we could support a 50 Yo-1 10% increase in the demands. Our assumed demand matrix can also be seen as modeling service level agreements (SLAS) with customers, with demands representing guarantees of throughput for virtnal leased lines. Keywords— OSPF, MPLS, traffic engineering, local search, hashing ta. bles, dynamic shortest paths, mntti-cosnmodity network flows.",
"title": ""
}
] |
[
{
"docid": "c25ed65511cb0a22301896bbf4ebd84d",
"text": "This paper surveys the field of machine vision from a computer science perspective. It is written to act as an introduction to the field and presents the reader with references to specific implementations. Machine vision is a complex and developing field that can be broken into the three stages: stereo correspondence, scene reconstruction, and object recognition. We present the techniques and general approaches to each of these stages and summarize the future direction of research.",
"title": ""
},
{
"docid": "9dec25eadfc6835512487abb6ff061ba",
"text": "We consider the problem of how to enable a live video streaming service to vehicles in motion. In such applications, the video source can be a typical video server or vehicles with appropriate capability, while the video receivers are vehicles that are driving on the road. An infrastructure-based approach relies on strategically deployed base stations and video servers to forward video data to nearby vehicles. While this approach can provide a streaming video service to certain vehicles, it suffers from high base station deployment and maintenance cost. In this paper, we propose V3, an architecture to provide a live video streaming service to driving vehicles through vehicle-to-vehicle (V2V) networks. We argue that this solution is practical with the advance of wireless ad-hoc network techniques. With ample engine power, powerful computing capability and considerable data storage that a vehicle can provide, it is reasonable to support data-intensive video streaming service. On the other hand, V2V video streaming can be challenging because: 1) the V2V network may be persistently partitioned, and 2) the video sources are mobile and transient. V3 addresses these challenges by incorporating a novel signaling mechanism to continuously trigger video sources to send video data back to receivers. It also adopts a store-carry-and-forward approach to transmit video data in a partitioned network environment. Several algorithms are proposed to balance the video transmission delay and bandwidth overheads Simulation experiments demonstrate the feasibility of supporting vehicle-to-vehicle live video streaming with acceptable performance.",
"title": ""
},
{
"docid": "968c116ed298a1f0b9592ab0971fe562",
"text": "According to the DSM-IV (American Psychiatric Association, 1995), simple phobias consist of persistent fear of a circumscribed stimulus and consequent avoidance of that stimulus, where the person having this fear knows it is excessive or unreasonable. If the feared stimulus is heights, the person is said to suffer from acrophobia, or fear of heights. The most common and most successful treatment for acrophobia is graded exposure in-vivo. Here, the avoidance behavior is broken by exposing the patient to a hierarchy of feared stimuli, whereby the fear will first increase, after which habituation will occur and the fear will gradually diminish (Bouman, Scholing & Emmelkamp, 1992). In in-vivo treatment the patient is exposed to real stimuli. A promising alternative is graded exposure in Virtual Reality, where the patient can be treated in the safety and privacy of the therapist’s office and situations can be recreated which are hard to find or costly to reach. At this moment, research is being conducted at the Delft University of Technology and the University of Amsterdam aimed at developing a virtual reality system to be used by therapists for such VR Exposure Therapy (VRET). This article describes the results of a pilot study undertaken to explore the possibilities and characteristics of VRET and determine requirements for a system to support therapists",
"title": ""
},
{
"docid": "d3e2efde80890e469684a41287833eb6",
"text": "Recent work has suggested reducing electricity generation cost by cutting the peak to average ratio (PAR) without reducing the total amount of the loads. However, most of these proposals rely on consumer's willingness to act. In this paper, we propose an approach to cut PAR explicitly from the supply side. The resulting cut loads are then distributed among consumers by the means of a multiunit auction which is done by an intelligent agent on behalf of the consumer. This approach is also in line with the future vision of the smart grid to have the demand side matched with the supply side. Experiments suggest that our approach reduces overall system cost and gives benefit to both consumers and the energy provider.",
"title": ""
},
{
"docid": "e8a69f68bc1647c69431ce88a0728777",
"text": "Contrary to popular perception, qualitative research can produce vast amounts of data. These may include verbatim notes or transcribed recordings of interviews or focus groups, jotted notes and more detailed “fieldnotes” of observational research, a diary or chronological account, and the researcher’s reflective notes made during the research. These data are not necessarily small scale: transcribing a typical single interview takes several hours and can generate 20-40 pages of single spaced text. Transcripts and notes are the raw data of the research. They provide a descriptive record of the research, but they cannot provide explanations. The researcher has to make sense of the data by sifting and interpreting them.",
"title": ""
},
{
"docid": "3da8cb73f3770a803ca43b8e2a694ccc",
"text": "We present a novel framework for hallucinating faces of unconstrained poses and with very low resolution (face size as small as 5pxIOD). In contrast to existing studies that mostly ignore or assume pre-aligned face spatial configuration (e.g. facial landmarks localization or dense correspondence field), we alternatingly optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details. Extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations.",
"title": ""
},
{
"docid": "68f3b3521b426b696419a58e6d389aae",
"text": "A new scan that matches an aided Inertial Navigation System (INS) with a low-cost LiDAR is proposed as an alternative to GNSS-based navigation systems in GNSS-degraded or -denied environments such as indoor areas, dense forests, or urban canyons. In these areas, INS-based Dead Reckoning (DR) and Simultaneous Localization and Mapping (SLAM) technologies are normally used to estimate positions as separate tools. However, there are critical implementation problems with each standalone system. The drift errors of velocity, position, and heading angles in an INS will accumulate over time, and on-line calibration is a must for sustaining positioning accuracy. SLAM performance is poor in featureless environments where the matching errors can significantly increase. Each standalone positioning method cannot offer a sustainable navigation solution with acceptable accuracy. This paper integrates two complementary technologies-INS and LiDAR SLAM-into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to use the advantages and overcome the drawbacks of each system to establish a stable long-term navigation process. Static and dynamic field tests were carried out with a self-developed Unmanned Ground Vehicle (UGV) platform-NAVIS. The results prove that the proposed approach can provide positioning accuracy at the centimetre level for long-term operations, even in a featureless indoor environment.",
"title": ""
},
{
"docid": "001764b6037862def1e37fec85984293",
"text": "We present a basic technique to fill-in missing parts of a video sequence taken from a static camera. Two important cases are considered. The first case is concerned with the removal of non-stationary objects that occlude stationary background. We use a priority based spatio-temporal synthesis scheme for inpainting the stationary background. The second and more difficult case involves filling-in moving objects when they are partially occluded. For this, we propose a priority scheme to first inpaint the occluded moving objects and then fill-in the remaining area with stationary background using the method proposed for the first case. We use as input an optical-flow based mask, which tells if an undamaged pixel is moving or is stationary. The moving object is inpainted by copying patches from undamaged frames, and this copying is independent of the background of the moving object in either frame. This work has applications in a variety of different areas, including video special effects and restoration and enhancement of damaged videos. The examples shown in the paper illustrate these ideas.",
"title": ""
},
{
"docid": "e2ed03468a61a529f498646485cdbee6",
"text": "Statistical classification of byperspectral data is challenging because the inputs are high in dimension and represent multiple classes that are sometimes quite mixed, while the amount and quality of ground truth in the form of labeled data is typically limited. The resulting classifiers are often unstable and have poor generalization. This work investigates two approaches based on the concept of random forests of classifiers implemented within a binary hierarchical multiclassifier system, with the goal of achieving improved generalization of the classifier in analysis of hyperspectral data, particularly when the quantity of training data is limited. A new classifier is proposed that incorporates bagging of training samples and adaptive random subspace feature selection within a binary hierarchical classifier (BHC), such that the number of features that is selected at each node of the tree is dependent on the quantity of associated training data. Results are compared to a random forest implementation based on the framework of classification and regression trees. For both methods, classification results obtained from experiments on data acquired by the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer instrument over the Kennedy Space Center, Florida, and by Hyperion on the NASA Earth Observing 1 satellite over the Okavango Delta of Botswana are superior to those from the original best basis BHC algorithm and a random subspace extension of the BHC.",
"title": ""
},
{
"docid": "e8ecb3597e3019691f128cf6a50239d9",
"text": "Unmanned Aerial Vehicle (UAV) platforms are nowadays a valuable source of data for inspection, surveillance, mapping and 3D modeling issues. As UAVs can be considered as a lowcost alternative to the classical manned aerial photogrammetry, new applications in the shortand close-range domain are introduced. Rotary or fixed wing UAVs, capable of performing the photogrammetric data acquisition with amateur or SLR digital cameras, can fly in manual, semiautomated and autonomous modes. Following a typical photogrammetric workflow, 3D results like Digital Surface or Terrain Models (DTM/DSM), contours, textured 3D models, vector information, etc. can be produced, even on large areas. The paper reports the state of the art of UAV for Geomatics applications, giving an overview of different UAV platforms, applications and case studies, showing also the latest developments of UAV image processing. New perspectives are also addressed.",
"title": ""
},
{
"docid": "372f137098bd5817896d82ed0cb0c771",
"text": "Under today's bursty web traffic, the fine-grained per-container control promises more efficient resource provisioning for web services and better resource utilization in cloud datacenters. In this paper, we present Two-stage Stochastic Programming Resource A llocator (2SPRA). It optimizes resource provisioning for containerized n-tier web services in accordance with fluctuations of incoming workload to accommodate predefined SLOs on response latency. In particular, 2SPRA is capable of minimizing resource over-provisioning by addressing dynamics of web traffic as workload uncertainty in a native stochastic optimization model. Using special-purpose OpenOpt optimization framework, we fully implement 2SPRA in Python and evaluate it against three other existing allocation schemes, in a Docker-based CoreOS Linux VMs on Amazon EC2. We generate workloads based on four real-world web traces of various traffic variations: AOL, WorldCup98, ClarkNet, and NASA. Our experimental results demonstrate that 2SPRA achieves the minimum resource over-provisioning outperforming other schemes. In particular, 2SPRA allocates only 6.16 percent more than application's actual demand on average and at most 7.75 percent in the worst case. It achieves 3x further reduction in total resources provisioned compared to other schemes delivering overall cost-savings of 53.6 percent on average and up to 66.8 percent. Furthermore, 2SPRA demonstrates consistency in its provisioning decisions and robust responsiveness against workload fluctuations.",
"title": ""
},
{
"docid": "f6a24aa476ec27b86e549af6d30f22b6",
"text": "Designing autonomous robotic systems able to manipulate deformable objects without human intervention constitutes a challenging area of research. The complexity of interactions between a robot manipulator and a deformable object originates from a wide range of deformation characteristics that have an impact on varying degrees of freedom. Such sophisticated interaction can only take place with the assistance of intelligent multisensory systems that combine vision data with force and tactile measurements. Hence, several issues must be considered at the robotic and sensory levels to develop genuine dexterous robotic manipulators for deformable objects. This chapter presents a thorough examination of the modern concepts developed by the robotic community related to deformable objects grasping and manipulation. Since the convention widely adopted in the literature is often to extend algorithms originally proposed for rigid objects, a comprehensive coverage on the new trends on rigid objects manipulation is initially proposed. State-of-the-art techniques on robotic interaction with deformable objects are then examined and discussed. The chapter proposes a critical evaluation of the manipulation algorithms, the instrumentation systems adopted and the examination of end-effector technologies, including dexterous robotic hands. The motivation for this review is to provide an extensive appreciation of state-of-the-art solutions to help researchers and developers determine the best possible options when designing autonomous robotic systems to interact with deformable objects. Typically in a robotic setup, when robot manipulators are programmed to perform their tasks, they must have a complete knowledge about the exact structure of the manipulated object (shape, surface texture, rigidity) and about its location in the environment (pose). For some of these tasks, the manipulator becomes in contact with the object. Hence, interaction forces and moments are developed and consequently these interaction forces and moments, as well as the position of the end-effector, must be controlled, which leads to the concept of “force controlled manipulation” (Natale, 2003). There are different control strategies used in 28",
"title": ""
},
{
"docid": "f2eded52dbe84fba54d1796aa8ed63a5",
"text": "Buying airline tickets is an ubiquitous task in which it is difficult for humans to minimize cost due to insufficient information. Even with historical data available for inspection (a recent addition to some travel reservation websites), it is difficult to assess how purchase timing translates into changes in expected cost. To address this problem, we introduce an agent which is able to optimize purchase timing on behalf of customers. We provide results that demonstrate the method can perform much closer to the optimal purchase policy than existing decision theoretic approaches for this domain.",
"title": ""
},
{
"docid": "713d709c14c8943638d2c80e3aeaded2",
"text": "Microfluidics-based biochips combine electronics with biology to open new application areas such as point-of-care medical diagnostics, on-chip DNA analysis, and automated drug discovery. Bioassays are mapped to microfluidic arrays using synthesis tools, and they are executed through the manipulation of sample and reagent droplets by electrical means. Most prior work on CAD for biochips has assumed independent control of electrodes using a large number of (electrical) input pins. Such solutions are not feasible for low-cost disposable biochips that are envisaged for many field applications. A more promising design strategy is to divide the microfluidic array into smaller partitions and use a small number of electrodes to control the electrodes in each partition. We propose a partitioning algorithm based on the concept of \"droplet trace\", which is extracted from the scheduling and droplet routing results produced by a synthesis tool. An efficient pin assignment method, referred to as the \"Connect-5 algorithm\", is combined with the array partitioning technique based on droplet traces. The array partitioning and pin assignment methods are evaluated using a set of multiplexed bioassays.",
"title": ""
},
{
"docid": "16125310a488e0946075264c11e50720",
"text": "A 90-W peak-power 2.14-GHz improved GaN outphasing amplifier with 50.5% average efficiency for wideband code division multiple access (W-CDMA) signals is presented. Independent control of the branch amplifiers by two in-phase/quadrature modulators enables optimum outphasing and input power leveling, yielding significant improvements in gain, efficiency, and linearity. In deep-power backoff operation, the outphasing angle of the branch amplifiers is kept constant below a certain power level. This results in class-B operation for the very low output power levels, yielding less reactive loading of the output stages, and therefore, improved efficiency in power backoff operation compared to the classical outphasing amplifiers. Based on these principles, the optimum design parameters and input signal conditioning are discussed. The resulting theoretical maximum achievable average efficiency for W-CDMA signals is presented. Experimental results support the foregoing theory and show high efficiency over a large bandwidth, while meeting the linearity specifications using low-cost low-complexity memoryless pre-distortion. These properties make this amplifier concept an interesting candidate for future multiband base-station implementations.",
"title": ""
},
{
"docid": "8100a2c4f775d5e64b655de7835f946b",
"text": "primary challenge in responding to both natural and man-made disasters is communication. This has been highlighted by recent disasters such as the 9/11 terrorist attacks and Hurricane Katrina [2, 5, 6]. A problem frequently cited by responders is the lack of radio interoperability. Responding organizations must work in concert to form a cohesive plan of response. However, each group—fire, police, SWAT, HazMat—com-municates with radios set to orthogonal frequencies , making inter-agency communications extremely difficult. The problem is compounded as more local, state, and federal agencies become involved. The communication challenges in emergency response go far beyond simple interop-erability issues. Based on our research, practical observation of first responder exercises and drills, and workshop discussions, we have identified three categories of communication challenges: technological, sociological, and organizational. These three major areas are key to developing and maintaining healthy and effective disaster communication systems. The primary technological challenge after a disaster is rapid deployment of communication systems for first responders and disaster management workers. This is true regardless of whether the communications network has been completely destroyed (power, telephone, and/or network connectivity infrastructure), or, as in the case of some remote geographic areas, the infrastructure was previously nonex-istent. Deployment of a new system is more complicated in areas where partial communication infrastructures remain, than where no prior communication networks existed. This can be due to several factors including interference from existing partial communication networks and the dependency of people on their prior systems. Another important obstacle to overcome is the multi-organizational radio interoperability issue. To make future communication systems capable of withstanding large-or medium-scale disasters, two technological solutions can be incorporated into the design: dual-use technology and built-in architectural and protocol redundancy. Dual-use technology would enable both normal and emergency operational modes. During crises, such devices would work in a network-controlled fashion, achieved using software agents within the communication",
"title": ""
},
{
"docid": "6eb58aabc872d32e552f8ab746038ff5",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm Philippe Flajolet, Éric Fusy, Olivier Gandouet, Frédéric Meunier",
"title": ""
},
{
"docid": "ff0c3f9fa9033be78b107c2f052203fa",
"text": "Complex networks, such as biological, social, and communication networks, often entail uncertainty, and thus, can be modeled as probabilistic graphs. Similar to the problem of similarity search in standard graphs, a fundamental problem for probabilistic graphs is to efficiently answer k-nearest neighbor queries (k-NN), which is the problem of computing the k closest nodes to some specific node. In this paper we introduce a framework for processing k-NN queries in probabilistic graphs. We propose novel distance functions that extend well-known graph concepts, such as shortest paths. In order to compute them in probabilistic graphs, we design algorithms based on sampling. During k-NN query processing we efficiently prune the search space using novel techniques. Our experiments indicate that our distance functions outperform previously used alternatives in identifying true neighbors in real-world biological data. We also demonstrate that our algorithms scale for graphs with tens of millions of edges.",
"title": ""
},
{
"docid": "7272ebab22d3efec95792acece86b4dd",
"text": "Many of today's machine learning (ML) systems are built by reusing an array of, often pre-trained, primitive models, each fulfilling distinct functionality (e.g., feature extraction). The increasing use of primitive models significantly simplifies and expedites the development cycles of ML systems. Yet, because most of such models are contributed and maintained by untrusted sources, their lack of standardization or regulation entails profound security implications, about which little is known thus far. In this paper, we demonstrate that malicious primitive models pose immense threats to the security of ML systems. We present a broad class of model-reuse attacks wherein maliciously crafted models trigger host ML systems to misbehave on targeted inputs in a highly predictable manner. By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference. We provide analytical justification for the effectiveness of model-reuse attacks, which points to the unprecedented complexity of today's primitive models. This issue thus seems fundamental to many ML systems. We further discuss potential countermeasures and their challenges, which lead to several promising research directions.",
"title": ""
}
] |
scidocsrr
|
8fd1f2041f8dc2341cba3ee3d9551c37
|
The determinants of crowdfunding success: A semantic text analytics approach
|
[
{
"docid": "7c98d4c1ab375526c426f8156650cb22",
"text": "Online privacy remains an ongoing source of debate in society. Sensitive to this, many web platforms are offering users greater, more granular control over how and when their information is revealed. However, recent research suggests that information control mechanisms of this sort are not necessarily of economic benefit to the parties involved. We examine the use of these mechanisms and their economic consequences, leveraging data from one of the world's largest global crowdfunding platforms, where contributors can conceal their identity or contribution amounts from public display. We find that information hiding is more likely when contributors are under greater scrutiny or exhibiting “undesirable” behavior. We also identify an anchoring effect from prior contributions, which is eliminated when earlier contributors conceal their amounts. Subsequent analyses indicate that a nuanced approach to the design and provision of information control mechanisms, such as varying default settings based on contribution amounts, can help promote larger contributions.",
"title": ""
},
{
"docid": "e267fe4d2d7aa74ded8988fcdbfb3474",
"text": "Consumers have recently begun to play a new role in some markets: that of providing capital and investment support to the offering. This phenomenon, called crowdfunding, is a collective effort by people who network and pool their money together, usually via the Internet, in order to invest in and support efforts initiated by other people or organizations. Successful service businesses that organize crowdfunding and act as intermediaries are emerging, attesting to the viability of this means of attracting investment. Employing a “Grounded Theory” approach, this paper performs an in-depth qualitative analysis of three cases involving crowdfunding initiatives: SellaBand in the music business, Trampoline in financial services, and Kapipal in non-profit services. These cases were selected to represent a diverse set of crowdfunding operations that vary in terms of risk/return for the investorconsumer and the type of consumer involvement. The analysis offers important insights about investor behaviour in crowdfunding service models, the potential determinants of such behaviour, and variations in behaviour and determinants across different service models. The findings have implications for service managers interested in launching and/or managing crowdfunding initiatives, and for service theory in terms of extending the consumer’s role from co-production and co-creation to investment.",
"title": ""
}
] |
[
{
"docid": "81060b9d045e2935a77967d0318c4086",
"text": "Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.",
"title": ""
},
{
"docid": "1ceab925041160f17163940360354c55",
"text": "A complete reconstruction of D.H. Lehmer’s ENIAC set-up for computing the exponents of p modulo 2 is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties of early programmers to find a way between a man operated and a machine operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).",
"title": ""
},
{
"docid": "bcee978b0c7b8d533b05ce64daca92e3",
"text": "Sentiment analysis of short texts is challenging because of the limited contextual information they usually contain. In recent years, deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to text sentiment analysis with comparatively remarkable results. In this paper, we describe a jointed CNN and RNN architecture, taking advantage of the coarse-grained local features generated by CNN and long-distance dependencies learned via RNN for sentiment analysis of short texts. Experimental results show an obvious improvement upon the state-of-the-art on three benchmark corpora, MR, SST1 and SST2, with 82.28%, 51.50% and 89.95% accuracy, respectively. 1",
"title": ""
},
{
"docid": "1b3afef7a857d436635a3de056559e1f",
"text": "This paper presents Haggle, an architecture for mobile devices that enables seamless network connectivity and application functionality in dynamic mobile environments. Current applications must contain significant network binding and protocol logic, which makes them inflexible to the dynamic networking environments facing mobile devices. Haggle allows separating application logic from transport bindings so that applications can be communication agnostic. Internally, the Haggle framework provides a mechanism for late-binding interfaces, names, protocols, and resources for network communication. This separation allows applications to easily utilize multiple communication modes and methods across infrastructure and infrastructure-less environments. We provide a prototype implementation of the Haggle framework and evaluate it by demonstrating support for two existing legacy applications, email and web browsing. Haggle makes it possible for these applications to seamlessly utilize mobile networking opportunities both with and without infrastructure.",
"title": ""
},
{
"docid": "8841397018c52a57ce3f1b025fa76a7a",
"text": "The G-banding technique was performed on chromosomes from gill tissue of three cupped oyster species: Crassostrea gigas, Crassostrea angulata and Crassostrea virginica. Identification of the ten individual chromosome pairs was obtained. Comparative analysis of G-banded karyotypes of the three species showed that their banding patterns generally resembled each other, with chromosome pair 3 being similar in all three species. However, differences from one species to another were also observed. The G-banding pattern highlighted greater similarities between C. gigas and C. angulata than between these two species and C. virginica, thus providing an additional argument for genetic divergence between these two evolutionary lineages. C. gigas and C. angulata showed a different G-banding patterns on the two arms of chromosome pair 7, which agrees with their taxonomic separation. The application of this banding technique offers a new approach to specific problems in oyster taxonomy and genetics. © Inra/Elsevier, Paris chromosome / G-banding / Crassostrea gigas / Crassostrea angulata / Crassostrea",
"title": ""
},
{
"docid": "4097fe8240f8399de8c0f7f6bdcbc72f",
"text": "Feature extraction of EEG signals is core issues on EEG based brain mapping analysis. The classification of EEG signals has been performed using features extracted from EEG signals. Many features have proved to be unique enough to use in all brain related medical application. EEG signals can be classified using a set of features like Autoregression, Energy Spectrum Density, Energy Entropy, and Linear Complexity. However, different features show different discriminative power for different subjects or different trials. In this research, two-features are used to improve the performance of EEG signals. Neural Network based techniques are applied to feature extraction of EEG signal. This paper discuss on extracting features based on Average method and Max & Min method of the data set. The Extracted Features are classified using Neural Network Temporal Pattern Recognition Technique. The two methods are compared and performance is analyzed based on the results obtained from the Neural Network classifier.",
"title": ""
},
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
{
"docid": "a40c24c01f13952516a613724dac98b7",
"text": "In this work, we address the task of dense stereo matching with Convolutional Neural Networks (CNNs). Particularly, we focus on improving matching cost computation by better aggregating contextual information. Towards this goal, we advocate to use atrous convolution, a powerful tool for dense prediction task that allows us to control the resolution at which feature responses are computed within CNNs and to enlarge the receptive field of the network without losing image resolution and requiring learning extra parameters. Aiming to improve the performance of atrous convolution, we propose different frameworks for further boosting performance. We evaluate our models on KITTI 2015 benchmark, the result shows that we achieve on-par performance with fewer post-processing methods applied.",
"title": ""
},
{
"docid": "fe11678f122efc57603321b61c1f52eb",
"text": "Recognition of grocery products in store shelves poses peculiar challenges. Firstly, the task mandates the recognition of an extremely high number of different items, in the order of several thousands for medium-small shops, with many of them featuring small inter and intra class variability. Then, available product databases usually include just one or a few studio-quality images per product (referred to herein as reference images), whilst at test time recognition is performed on pictures displaying a portion of a shelf containing several products and taken in the store by cheap cameras (referred to as query images). Moreover, as the items on sale in a store as well as their appearance change frequently over time, a practical recognition system should handle seamlessly new products/packages. Inspired by recent advances in object detection and image retrieval, we propose to leverage on state of the art object detectors based on deep learning to obtain an initial productagnostic item detection. Then, we pursue product recognition through a similarity search between global descriptors computed on reference and cropped query images. To maximize performance, we learn an ad-hoc global descriptor by a CNN trained on reference images based on an image embedding loss. Our system is computationally expensive at training time but can perform recognition rapidly and accurately at test time.",
"title": ""
},
{
"docid": "2c68945d68f8ccf90648bec7fd5b0547",
"text": "The number of seniors and other people needing daily assistance continues to increase, but the current human resources available to achieve this in the coming years will certainly be insufficient. To remedy this situation, smart habitats have emerged as an innovative avenue for supporting needs of daily assistance. Smart homes aim to provide cognitive assistance in decision making by giving hints, suggestions, and reminders, with different kinds of effectors, to residents. To implement such technology, the first challenge to overcome is the recognition of ongoing activity. Some researchers have proposed solutions based on binary sensors or cameras, but these types of approaches infringed on residents' privacy. A new affordable activity-recognition system based on passive RFID technology can detect errors related to cognitive impairment. The entire system relies on an innovative model of elliptical trilateration with several filters, as well as on an ingenious representation of activities with spatial zones. The authors have deployed the system in a real smart-home prototype; this article renders the results of a complete set of experiments conducted on this new activity-recognition system with real scenarios.",
"title": ""
},
{
"docid": "17055a66f80354bf5a614a510a4ef689",
"text": "People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for crossmodal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.",
"title": ""
},
{
"docid": "89f157fd5c42ba827b7d613f80770992",
"text": "Generating emotional language is a key step towards building empathetic natural language processing agents. However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. We collect a large corpus of Twitter conversations that include emojis in the response and assume the emojis convey the underlying emotions of the sentence. We investigate several conditional variational autoencoders training on these conversations, which allow us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate highquality abstractive conversation responses in accordance with designated emotions.",
"title": ""
},
{
"docid": "0f0799a04328852b8cfa742cbc2396c9",
"text": "Bitcoin does not scale, because its synchronization mechanism, the blockchain, limits the maximum rate of transactions the network can process. However, using off-blockchain transactions it is possible to create long-lived channels over which an arbitrary number of transfers can be processed locally between two users, without any burden to the Bitcoin network. These channels may form a network of payment service providers (PSPs). Payments can be routed between any two users in real time, without any confirmation delay. In this work we present a protocol for duplex micropayment channels, which guarantees end-to-end security and allow instant transfers, laying the foundation of the PSP network.",
"title": ""
},
{
"docid": "68f0bdda44beba9203a785b8be1035bb",
"text": "Nasal mucociliary clearance is one of the most important factors affecting nasal delivery of drugs and vaccines. This is also the most important physiological defense mechanism inside the nasal cavity. It removes inhaled (and delivered) particles, microbes and substances trapped in the mucus. Almost all inhaled particles are trapped in the mucus carpet and transported with a rate of 8-10 mm/h toward the pharynx. This transport is conducted by the ciliated cells, which contain about 100-250 motile cellular appendages called cilia, 0.3 µm wide and 5 µm in length that beat about 1000 times every minute or 12-15 Hz. For efficient mucociliary clearance, the interaction between the cilia and the nasal mucus needs to be well structured, where the mucus layer is a tri-layer: an upper gel layer that floats on the lower, more aqueous solution, called the periciliary liquid layer and a third layer of surfactants between these two main layers. Pharmacokinetic calculations of the mucociliary clearance show that this mechanism may account for a substantial difference in bioavailability following nasal delivery. If the formulation irritates the nasal mucosa, this mechanism will cause the irritant to be rapidly diluted, followed by increased clearance, and swallowed. The result is a much shorter duration inside the nasal cavity and therefore less nasal bioavailability.",
"title": ""
},
{
"docid": "351daae8d137eaff56caf4640c83cbfc",
"text": "There are numerous applications in which we would like to assess what opinions are being expressed in text documents. For example, Martha Stewart’s company may have wished to assess the degree of harshness of news articles about her in the recent past. Likewise, a World Bank official may wish to assess the degree of criticism of a proposed dam in Bangladesh. The ability to gauge opinion on a given topic is therefore of critical interest. In this paper, we develop a suite of algorithms which take as input, a set D of documents as well as a topic t, and gauge the degree of opinion expressed about topic t in the set D of documents. Our algorithms can return both a number (larger the number, more positive the opinion) as well as a qualitative opinion (e.g. harsh, complimentary). We assess the accuracy of these algorithms via human experiments and show that the best of these algorithms can accurately reflect human opinions. We have also conducted performance experiments showing that our algorithms are computationally fast.",
"title": ""
},
{
"docid": "d5debb44bb6cf518bbc3d8d5f88201e7",
"text": "In multi-label learning, each training example is associated with multiple class labels and the task is to learn a mapping from the feature space to the power set of label space. It is generally demanding and time-consuming to obtain labels for training examples, especially for multi-label learning task where a number of class labels need to be annotated for the instance. To circumvent this difficulty, semi-supervised multi-label learning aims to exploit the readily-available unlabeled data to help build multi-label predictive model. Nonetheless, most semi-supervised solutions to multi-label learning work under transductive setting, which only focus on making predictions on existing unlabeled data and cannot generalize to unseen instances. In this paper, a novel approach named COINS is proposed to learning from labeled and unlabeled data by adapting the well-known co-training strategy which naturally works under inductive setting. In each co-training round, a dichotomy over the feature space is learned by maximizing the diversity between the two classifiers induced on either dichotomized feature subset. After that, pairwise ranking predictions on unlabeled data are communicated between either classifier for model refinement. Extensive experiments on a number of benchmark data sets show that COINS performs favorably against state-of-the-art multi-label learning approaches.",
"title": ""
},
{
"docid": "b01436481aa77ebe7538e760132c5f3c",
"text": "We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.",
"title": ""
},
{
"docid": "13a4d7ce920b6b215a76d34708303e14",
"text": "ion is also critical to the success of planning and scheduling activities. In our scenarios, the crew will often have to deal with planning and scheduling at a very high level (e.g., what crops do I need to plant now so they can be harvested in six months) and planning and scheduling at a detailed level (e.g., what is my next task). The autonomous system must be able to move between various time scales and levels of abstraction, presenting the correct level of information to the user at the correct time. Model-based diagnosis and recovery When something goes wrong, a robust autonomous should figure out what went wrong and recover as best as it can. A model-based diagnosis and recovery system, such as Livingstone [Williams and Nayak, 96], does this. It is analogous to the autonomic and immune systems of a living creature. If the autonomous system has a model of the system it controls, it can use this to figure out what is the most likely cause that explains the observed symptoms as well as how can the system recover given this diagnosis so its mission can continue. For example, if the pressure of a tank is low, it could be because the tank has a leak, the pump blew a fuse, a valve is not open to fill the tank or not closed to keep the tank from draining. However, it could be that the tank pressure is not low and the pressure sensor is defective. By analyzing the system from other sensors, it may say the pressure is normal or suggest closing a valve, resetting the pump circuit breaker, or requesting a crewmember to check the tank for a leak.",
"title": ""
},
{
"docid": "e9ba4e76a3232e25233a4f5fe206e8ba",
"text": "Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system. Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees [19, 15, 9]. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. A prototype implementation of CPI and CPS can be obtained from http://levee.epfl.ch.",
"title": ""
},
{
"docid": "118526b566b800d9dea30d2e4c904feb",
"text": "With the problem of increased web resources and the huge amount of information available, the necessity of having automatic summarization systems appeared. Since summarization is needed the most in the process of searching for information on the web, where the user aims at a certain domain of interest according to his query, in this case domain-based summaries would serve the best. Despite the existence of plenty of research work in the domain-based summarization in English, there is lack of them in Arabic due to the shortage of existing knowledge bases. In this paper we introduce a query based, Arabic text, single document summarization using an existing Arabic language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic related concepts/ keywords and the lexical relations among them. The user’s query is expanded once by using the Arabic WordNet thesaurus and then by adding the domain specific knowledge base to the expansion. For the summarization dataset, Essex Arabic Summaries Corpus was used. It has many topic based articles with multiple human summaries. The performance appeared to be enhanced when using our extracted knowledge base than to just use the WordNet.",
"title": ""
}
] |
scidocsrr
|
ce75749e2f558ac953323ec5541b7b67
|
Analysis of the 802.11i 4-way handshake
|
[
{
"docid": "8dcb99721a06752168075e6d45ee64c7",
"text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti ble to malicious denial-of-service (DoS) attacks tar geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.",
"title": ""
}
] |
[
{
"docid": "3653e29e71d70965317eb4c450bc28da",
"text": "This paper comprises an overview of different aspects for wire tension control devices and algorithms according to the state of industrial use and state of research. Based on a typical winding task of an orthocyclic winding scheme, possible new principles for an alternative piezo-electric actuator and an electromechanical tension control will be derived and presented.",
"title": ""
},
{
"docid": "3eebecff1cb89f5490602f43717902b7",
"text": "Radiation therapy (RT) is an integral part of prostate cancer treatment across all stages and risk groups. Immunotherapy using a live, attenuated, Listeria monocytogenes-based vaccines have been shown previously to be highly efficient in stimulating anti-tumor responses to impact on the growth of established tumors in different tumor models. Here, we evaluated the combination of RT and immunotherapy using Listeria monocytogenes-based vaccine (ADXS31-142) in a mouse model of prostate cancer. Mice bearing PSA-expressing TPSA23 tumor were divided to 5 groups receiving no treatment, ADXS31-142, RT (10 Gy), control Listeria vector and combination of ADXS31-142 and RT. Tumor growth curve was generated by measuring the tumor volume biweekly. Tumor tissue, spleen, and sera were harvested from each group for IFN-γ ELISpot, intracellular cytokine assay, tetramer analysis, and immunofluorescence staining. There was a significant tumor growth delay in mice that received combined ADXS31-142 and RT treatment as compared with mice of other cohorts and this combined treatment causes complete regression of their established tumors in 60 % of the mice. ELISpot and immunohistochemistry of CD8+ cytotoxic T Lymphocytes (CTL) showed a significant increase in IFN-γ production in mice with combined treatment. Tetramer analysis showed a fourfold and a greater than 16-fold increase in PSA-specific CTLs in animals receiving ADXS31-142 alone and combination treatment, respectively. A similar increase in infiltration of CTLs was observed in the tumor tissues. Combination therapy with RT and Listeria PSA vaccine causes significant tumor regression by augmenting PSA-specific immune response and it could serve as a potential treatment regimen for prostate cancer.",
"title": ""
},
{
"docid": "89fd46da8542a8ed285afb0cde9cc236",
"text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.",
"title": ""
},
{
"docid": "06cc255e124702878e2106bf0e8eb47c",
"text": "Agent technology has been recognized as a promising paradigm for next generation manufacturing systems. Researchers have attempted to apply agent technology to manufacturing enterprise integration, enterprise collaboration (including supply chain management and virtual enterprises), manufacturing process planning and scheduling, shop floor control, and to holonic manufacturing as an implementation methodology. This paper provides an update review on the recent achievements in these areas, and discusses some key issues in implementing agent-based manufacturing systems such as agent encapsulation, agent organization, agent coordination and negotiation, system dynamics, learning, optimization, security and privacy, tools and standards. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f2492c40f98e3cccc3ac3ab7accf4af7",
"text": "Accurate detection of single-trial event-related potentials (ERPs) in the electroencephalogram (EEG) is a difficult problem that requires efficient signal processing and machine learning techniques. Supervised spatial filtering methods that enhance the discriminative information in EEG data are commonly used to improve single-trial ERP detection. We propose a convolutional neural network (CNN) with a layer dedicated to spatial filtering for the detection of ERPs and with training based on the maximization of the area under the receiver operating characteristic curve (AUC). The CNN is compared with three common classifiers: 1) Bayesian linear discriminant analysis; 2) multilayer perceptron (MLP); and 3) support vector machines. Prior to classification, the data were spatially filtered with xDAWN (for the maximization of the signal-to-signal-plus-noise ratio), common spatial pattern, or not spatially filtered. The 12 analytical techniques were tested on EEG data recorded in three rapid serial visual presentation experiments that required the observer to discriminate rare target stimuli from frequent nontarget stimuli. Classification performance discriminating targets from nontargets depended on both the spatial filtering method and the classifier. In addition, the nonlinear classifier MLP outperformed the linear methods. Finally, training based AUC maximization provided better performance than training based on the minimization of the mean square error. The results support the conclusion that the choice of the systems architecture is critical and both spatial filtering and classification must be considered together.",
"title": ""
},
{
"docid": "25e50a3e98b58f833e1dd47aec94db21",
"text": "Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"title": ""
},
{
"docid": "3467f4be08c4b8d6cd556f04f324ce67",
"text": "Round robin arbiter (RRA) is a critical block in nowadays designs. It is widely found in System-on-chips and Network-on-chips. The need of an efficient RRA has increased extensively as it is a limiting performance block. In this paper, we deliver a comparative review between different RRA architectures found in literature. We also propose a novel efficient RRA architecture. The FPGA implementation results of the previous RRA architectures and our proposed one are given, that show the improvements of the proposed RRA.",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "7f68d6a6432f55684ad79a4f79406dab",
"text": "Half of patients with heart failure (HF) have a preserved left ventricular ejection fraction (HFpEF). Morbidity and mortality in HFpEF are similar to values observed in patients with HF and reduced EF, yet no effective treatment has been identified. While early research focused on the importance of diastolic dysfunction in the pathophysiology of HFpEF, recent studies have revealed that multiple non-diastolic abnormalities in cardiovascular function also contribute. Diagnosis of HFpEF is frequently challenging and relies upon careful clinical evaluation, echo-Doppler cardiography, and invasive haemodynamic assessment. In this review, the principal mechanisms, diagnostic approaches, and clinical trials are reviewed, along with a discussion of novel treatment strategies that are currently under investigation or hold promise for the future.",
"title": ""
},
{
"docid": "3edf5d1cce2a26fbf5c2cc773649629b",
"text": "We conducted three experiments to investigate the mental images associated with idiomatic phrases in English. Our hypothesis was that people should have strong conventional images for many idioms and that the regularity in people's knowledge of their images for idioms is due to the conceptual metaphors motivating the figurative meanings of idioms. In the first study, subjects were asked to form and describe their mental images for different idiomatic expressions. Subjects were then asked a series of detailed questions about their images regarding the causes and effects of different events within their images. We found high consistency in subjects' images of idioms with similar figurative meanings despite differences in their surface forms (e.g., spill the beans and let the cat out of the bag). Subjects' responses to detailed questions about their images also showed a high degree of similarity in their answers. Further examination of subjects' imagery protocols supports the idea that the conventional images and knowledge associated with idioms are constrained by the conceptual metaphors (e.g., the MIND IS A CONTAINER and IDEAS ARE ENTITIES) which motivate the figurative meanings of idioms. The results of two control studies showed that the conventional images associated with idioms are not solely based on their figurative meanings (Experiment 2) and that the images associated with literal phrases (e.g., spill the peas) were quite varied and unlikely to be constrained by conceptual metaphor (Experiment 3). These findings support the view that idioms are not \"dead\" metaphors with their meanings being arbitrarily determined. Rather, the meanings of many idioms are motivated by speakers' tacit knowledge of the conceptual metaphors underlying the meanings of these figurative phrases.",
"title": ""
},
{
"docid": "69ced55a44876f7cc4e57f597fcd5654",
"text": "A wideband circularly polarized (CP) antenna with a conical radiation pattern is investigated. It consists of a feeding probe and parasitic dielectric parallelepiped elements that surround the probe. Since the structure of the antenna looks like a bird nest, it is named as bird-nest antenna. The probe, which protrudes from a circular ground plane, operates in its fundamental monopole mode that generates omnidirectional linearly polarized (LP) fields. The dielectric parallelepipeds constitute a wave polarizer that converts omnidirectional LP fields of the probe into omnidirectional CP fields. To verify the design, a prototype operating in C band was fabricated and measured. The reflection coefficient, axial ratio (AR), radiation pattern, and antenna gain are studied, and reasonable agreement between the measured and simulated results is observed. The prototype has a 10-dB impedance bandwidth of 41.0% and a 3-dB AR bandwidth of as wide as 54.9%. A parametric study was carried out to characterize the proposed antenna. Also, a design guideline is given to facilitate designs of the antenna.",
"title": ""
},
{
"docid": "db3abbca12b7a1c4e611aa3707f65563",
"text": "This paper describes the background and methods for the prod uction of CIDOC-CRM compliant data sets from diverse collec tions of source data. The construction of such data sets is based on data in column format, typically exported for databases, as well as free text, typically created through scanning and OCR proce ssing or transcription.",
"title": ""
},
{
"docid": "7db5807fc15aeb8dfe4669a8208a8978",
"text": "This document is an output from a project funded by the UK Department for International Development (DFID) for the benefit of developing countries. The views expressed are not necessarily those of DFID. Contents Contents i List of tables ii List of figures ii List of boxes ii Acronyms iii Acknowledgements iv Summary 1 1. Introduction: why worry about disasters? 7 Objectives of this Study 7 Global disaster trends 7 Why donors should be concerned 9 What donors can do 9 2. What makes a disaster? 11 Characteristics of a disaster 11 Disaster risk reduction 12 The diversity of hazards 12 Vulnerability and capacity, coping and adaptation 15 Resilience 16 Poverty and vulnerability: links and differences 16 'The disaster management cycle' 17 3. Why should disasters be a development concern? 19 3.1 Disasters hold back development 19 Disasters undermine efforts to achieve the Millennium Development Goals 19 Macroeconomic impacts of disasters 21 Reallocation of resources from development to emergency assistance 22 Disaster impact on communities and livelihoods 23 3.2 Disasters are rooted in development failures 25 Dominant development models and risk 25 Development can lead to disaster 26 Poorly planned attempts to reduce risk can make matters worse 29 Disaster responses can themselves exacerbate risk 30 3.3 'Disaster-proofing' development: what are the gains? 31 From 'vicious spirals' of failed development and disaster risk… 31 … to 'virtuous spirals' of risk reduction 32 Disaster risk reduction can help achieve the Millennium Development Goals 33 … and can be cost-effective 33 4. Why does development tend to overlook disaster risk? 36 4.1 Introduction 36 4.2 Incentive, institutional and funding structures 36 Political incentives and governance in disaster prone countries 36 Government-donor relations and moral hazard 37 Donors and multilateral agencies 38 NGOs 41 4.3 Lack of exposure to and information on disaster issues 41 4.4 Assumptions about the risk-reducing capacity of development 43 ii 5. Tools for better integrating disaster risk reduction into development 45 Introduction 45 Poverty Reduction Strategy Papers (PRSPs) 45 UN Development Assistance Frameworks (UNDAFs) 47 Country assistance plans 47 National Adaptation Programmes of Action (NAPAs) 48 Partnership agreements with implementing agencies and governments 49 Programme and project appraisal guidelines 49 Early warning and information systems 49 Risk transfer mechanisms 51 International initiatives and policy forums 51 Risk reduction performance targets and indicators for donors 52 6. Conclusions and recommendations 53 6.1 Main conclusions 53 6.2 Recommendations 54 Core recommendation …",
"title": ""
},
{
"docid": "4a9a53444a74f7125faa99d58a5b0321",
"text": "The new transformed read-write Web has resulted in a rapid growth of user generated content on the Web resulting into a huge volume of unstructured data. A substantial part of this data is unstructured text such as reviews and blogs. Opinion mining and sentiment analysis (OMSA) as a research discipline has emerged during last 15 years and provides a methodology to computationally process the unstructured data mainly to extract opinions and identify their sentiments. The relatively new but fast growing research discipline has changed a lot during these years. This paper presents a scientometric analysis of research work done on OMSA during 20 0 0–2016. For the scientometric mapping, research publications indexed in Web of Science (WoS) database are used as input data. The publication data is analyzed computationally to identify year-wise publication pattern, rate of growth of publications, types of authorship of papers on OMSA, collaboration patterns in publications on OMSA, most productive countries, institutions, journals and authors, citation patterns and an year-wise citation reference network, and theme density plots and keyword bursts in OMSA publications during the period. A somewhat detailed manual analysis of the data is also performed to identify popular approaches (machine learning and lexicon-based) used in these publications, levels (document, sentence or aspect-level) of sentiment analysis work done and major application areas of OMSA. The paper presents a detailed analytical mapping of OMSA research work and charts the progress of discipline on various useful parameters. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "abc160fc578bb40935afa7aea93cf6ca",
"text": "This study investigates the effect of leader and follower behavior on employee voice, team task responsibility and team effectiveness. This study distinguishes itself by including both leader and follower behavior as predictors of team effectiveness. In addition, employee voice and team task responsibility are tested as potential mediators of the relationship between task-oriented behaviors (informing, directing, verifying) and team effectiveness as well as the relationship between relation-oriented behaviors (positive feedback, intellectual stimulation, individual consideration) and team effectiveness. This cross-sectional exploratory study includes four methods: 1) inter-reliable coding of leader and follower behavior during staff meetings; 2) surveys of 57 leaders; 3) surveys of643 followers; 4) survey of 56 lean coaches. Regression analyses showed that both leaders and followers display more task-oriented behaviors opposed to relation-oriented behaviors during staff meetings. Contrary to the hypotheses, none of the observed leader behaviors positively influences employee voice, team task responsibility or team effectiveness. However, all three task-oriented follower behaviors indirectly influence team effectiveness. The findings from this research illustrate that follower behaviors has more influence on team effectiveness compared to leader behavior. Practical implications, strengths and limitations of the research are discussed. Moreover, future research directions including the mediating role of culture and psychological safety are proposed as well.",
"title": ""
},
{
"docid": "e97c0bbb74534a16c41b4a717eed87d5",
"text": "This paper is discussing about the road accident severity survey using data mining, where different approaches have been considered. We have collected research work carried out by different researchers based on road accidents. Article describing the review work in context of road accident case’s using data mining approach. The article is consisting of collections of methods in different scenario with the aim to resolve the road accident. Every method is somewhere seeming to productive in some ways to decrease the no of causality. It will give a better edge to different country where the no of accidents is leading to fatality of life.",
"title": ""
},
{
"docid": "7539a738cad3a36336dc7019e2aabb21",
"text": "In this paper a compact antenna for ultrawideband applications is presented. The antenna is based on the biconical antenna design and has two identical elements. Each element is composed of a cone extended with a ring and an inner cylinder. The modification of the well-known biconical structure is made in order to reduce the influence of the radiation of the feeding cable. To obtain the optimum parameters leading to a less impact of the cable effect on the antenna performance, during the optimization process the antenna was coupled with a feeding coaxial cable. The proposed antenna covers the frequency range from 1.5 to 41 GHz with voltage standing wave ratio below 2 and has an omnidirectional radiation pattern. The realized total efficiency is above 85 % which indicates a good performance.",
"title": ""
},
{
"docid": "a87ba6d076c3c05578a6f6d9da22ac79",
"text": "Here we review and extend a new unitary model for the pathophysiology of involutional osteoporosis that identifies estrogen (E) as the key hormone for maintaining bone mass and E deficiency as the major cause of age-related bone loss in both sexes. Also, both E and testosterone (T) are key regulators of skeletal growth and maturation, and E, together with GH and IGF-I, initiate a 3- to 4-yr pubertal growth spurt that doubles skeletal mass. Although E is required for the attainment of maximal peak bone mass in both sexes, the additional action of T on stimulating periosteal apposition accounts for the larger size and thicker cortices of the adult male skeleton. Aging women undergo two phases of bone loss, whereas aging men undergo only one. In women, the menopause initiates an accelerated phase of predominantly cancellous bone loss that declines rapidly over 4-8 yr to become asymptotic with a subsequent slow phase that continues indefinitely. The accelerated phase results from the loss of the direct restraining effects of E on bone turnover, an action mediated by E receptors in both osteoblasts and osteoclasts. In the ensuing slow phase, the rate of cancellous bone loss is reduced, but the rate of cortical bone loss is unchanged or increased. This phase is mediated largely by secondary hyperparathyroidism that results from the loss of E actions on extraskeletal calcium metabolism. The resultant external calcium losses increase the level of dietary calcium intake that is required to maintain bone balance. Impaired osteoblast function due to E deficiency, aging, or both also contributes to the slow phase of bone loss. Although both serum bioavailable (Bio) E and Bio T decline in aging men, Bio E is the major predictor of their bone loss. Thus, both sex steroids are important for developing peak bone mass, but E deficiency is the major determinant of age-related bone loss in both sexes.",
"title": ""
},
{
"docid": "296705d6bfc09f58c8e732a469b17871",
"text": "Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to which extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured in depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.",
"title": ""
},
{
"docid": "ac57fab046cfd02efa1ece262b07492f",
"text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve",
"title": ""
}
] |
scidocsrr
|
e0458ea6464048855c2b65819e927bb8
|
Towards correct network virtualization
|
[
{
"docid": "6dc1a6c032196a748e005ce49d735752",
"text": "Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding--mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration,and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks",
"title": ""
}
] |
[
{
"docid": "5d44349955d07a212bc11f6edfaec8b0",
"text": "This investigation develops an innovative algorithm for multiple autonomous unmanned aerial vehicle (UAV) mission routing. The concept of a UAV Swarm Routing Problem (SRP) as a new combinatorics problem, is developed as a variant of the Vehicle Routing Problem with Time Windows (VRPTW). Solutions of SRP problem model result in route assignments per vehicle that successfully track to all targets, on time, within distance constraints. A complexity analysis and multi-objective formulation of the VRPTW indicates the necessity of a stochastic solution approach leading to a multi-objective evolutionary algorithm. A full problem definition of the SRP as well as a multi-objective formulation parallels that of the VRPTW method. Benchmark problems for the VRPTW are modified in order to create SRP benchmarks. The solutions show the SRP solutions are comparable or better than the same VRPTW solutions, while also representing a more realistic UAV swarm routing solution.",
"title": ""
},
{
"docid": "f850321173db137674eb74a0dd2afc30",
"text": "The relational data model has been dominant and widely used since 1970. However, as the need to deal with big data grows, new data models, such as Hadoop and NoSQL, were developed to address the limitation of the traditional relational data model. As a result, determining which data model is suitable for applications has become a challenge. The purpose of this paper is to provide insight into choosing the suitable data model by conducting a benchmark using Yahoo! Cloud Serving Benchmark (YCSB) on three different database systems: (1) MySQL for relational data model, (2) MongoDB for NoSQL data model, and (3) HBase for Hadoop framework. The benchmark was conducted by running four different workloads. Each workload is executed using a different increasing operation and thread count, while observing how their change respectively affects throughput, latency, and runtime.",
"title": ""
},
{
"docid": "6ebb0bccba167e4b093e7832621e3e23",
"text": "Bump-less Cu/adhesive hybrid bonding is a promising technology for 2.5D/3D integration. The remaining issues of this technology include high Cu–Cu bonding temperature, long thermal-compression time (low throughput), and large thermal stress. In this paper, we investigate a Cu-first hybrid bonding process in hydrogen(H)-containing formic acid (HCOOH) vapor ambient, lowering the bonding temperature to 180 °C and shortening the thermal-compression time to 600 s. We find that the H-containing HCOOH vapor pre-bonding treatment is effective for Cu surface activation and friendly to adhesives at treatment temperature of 160–200 °C. The effects of surface activation (temperature and time) on Cu–Cu bonding and cyclo-olefin polymer (COP) adhesive bonding are studied by shear tests, fracture surface observations, and interfacial observations. Cu/adhesive hybrid bonding was successfully demonstrated at a bonding temperature of 180 °C with post-bonding adhesive curing at 200 °C.",
"title": ""
},
{
"docid": "1683cf711705b78b9465d8053a94b473",
"text": "In this paper, we investigate the problem of counting rosette leaves from an RGB image, an important task in plant phenotyping. We propose a data-driven approach for this task generalized over different plant species and imaging setups. To accomplish this task, we use state-of-the-art deep learning architectures: a deconvolutional network for initial segmentation and a convolutional network for leaf counting. Evaluation is performed on the leaf counting challenge dataset at CVPPP-2017. Despite the small number of training samples in this dataset, as compared to typical deep learning image sets, we obtain satisfactory performance on segmenting leaves from the background as a whole and counting the number of leaves using simple data augmentation strategies. Comparative analysis is provided against methods evaluated on the previous competition datasets. Our framework achieves mean and standard deviation of absolute count difference of 1.62 and 2.30 averaged over all five test datasets.",
"title": ""
},
{
"docid": "eaa6daff2f28ea7f02861e8c67b9c72b",
"text": "The demand of fused magnesium furnaces (FMFs) refers to the average value of the power of the FMFs over a fixed period of time before the current time. The demand is an indicator of the electricity consumption of high energy-consuming FMFs. When the demand exceeds the limit of the Peak Demand (a predetermined maximum demand), the power supply of some FMF will be cut off to ensure that the demand is no more than Peak Demand. But the power cutoff will destroy the heat balance, reduce the quality and yield of the product. The composition change of magnesite in FMFs will cause demand spike occasionally, which a sudden increase in demand exceeds the limit and then drops below the limit. As a result, demand spike cause the power cutoff. In order to avoid the power cutoff at the moment of demand spike, the demand of FMFs needs to be forecasted. This paper analyzes the dynamic model of the demand of FMFs, using the power data, presents a data-driven demand forecasting method. This method consists of the following: PACF based decision module for the number of the input variables of the forecasting model, RBF neural network (RBFNN) based power variation rate forecasting model and demand forecasting model. Simulations based on actual data and industrial experiments at a fused magnesia plant show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "3fdd81a3e2c86f43152f72e159735a42",
"text": "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strength and weakness, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.",
"title": ""
},
{
"docid": "90241619360fe97b83e2777438a6c4f8",
"text": "Although K-means clustering algorithm is simple and popular, it has a fundamental drawback of falling into local optima that depend on the randomly generated initial centroid values. Optimization algorithms are well known for their ability to guide iterative computation in searching for global optima. They also speed up the clustering process by achieving early convergence. Contemporary optimization algorithms inspired by biology, including the Wolf, Firefly, Cuckoo, Bat and Ant algorithms, simulate swarm behavior in which peers are attracted while steering towards a global objective. It is found that these bio-inspired algorithms have their own virtues and could be logically integrated into K-means clustering to avoid local optima during iteration to convergence. In this paper, the constructs of the integration of bio-inspired optimization methods into K-means clustering are presented. The extended versions of clustering algorithms integrated with bio-inspired optimization methods produce improved results. Experiments are conducted to validate the benefits of the proposed approach.",
"title": ""
},
{
"docid": "ef0c5454b9b7854866712e897c29a198",
"text": "This paper presents a new online clustering algorithm called SAFN which is used to learn continuously evolving clusters from non-stationary data. The SAFN uses a fast adaptive learning procedure to take into account variations over time. In non-stationary and multi-class environment, the SAFN learning procedure consists of five main stages: creation, adaptation, mergence, split and elimination. Experiments are carried out in three kinds of datasets to illustrate the performance of the SAFN algorithm for online clustering. Compared with SAKM algorithm, SAFN algorithm shows better performance in accuracy of clustering and multi-class high-dimension data.",
"title": ""
},
{
"docid": "e66ae650db7c4c75a88ee6cf1ea8694d",
"text": "Traditional recommender systems minimize prediction error with respect to users' choices. Recent studies have shown that recommender systems have a positive effect on the provider's revenue.\n In this paper we show that by providing a set of recommendations different than the one perceived best according to user acceptance rate, the recommendation system can further increase the business' utility (e.g. revenue), without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to reveal, and the business, which wants to promote specific content.\n We performed a large body of experiments comparing a commercial state-of-the-art recommendation engine with a modified recommendation list, which takes into account the utility (or revenue) which the business obtains from each suggestion that is accepted by the user. We show that the modified recommendation list is more desirable for the business, as the end result gives the business a higher utility (or revenue). To study possible reduce in satisfaction by providing the user worse suggestions, we asked the users how they perceive the list of recommendation that they received. Differences in user satisfaction between the lists is negligible, and not statistically significant.\n We also uncover a phenomenon where movie consumers prefer watching and even paying for movies that they have already seen in the past than movies that are new to them.",
"title": ""
},
{
"docid": "13897df01d4c03191dd015a04c3a5394",
"text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline. ∗Corresponding Author",
"title": ""
},
{
"docid": "3bba36e8f3d3a490681e82c8c3a10b11",
"text": "This paper describes the design and implementation of programmable AXI bus Interface modules in Verilog Hardware Description Language (HDL) and implementation in Xilinx Spartan 3E FPGA. All the interface modules are reconfigurable with the data size, burst type, number of transfers in a burst. Multiple masters can communicate with different slave memory locations concurrently. An arbiter controls the burst grant to different bus masters based on Round Robin algorithm. Separate decoder modules are implemented for write address channel, write data channel, write response channel, read address channel, read data channel. The design can support a maximum of 16 masters. All the RTL simulations are performed using Modelsim RTL Simulator. Each independent module is synthesized in XC3S250EPQ208-5 FPGA and the maximum speed is found to be 298.958 MHz. All the design modules can be integrated to create a soft IP for the AXI BUS system.",
"title": ""
},
{
"docid": "86aaee95a4d878b53fd9ee8b0735e208",
"text": "The tensegrity concept has long been considered as a basis for lightweight and compact packaging deployable structures, but very few studies are available. This paper presents a complete design study of a deployable tensegrity mast with all the steps involved: initial formfinding, structural analysis, manufacturing and deployment. Closed-form solutions are used for the formfinding. A manufacturing procedure in which the cables forming the outer envelope of the mast are constructed by two-dimensional weaving is used. The deployment of the mast is achieved through the use of self-locking hinges. A stiffness comparison between the tensegrity mast and an articulated truss mast shows that the tensegrity mast is weak in bending.",
"title": ""
},
{
"docid": "b0e94a0fdaf280d9e1942befdc4ac660",
"text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the hand-eye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of Daniilid is 1999 (IJRR) for SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.",
"title": ""
},
{
"docid": "73f8a5e5e162cc9b1ed45e13a06e78a5",
"text": "Two major projects in the U.S. and Europe have joined in a collaboration to work toward achieving interoperability among language resources. In the U.S., the project, Sustainable Interoperability for Language Technology (SILT) has been funded by the National Science Foundation under the INTEROP program, and in Europe, FLaReNet, Fostering Language Resources Network, has been funded by the European Commission under the eContentPlus framework. This international collaborative effort involves members of the language processing community and others working in related areas to build consensus regarding the sharing of data and technologies for language resources and applications, to work towards interoperability of existing data, and, where possible, to promote standards for annotation and resource building. This paper focuses on the results of a recent workshop whose goal was to arrive at operational definitions for interoperability over four thematic areas, including metadata for describing language resources, data categories and their semantics, resource publication requirements, and software sharing.",
"title": ""
},
{
"docid": "ff67f2bbf20f5ad2bef6641e8e7e3deb",
"text": "An observation one can make when reviewing the literature on physical activity is that health-enhancing exercise habits tend to wear off as soon as individuals enter adolescence. Therefore, exercise habits should be promoted and preserved early in life. This article focuses on the formation of physical exercise habits. First, the literature on motivational determinants of habitual exercise and related behaviours is discussed, and the concept of habit is further explored. Based on this literature, a theoretical model of exercise habit formation is proposed. More specifically, expanding on the idea that habits are the result of automated cognitive processes, it is argued that physical exercise habits are capable of being automatically activated by the situational features that normally precede these behaviours. These habits may enhance health as a result of consistent performance over a long period of time. Subsequently, obstacles to the formation of exercise habits are discussed and interventions that may anticipate these obstacles are presented. Finally, implications for theory and practice are briefly discussed.",
"title": ""
},
{
"docid": "861b170e5da6941e2cf55d8b7d9799b6",
"text": "Scaling wireless charging to power levels suitable for heavy duty passenger vehicles and mass transit bus requires indepth assessment of wireless power transfer (WPT) architectures, component sizing and stress, package size, electrical insulation requirements, parasitic loss elements, and cost minimization. It is demonstrated through an architecture comparison that the voltage rating of the power inverter semiconductors will be higher for inductor-capacitor-capacitor (LCC) than for a more conventional Series-Parallel (S-P) tuning. Higher voltage at the source inverter dc bus facilitates better utilization of the semiconductors, hence lower cost. Electrical and thermal stress factors of the passive components are explored, in particular the compensating capacitors and coupling coils. Experimental results are presented for a prototype, precommercial, 10 kW wireless charger designed for heavy duty (HD) vehicle application. Results are in good agreement with theory and validate a design that minimizes component stress.",
"title": ""
},
{
"docid": "6a0c54fcac95f86df54a0508588aee61",
"text": "Liveness detection (often referred to as presentation attack detection) is the ability to detect artificial objects presented to a biometric device with an intention to subvert the recognition system. This paper presents the database of iris printout images with a controlled quality, and its fundamental application, namely development of liveness detection method for iris recognition. The database gathers images of only those printouts that were accepted by an example commercial camera, i.e. the iris template calculated for an artefact was matched to the corresponding iris reference of the living eye. This means that the quality of the employed imitations is not accidental and precisely controlled. The database consists of 729 printout images for 243 different eyes, and 1274 images of the authentic eyes, corresponding to imitations. It may thus serve as a good benchmark for at least two challenges: a) assessment of the liveness detection algorithms, and b) assessment of the eagerness of matching real and fake samples by iris recognition methods. To our best knowledge, the iris printout database of such properties is the first worldwide published as of today. In its second part, the paper presents an example application of this database, i.e. the development of liveness detection method based on iris image frequency analysis. We discuss how to select frequency windows and regions of interest to make the method sensitive to “alien frequencies” resulting from the printing process. The proposed method shows a very promising results, since it may be configured to achieve no false alarms when the rate of accepting the iris printouts is approximately 5% (i.e. 95% of presentation attack trials are correctly identified). This favorable compares to the results of commercial equipment used in the database development, as this device accepted all the printouts used. The method employs the same image as used in iris recognition process, hence no investments into the capture devices is required, and may be applied also to other carriers for printed iris patterns, e.g. contact lens.",
"title": ""
},
{
"docid": "43b9753d934d2e7598d6342a81f21bed",
"text": "A system has been developed which is capable of inducing brain injuries of graded severity from mild concussion to instantaneous death. A pneumatic shock tester subjects a monkey to a non-impact controlled single sagittal rotation which displaces the head 60 degrees in 10-20 msec. Results derived from 53 experiments show that a good correlation exists between acceleration delivered to the head, the resultant neurological status and the brain pathology. A simple experimental trauma severity (ETS) scale is offered based on changes in the heart rate, respiratory rate, corneal reflex and survivability. ETS grades 1 and 2 show heart rate or respiratory changes but no behavioral or pathological abnormality. ETS grades 3 and 4 have temporary corneal reflex abolition, behavioral unconsciousness, and post-traumatic behavioral abnormalities. Occasional subdural haematomas are seen. Larger forces cause death (ETS 5) from primary apnea or from large subdural haematomas. At the extreme range, instantaneous death (ETS 6) occurs because of pontomedullary lacerations. This model and the ETS scale offer the ability to study a broad spectrum of types of experimental head injury and underscore the importance of angular acceleration as a mechanism of head injury.",
"title": ""
},
{
"docid": "b4409a8e8a47bc07d20cebbfaccb83fd",
"text": "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.",
"title": ""
}
] |
scidocsrr
|
16880cc223b10e55afce93c0630e34b5
|
Scheduling techniques for hybrid circuit/packet networks
|
[
{
"docid": "8fcc8c61dd99281cfda27bbad4b7623a",
"text": "Modern data centers are massive, and support a range of distributed applications across potentially hundreds of server racks. As their utilization and bandwidth needs continue to grow, traditional methods of augmenting bandwidth have proven complex and costly in time and resources. Recent measurements show that data center traffic is often limited by congestion loss caused by short traffic bursts. Thus an attractive alternative to adding physical bandwidth is to augment wired links with wireless links in the 60 GHz band.\n We address two limitations with current 60 GHz wireless proposals. First, 60 GHz wireless links are limited by line-of-sight, and can be blocked by even small obstacles. Second, even beamforming links leak power, and potential interference will severely limit concurrent transmissions in dense data centers. We propose and evaluate a new wireless primitive for data centers, 3D beamforming, where 60 GHz signals bounce off data center ceilings, thus establishing indirect line-of-sight between any two racks in a data center. We build a small 3D beamforming testbed to demonstrate its ability to address both link blockage and link interference, thus improving link range and number of concurrent transmissions in the data center. In addition, we propose a simple link scheduler and use traffic simulations to show that these 3D links significantly expand wireless capacity compared to their 2D counterparts.",
"title": ""
}
] |
[
{
"docid": "5b6daefbefd44eea4e317e673ad91da3",
"text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.",
"title": ""
},
{
"docid": "74f674ddfd04959303bb89bd6ef22b66",
"text": "Ethernet is the survivor of the LAN wars. It is hard to find an IP packet that has not passed over an Ethernet segment. One important reason for this is Ethernet's simplicity and ease of configuration. However, Ethernet has always been known to be an insecure technology. Recent successful malware attacks and the move towards cloud computing in data centers demand that attention be paid to the security aspects of Ethernet. In this paper, we present known Ethernet related threats and discuss existing solutions from business, hacker, and academic communities. Major issues, like insecurities related to Address Resolution Protocol and to self-configurability, are discussed. The solutions fall roughly into three categories: accepting Ethernet's insecurity and circling it with firewalls; creating a logical separation between the switches and end hosts; and centralized cryptography based schemes. However, none of the above provides the perfect combination of simplicity and security befitting Ethernet.",
"title": ""
},
{
"docid": "9868b2a338911071e5e0553d6aa87eb7",
"text": "This paper reports on a workshop in June 2007 on the topic of the insider threat. Attendees represented academia and research institutions, consulting firms, industry—especially the financial services sector, and government. Most participants were from the United States. Conventional wisdom asserts that insiders account for roughly a third of the computer security loss. Unfortunately, there is currently no way to validate or refute that assertion, because data on the insider threat problem is meager at best. Part of the reason so little data exists on the insider threat problem is that the concepts of insider and insider threat are not consistently defined. Consequently, it is hard to compare even the few pieces of insider threat data that do exist. Monitoring is a means of addressing the insider threat, although it is more successful to verify a case of suspected insider attack than it is to identify insider attacks. Monitoring has (negative) implications for personal privacy. However, companies generally have wide leeway to monitor the activity of their employees. Psychological profiling of potential insider attackers is appealing but may be hard to accomplish. More productive may be using psychological tools to promote positive behavior on the part of employees.",
"title": ""
},
{
"docid": "b200836d9046e79b61627122419d93c4",
"text": "Digital evidence plays a vital role in determining legal case admissibility in electronic- and cyber-oriented crimes. Considering the complicated level of the Internet of Things (IoT) technology, performing the needed forensic investigation will be definitely faced by a number of challenges and obstacles, especially in digital evidence acquisition and analysis phases. Based on the currently available network forensic methods and tools, the performance of IoT forensic will be producing a deteriorated digital evidence trail due to the sophisticated nature of IoT connectivity and data exchangeability via the “things”. In this paper, a revision of IoT digital evidence acquisition procedure is provided. In addition, an improved theoretical framework for IoT forensic model that copes with evidence acquisition issues is proposed and discussed.",
"title": ""
},
{
"docid": "e13b4b92c639a5b697356466e00e05c3",
"text": "In fashion retailing, the display of product inventory at the store is important to capture consumers’ attention. Higher inventory levels might allow more attractive displays and thus increase sales, in addition to avoiding stock-outs. We develop a choice model where product demand is indeed affected by inventory, and controls for product and store heterogeneity, seasonality, promotions and potential unobservable shocks in each market. We empirically test the model with daily traffic, inventory and sales data from a large retailer, at the store-day-product level. We find that the impact of inventory level on sales is positive and highly significant, even in situations of extremely high service level. The magnitude of this effect is large: each 1% increase in product-level inventory at the store increases sales of 0.58% on average. This supports the idea that inventory has a strong role in helping customers choose a particular product within the assortment. We finally describe how a retailer should optimally decide its inventory levels within a category and describe the properties of the optimal solution. Applying such optimization to our data set yields consistent and significant revenue improvements, of more than 10% for any date and store compared to current practices. Submitted: April 6, 2016. Revised: May 17, 2017",
"title": ""
},
{
"docid": "cc8e52fdb69a9c9f3111287905f02bfc",
"text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.",
"title": ""
},
{
"docid": "acab6a0a8b5e268cd0a5416bd00b4f55",
"text": "We propose SocialFilter, a trust-aware collaborative spam mitigation system. Our proposal enables nodes with no email classification functionality to query the network on whether a host is a spammer. It employs Sybil-resilient trust inference to weigh the reports concerning spamming hosts that collaborating spam-detecting nodes (reporters) submit to the system. It weighs the spam reports according to the trustworthiness of their reporters to derive a measure of the system's belief that a host is a spammer. SocialFilter is the first collaborative unwanted traffic mitigation system that assesses the trustworthiness of spam reporters by both auditing their reports and by leveraging the social network of the reporters' administrators. The design and evaluation of our proposal offers us the following lessons: a) it is plausible to introduce Sybil-resilient Online-Social-Network-based trust inference mechanisms to improve the reliability and the attack-resistance of collaborative spam mitigation; b) using social links to obtain the trustworthiness of reports concerning spammers can result in comparable spam-blocking effectiveness with approaches that use social links to rate-limit spam (e.g., Ostra [27]); c) unlike Ostra, in the absence of reports that incriminate benign email senders, SocialFilter yields no false positives.",
"title": ""
},
{
"docid": "dfc383a057aa4124dfc4237e607c321a",
"text": "Obfuscation is applied to large quantities of benign and malicious JavaScript throughout the web. In situations where JavaScript source code is being submitted for widespread use, such as in a gallery of browser extensions (e.g., Firefox), it is valuable to require that the code submitted is not obfuscated and to check for that property. In this paper, we describe NOFUS, a static, automatic classifier that distinguishes obfuscated and non-obfuscated JavaScript with high precision. Using a collection of examples of both obfuscated and non-obfuscated JavaScript, we train NOFUS to distinguish between the two and show that the classifier has both a low false positive rate (about 1%) and low false negative rate (about 5%). Applying NOFUS to collections of deployed JavaScript, we show it correctly identifies obfuscated JavaScript files from Alexa top 50 websites. While prior work conflates obfuscation with maliciousness (assuming that detecting obfuscation implies maliciousness), we show that the correlation is weak. Yes, much malware is hidden using obfuscation, but so is benign JavaScript. Further, applying NOFUS to known JavaScript malware, we show our classifier finds 15% of the files are unobfuscated, showing that not all malware is obfuscated.",
"title": ""
},
{
"docid": "6b3db3006f8314559bbbe41620466c6e",
"text": "Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. That we were able to get better results by a deep learning architecture that autonomously learns the features from the images is the main insight of this study.",
"title": ""
},
{
"docid": "e120320dbe8fa0e2475b96a0b07adec8",
"text": "BACKGROUND\nProne hip extension (PHE) is a common and widely accepted test used for assessment of the lumbo-pelvic movement pattern. Considerable increased in lumbar lordosis during this test has been considered as impairment of movement patterns in lumbo-pelvic region. The purpose of this study was to investigate the change of lumbar lordosis in PHE test in subjects with and without low back pain (LBP).\n\n\nMETHOD\nA two-way mixed design with repeated measurements was used to investigate the lumbar lordosis changes during PHE in two groups of subjects with and without LBP. An equal number of subjects (N = 30) were allocated to each group. A standard flexible ruler was used to measure the size of lumbar lordosis in prone-relaxed position and PHE test in each group.\n\n\nRESULT\nThe result of two-way mixed-design analysis of variance revealed significant health status by position interaction effect for lumbar lordosis (P < 0.001). The main effect of test position on lumbar lordosis was statistically significant (P < 0.001). The lumbar lordosis was significantly greater in the PHE compared to prone-relaxed position in both subjects with and without LBP. The amount of difference in positions was statistically significant between two groups (P < 0.001) and greater change in lumbar lordosis was found in the healthy group compared to the subjects with LBP.\n\n\nCONCLUSIONS\nGreater change in lumbar lordosis during this test may be due to more stiffness in lumbopelvic muscles in the individuals with LBP.",
"title": ""
},
{
"docid": "a3e8a50b38e276d19dc301fcf8818ea1",
"text": "Automated diagnosis of skin cancer is an active area of research with different classification methods proposed so far. However, classification models based on insufficient labeled training data can badly influence the diagnosis process if there is no self-advising and semi supervising capability in the model. This paper presents a semi supervised, self-advised learning model for automated recognition of melanoma using dermoscopic images. Deep belief architecture is constructed using labeled data together with unlabeled data, and fine tuning done by an exponential loss function in order to maximize separation of labeled data. In parallel a self-advised SVM algorithm is used to enhance classification results by counteracting the effect of misclassified data. To increase generalization capability and redundancy of the model, polynomial and radial basis function based SA-SVMs and Deep network are trained using training samples randomly chosen via a bootstrap technique. Then the results are aggregated using least square estimation weighting. The proposed model is tested on a collection of 100 dermoscopic images. The variation in classification error is analyzed with respect to the ratio of labeled and unlabeled data used in the training phase. The classification performance is compared with some popular classification methods and the proposed model using the deep neural processing outperforms most of the popular techniques including KNN, ANN, SVM and semi supervised algorithms like Expectation maximization and transductive SVM.",
"title": ""
},
{
"docid": "4ee6894fade929db82af9cb62fecc0f9",
"text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.",
"title": ""
},
{
"docid": "48d778934127343947b494fe51f56a33",
"text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.",
"title": ""
},
{
"docid": "a07472c2f086332bf0f97806255cb9d5",
"text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.",
"title": ""
},
{
"docid": "67b5bd59689c325365ac765a17886169",
"text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.",
"title": ""
},
{
"docid": "81ca5239dbd60a988e7457076aac05d7",
"text": "OBJECTIVE\nFrontline health professionals need a \"red flag\" tool to aid their decision making about whether to make a referral for a full diagnostic assessment for an autism spectrum condition (ASC) in children and adults. The aim was to identify 10 items on the Autism Spectrum Quotient (AQ) (Adult, Adolescent, and Child versions) and on the Quantitative Checklist for Autism in Toddlers (Q-CHAT) with good test accuracy.\n\n\nMETHOD\nA case sample of more than 1,000 individuals with ASC (449 adults, 162 adolescents, 432 children and 126 toddlers) and a control sample of 3,000 controls (838 adults, 475 adolescents, 940 children, and 754 toddlers) with no ASC diagnosis participated. Case participants were recruited from the Autism Research Centre's database of volunteers. The control samples were recruited through a variety of sources. Participants completed full-length versions of the measures. The 10 best items were selected on each instrument to produce short versions.\n\n\nRESULTS\nAt a cut-point of 6 on the AQ-10 adult, sensitivity was 0.88, specificity was 0.91, and positive predictive value (PPV) was 0.85. At a cut-point of 6 on the AQ-10 adolescent, sensitivity was 0.93, specificity was 0.95, and PPV was 0.86. At a cut-point of 6 on the AQ-10 child, sensitivity was 0.95, specificity was 0.97, and PPV was 0.94. At a cut-point of 3 on the Q-CHAT-10, sensitivity was 0.91, specificity was 0.89, and PPV was 0.58. Internal consistency was >0.85 on all measures.\n\n\nCONCLUSIONS\nThe short measures have potential to aid referral decision making for specialist assessment and should be further evaluated.",
"title": ""
},
{
"docid": "99a4fc6540802ff820fef9ca312cdc1c",
"text": "Problem diagnosis is one crucial aspect in the cloud operation that is becoming increasingly challenging. On the one hand, the volume of logs generated in today's cloud is overwhelmingly large. On the other hand, cloud architecture becomes more distributed and complex, which makes it more difficult to troubleshoot failures. In order to address these challenges, we have developed a tool, called LOGAN, that enables operators to quickly identify the log entries that potentially lead to the root cause of a problem. It constructs behavioral reference models from logs that represent the normal patterns. When problem occurs, our tool enables operators to inspect the divergence of current logs from the reference model and highlight logs likely to contain the hints to the root cause. To support these capabilities we have designed and developed several mechanisms. First, we developed log correlation algorithms using various IDs embedded in logs to help identify and isolate log entries that belong to the failed request. Second, we provide efficient log comparison to help understand the differences between different executions. Finally we designed mechanisms to highlight critical log entries that are likely to contain information pertaining to the root cause of the problem. We have implemented the proposed approach in a popular cloud management system, OpenStack, and through case studies, we demonstrate this tool can help operators perform problem diagnosis quickly and effectively.",
"title": ""
},
{
"docid": "211037c38a50ff4169f3538c3b6af224",
"text": "In this paper we present a method to obtain a depth map from a single image of a scene by exploiting both image content and user interaction. Assuming that regions with low gradients will have similar depth values, we formulate the problem as an optimization process across a graph, where pixels are considered as nodes and edges between neighbouring pixels are assigned weights based on the image gradient. Starting from a number of userdefined constraints, depth values are propagated between highly connected nodes i.e. with small gradients. Such constraints include, for example, depth equalities and inequalities between pairs of pixels, and may include some information about perspective. This framework provides a depth map of the scene, which is useful for a number of applications.",
"title": ""
},
{
"docid": "5d0a77058d6b184cb3c77c05363c02e0",
"text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.",
"title": ""
},
{
"docid": "dfd88750bc1d42e8cc798d2097426910",
"text": "Melanoma is one of the most lethal forms of skin cancer. It occurs on the skin surface and develops from cells known as melanocytes. The same cells are also responsible for benign lesions commonly known as moles, which are visually similar to melanoma in its early stage. If melanoma is treated correctly, it is very often curable. Currently, much research is concentrated on the automated recognition of melanomas. In this paper, we propose an automated melanoma recognition system, which is based on deep learning method combined with so called hand-crafted RSurf features and Local Binary Patterns. The experimental evaluation on a large publicly available dataset demonstrates high classification accuracy, sensitivity, and specificity of our proposed approach when it is compared with other classifiers on the same dataset.",
"title": ""
}
] |
scidocsrr
|
dace1ba50c98825c4f04cd0296c66488
|
Application of Data Mining in Educational Database for Predicting Behavioural Patterns of the Students
|
[
{
"docid": "26e24e4a59943f9b80d6bf307680b70c",
"text": "We present a machine-learned model that can automatically detect when a student using an intelligent tutoring system is off-task, i.e., engaged in behavior which does not involve the system or a learning task. This model was developed using only log files of system usage (i.e. no screen capture or audio/video data). We show that this model can both accurately identify each student's prevalence of off-task behavior and can distinguish off-task behavior from when the student is talking to the teacher or another student about the subject matter. We use this model in combination with motivational and attitudinal instruments, developing a profile of the attitudes and motivations associated with off-task behavior, and compare this profile to the attitudes and motivations associated with other behaviors in intelligent tutoring systems. We discuss how the model of off-task behavior can be used within interactive learning environments which respond to when students are off-task.",
"title": ""
},
{
"docid": "7834f32e3d6259f92f5e0beb3a53cc04",
"text": "An educational institution needs to have an approximate prior knowledge of enrolled students to predict their performance in future academics. This helps them to identify promising students and also provides them an opportunity to pay attention to and improve those who would probably get lower grades. As a solution, we have developed a system which can predict the performance of students from their previous performances using concepts of data mining techniques under Classification. We have analyzed the data set containing information about students, such as gender, marks scored in the board examinations of classes X and XII, marks and rank in entrance examinations and results in first year of the previous batch of students. By applying the ID3 (Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we have predicted the general and individual performance of freshly admitted students in future examinations.",
"title": ""
}
] |
[
{
"docid": "1ef6623e117998098ee609ea79d5f17d",
"text": "Effective enforcement of laws and policies requires expending resources to prevent and detect offenders, as well as appropriate punishment schemes to deter violators. In particular, enforcement of privacy laws and policies in modern organizations that hold large volumes of personal information (e.g., hospitals, banks) relies heavily on internal audit mechanisms. We study economic considerations in the design of these mechanisms, focusing in particular on effective resource allocation and appropriate punishment schemes. We present an audit game model that is a natural generalization of a standard security game model for resource allocation with an additional punishment parameter. Computing the Stackelberg equilibrium for this game is challenging because it involves solving an optimization problem with non-convex quadratic constraints. We present an additive FPTAS that efficiently computes the solution.",
"title": ""
},
{
"docid": "f9806d3542f575d53ef27620e4aa493b",
"text": "Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area this is so clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new live to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, they should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.",
"title": ""
},
{
"docid": "435c6eb000618ef63a0f0f9f919bc0b4",
"text": "Selective sampling is an active variant of online learning in which the learner is allowed to adaptively query the label of an observed example. The goal of selective sampling is to achieve a good trade-off between prediction performance and the number of queried labels. Existing selective sampling algorithms are designed for vector-based data. In this paper, motivated by the ubiquity of graph representations in real-world applications, we propose to study selective sampling on graphs. We first present an online version of the well-known Learning with Local and Global Consistency method (OLLGC). It is essentially a second-order online learning algorithm, and can be seen as an online ridge regression in the Hilbert space of functions defined on graphs. We prove its regret bound in terms of the structural property (cut size) of a graph. Based on OLLGC, we present a selective sampling algorithm, namely Selective Sampling with Local and Global Consistency (SSLGC), which queries the label of each node based on the confidence of the linear function on graphs. Its bound on the label complexity is also derived. We analyze the low-rank approximation of graph kernels, which enables the online algorithms scale to large graphs. Experiments on benchmark graph datasets show that OLLGC outperforms the state-of-the-art first-order algorithm significantly, and SSLGC achieves comparable or even better results than OLLGC while querying substantially fewer nodes. Moreover, SSLGC is overwhelmingly better than random sampling.",
"title": ""
},
{
"docid": "ab2c4d5317d2e10450513283c21ca6d3",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "385c7c16af40ae13b965938ac3bce34c",
"text": "The information age has brought a deluge of data. Much of this is in text form, insurmountable in scope for humans and incomprehensible in structure for computers. Text mining is an expanding field of research that seeks to utilize the information contained in vast document collections. General data mining methods based on machine learning face challenges with the scale of text data, posing a need for scalable text mining methods. This thesis proposes a solution to scalable text mining: generative models combined with sparse computation. A unifying formalization for generative text models is defined, bringing together research traditions that have used formally equivalent models, but ignored parallel developments. This framework allows the use of methods developed in different processing tasks such as retrieval and classification, yielding effective solutions across different text mining tasks. Sparse computation using inverted indices is proposed for inference on probabilistic models. This reduces the computational complexity of the common text mining operations according to sparsity, yielding probabilistic models with the scalability of modern search engines. The proposed combination provides sparse generative models: a solution for text mining that is general, effective, and scalable. Extensive experimentation on text classification and ranked retrieval datasets are conducted, showing that the proposed solution matches or outperforms the leading task-specific methods in effectiveness, with a order of magnitude decrease in classification times for Wikipedia article categorization with a million classes. The developed methods were further applied in two 2014 Kaggle data mining prize competitions with over a hundred competing teams, earning first and second places.",
"title": ""
},
{
"docid": "8f177b79f0b89510bd84e1f503b5475f",
"text": "We propose a distributed cooperative framework among base stations (BS) with load balancing (dubbed as inter-BS for simplicity) for improving energy efficiency of OFDMA-based cellular access networks. Proposed inter-BS cooperation is formulated following the principle of ecological self-organization. Based on the network traffic, BSs mutually cooperate for distributing traffic among themselves and thus, the number of active BSs is dynamically adjusted for energy savings. For reducing the number of inter-BS communications, a three-step measure is taken by using estimated load factor (LF), initializing the algorithm with only the active BSs and differentiating neighboring BSs according to their operating modes for distributing traffic. An exponentially weighted moving average (EWMA)-based technique is proposed for estimating the LF in advance based on the historical data. Various selection schemes for finding the best BSs to distribute traffic are also explored. Furthermore, we present an analytical formulation for modeling the dynamic switching of BSs. A thorough investigation under a wide range of network settings is carried out in the context of an LTE system. Results demonstrate a significant enhancement in network energy efficiency yielding a much higher savings than the compared schemes. Moreover, frequency of inter-BS correspondences can be reduced by over 80%.",
"title": ""
},
{
"docid": "06e6704699652849e745df7c472fdc7b",
"text": "Despite extensive research, many methods in software quality prediction still exhibit some degree of uncertainty in their results. Rather than treating this as a problem, this paper asks if this uncertainty is a resource that can simplify software quality prediction. For example, Deb’s principle of ε-dominance states that if there exists some ε value below which it is useless or impossible to distinguish results, then it is superfluous to explore anything less than ε . We say that for “large ε problems”, the results space of learning effectively contains just a few regions. If many learners are then applied to such large ε problems, they would exhibit a “many roads lead to Rome” property; i.e., many different software quality prediction methods would generate a small set of very similar results. This paper explores DART, an algorithm especially selected to succeed for large ε software quality prediction problems. DART is remarkable simple yet, on experimentation, it dramatically outperforms three sets of state-of-the-art defect prediction methods. The success of DART for defect prediction begs the questions: how many other domains in software quality predictors can also be radically simplified? This will be a fruitful direction for future work.",
"title": ""
},
{
"docid": "02ed562cb1a532f937a8590226bb44dc",
"text": "We present a new algorithm for approximate inference in prob abilistic programs, based on a stochastic gradient for variational programs. Th is method is efficient without restrictions on the probabilistic program; it is pa rticularly practical for distributions which are not analytically tractable, inclu ding highly structured distributions that arise in probabilistic programs. We show ho w t automatically derive mean-field probabilistic programs and optimize them , and demonstrate that our perspective improves inference efficiency over other al gorithms.",
"title": ""
},
{
"docid": "3cda92028692a25411d74e5a002740ac",
"text": "Protecting sensitive information from unauthorized disclosure is a major concern of every organization. As an organization’s employees need to access such information in order to carry out their daily work, data leakage detection is both an essential and challenging task. Whether caused by malicious intent or an inadvertent mistake, data loss can result in significant damage to the organization. Fingerprinting is a content-based method used for detecting data leakage. In fingerprinting, signatures of known confidential content are extracted and matched with outgoing content in order to detect leakage of sensitive content. Existing fingerprinting methods, however, suffer from two major limitations. First, fingerprinting can be bypassed by rephrasing (or minor modification) of the confidential content, and second, usually the whole content of document is fingerprinted (including non-confidential parts), resulting in false alarms. In this paper we propose an extension to the fingerprinting approach that is based on sorted k-skip-n-grams. The proposed method is able to produce a fingerprint of the core confidential content which ignores non-relevant (non-confidential) sections. In addition, the proposed fingerprint method is more robust to rephrasing and can also be used to detect a previously unseen confidential document and therefore provide better detection of intentional leakage incidents.",
"title": ""
},
{
"docid": "4d4c0d5a0abcd38aff2ba514f080edc0",
"text": "We present an approach to adaptively utilize deep neural networks in order to reduce the evaluation time on new examples without loss of classification performance. Rather than attempting to redesign or approximate existing networks, we propose two schemes that adaptively utilize networks. First, we pose an adaptive network evaluation scheme, where we learn a system to adaptively choose the components of a deep network to be evaluated for each example. By allowing examples correctly classified using early layers of the system to exit, we avoid the computational time associated with full evaluation of the network. Building upon this approach, we then learn a network selection system that adaptively selects the network to be evaluated for each example. We exploit the fact that many examples can be correctly classified using relatively efficient networks and that complex, computationally costly networks are only necessary for a small fraction of examples. By avoiding evaluation of these complex networks for a large fraction of examples, computational time can be dramatically reduced. Empirically, these approaches yield dramatic reductions in computational cost, with up to a 2.8x speedup on state-of-the-art networks from the ImageNet image recognition challenge with minimal (less than 1%) loss of accuracy.",
"title": ""
},
{
"docid": "d2292d2e530bca678ab36f387488f8f3",
"text": "One key advantage of 4G OFDM system is the relatively simple receiver implementation due to the orthogonal resource allocation. However, from sum-capacity and spectral efficiency points of view, orthogonal systems are never the achieving schemes. With the rapid development of mobile communication systems, a novel concept of non-orthogonal transmission for 5G mobile communications has attracted researches all around the world. In this trend, many new multiple access schemes and waveform modulation technologies were proposed. In this paper, some promising ones of them were discussed which include Non-orthogonal Multiple Access (NOMA), Sparse Code Multiple Access (SCMA), Multi-user Shared Access (MUSA), Pattern Division Multiple Access (PDMA) and some main new waveforms including Filter-bank based Multicarrier (FBMC), Universal Filtered Multi-Carrier (UFMC), Generalized Frequency Division Multiplexing (GFDM). By analyzing and comparing features of these technologies, a research direction of guiding on future 5G multiple access and waveform are given.",
"title": ""
},
{
"docid": "cbefaf40a904b6218bbdca0042f57b14",
"text": "For the purpose of automatically evaluating speakers’ humor usage, we build a presentation corpus containing humorous utterances based on TED talks. Compared to previous data resources supporting humor recognition research, ours has several advantages, including (a) both positive and negative instances coming from a homogeneous data set, (b) containing a large number of speakers, and (c) being open. Focusing on using lexical cues for humor recognition, we systematically compare a newly emerging text classification method based on Convolutional Neural Networks (CNNs) with a well-established conventional method using linguistic knowledge. The CNN method shows its advantages on both higher recognition accuracies and being able to learn essential features auto-",
"title": ""
},
{
"docid": "3bc34f3ef98147015e2ad94a6c615348",
"text": "Objective methods for assessing perceptual image quality traditionally attempt to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MatLab implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. Keywords— Image quality assessment, perceptual quality, human visual system, error sensitivity, structural similarity, structural information, image coding, JPEG, JPEG2000",
"title": ""
},
{
"docid": "06f8b713ed4020c99403c28cbd1befbc",
"text": "In the last decade, deep learning algorithms have become very popular thanks to the achieved performance in many machine learning and computer vision tasks. However, most of the deep learning architectures are vulnerable to so called adversarial examples. This questions the security of deep neural networks (DNN) for many securityand trust-sensitive domains. The majority of the proposed existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs’s cryptographic principle which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we will primarily focus on a gray-box scenario and do not address a white-box one. More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against most famous state-of-the-art attacks in black-box and gray-box scenarios.",
"title": ""
},
{
"docid": "9074416729e07ba4ec11ebd0021b41ed",
"text": "The purpose of this study is to examine the relationships between internet addiction and depression, anxiety, and stress. Participants were 300 university students who were enrolled in mid-size state University, in Turkey. In this study, the Online Cognition Scale and the Depression Anxiety Stress Scale were used. In correlation analysis, internet addiction was found positively related to depression, anxiety, and stress. According to path analysis results, depression, anxiety, and stress were predicted positively by internet addiction. This research shows that internet addiction has a direct impact on depression, anxiety, and stress.",
"title": ""
},
{
"docid": "1b5c1cbe3f53c1f3a50557ff3144887e",
"text": "The emergence of antibiotic resistant Staphylococcus aureus presents a worldwide problem that requires non-antibiotic strategies. This study investigated the anti-biofilm and anti-hemolytic activities of four red wines and two white wines against three S. aureus strains. All red wines at 0.5-2% significantly inhibited S. aureus biofilm formation and hemolysis by S. aureus, whereas the two white wines had no effect. Furthermore, at these concentrations, red wines did not affect bacterial growth. Analyses of hemolysis and active component identification in red wines revealed that the anti-biofilm compounds and anti-hemolytic compounds largely responsible were tannic acid, trans-resveratrol, and several flavonoids. In addition, red wines attenuated S. aureus virulence in vivo in the nematode Caenorhabditis elegans, which is killed by S. aureus. These findings show that red wines and their compounds warrant further attention in antivirulence strategies against persistent S. aureus infection.",
"title": ""
},
{
"docid": "96e24fabd3567a896e8366abdfaad78e",
"text": "Interior permanent magnet synchronous motor (IPMSM) is usually applied to traction motor in the hybrid electric vehicle (HEV). All motors including IPMSM have different parameters and characteristics with various combinations of the number of poles and slots. The proper combination can improve characteristics of traction system ultimately. This paper deals with analysis of the characteristics of IPMSM for mild type HEV according to the combinations of number of poles and slots. The specific models with 16-pole/18-slot, 16-pole/24-slot and 12-pole/18-slot combinations are introduced. And the advantages and disadvantages of these three models are compared. The characteristics of each model are computed in d-q axis equivalent circuit analysis and finite element analysis. After then, the proper combination of the number of poles and slots for HEV traction motor is presented after comparing these three models.",
"title": ""
},
{
"docid": "cd0e7cace1b89af72680f9d8ef38bdf3",
"text": "Analyzing stock market trends and sentiment is an interdisciplinary area of research being undertaken by many disciplines such as Finance, Computer Science, Statistics, and Economics. It has been well established that real time news plays a strong role in the movement of stock prices. With the advent of electronic and online news sources, analysts have to deal with enormous amounts of real-time, unstructured streaming data. In this paper, we present an automated text mining based approach to aggregate news stories from diverse sources and create a News Corpus. The Corpus is filtered down to relevant sentences and analyzed using Natural Language Processing (NLP) techniques. A sentiment metric, called NewsSentiment, utilizing the count of positive and negative polarity words is proposed as a measure of the sentiment of the overall news corpus. We have used various open source packages and tools to develop the news collection and aggregation engine as well as the sentiment evaluation engine. Extensive experimentation has been done using news stories about various stocks. The time variation of NewsSentiment shows a very strong correlation with the actual stock price movement. Our proposed metric has many applications in analyzing current news stories and predicting stock trends for specific companies and sectors of the economy.",
"title": ""
},
{
"docid": "6a02c629f83049712c09ebe43d9a4ac9",
"text": "The term model-driven engineering (MDE) is typically used to describe software development approaches in which abstract models of software systems are created and systematically transformed to concrete implementations. In this paper we give an overview of current research in MDE and discuss some of the major challenges that must be tackled in order to realize the MDE vision of software development. We argue that full realizations of the MDE vision may not be possible in the near to medium-term primarily because of the wicked problems involved. On the other hand, attempting to realize the vision will provide insights that can be used to significantly reduce the gap between evolving software complexity and the technologies used to manage complexity.",
"title": ""
},
{
"docid": "230a79e785aec288582ee12de3d6c262",
"text": "OBJECTIVE\nThe goal of enhanced nutrition in critically ill patients is to improve outcome by reducing lean tissue wasting. However, such effect has not been proven. This study aimed to assess the effect of early administration of parenteral nutrition on muscle volume and composition by repeated quantitative CT.\n\n\nDESIGN\nA preplanned substudy of a randomized controlled trial (Early Parenteral Nutrition Completing Enteral Nutrition in Adult Critically Ill Patients [EPaNIC]), which compared early initiation of parenteral nutrition when enteral nutrition was insufficient (early parenteral nutrition) with tolerating a pronounced nutritional deficit for 1 week in ICU (late parenteral nutrition). Late parenteral nutrition prevented infections and accelerated recovery.\n\n\nSETTING\nUniversity hospital.\n\n\nPATIENTS\nFifteen EPaNIC study neurosurgical patients requiring prescheduled repeated follow-up CT scans and six healthy volunteers matched for age, gender, and body mass index.\n\n\nINTERVENTION\nRepeated abdominal and femoral quantitative CT images were obtained in a standardized manner on median ICU day 2 (interquartile range, 2-3) and day 9 (interquartile range, 8-10). Intramuscular, subcutaneous, and visceral fat compartments were delineated manually. Muscle and adipose tissue volume and composition were quantified using standard Hounsfield Unit ranges.\n\n\nMEASUREMENTS AND MAIN RESULTS\nCritical illness evoked substantial loss of femoral muscle volume in 1 week's time, irrespective of the nutritional regimen. Early parenteral nutrition reduced the quality of the muscle tissue, as reflected by the attenuation, revealing increased intramuscular water/lipid content. Early parenteral nutrition also increased the volume of adipose tissue islets within the femoral muscle compartment. These changes in skeletal muscle quality correlated with caloric intake. In the abdominal muscle compartments, changes were similar, albeit smaller. Femoral and abdominal subcutaneous adipose tissue compartments were unaffected by disease and nutritional strategy.\n\n\nCONCLUSIONS\nEarly parenteral nutrition did not prevent the pronounced wasting of skeletal muscle observed over the first week of critical illness. Furthermore, early parenteral nutrition increased the amount of adipose tissue within the muscle compartments.",
"title": ""
}
] |
scidocsrr
|
4e3bac67202b90957932894c971ff95e
|
Towards native code offloading based MCC frameworks for multimedia applications: A survey
|
[
{
"docid": "677dea61996aa5d1461998c09ecc334f",
"text": "Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices, while such applications drain increasingly more battery power of mobile devices. Offloading some parts of the application running on mobile devices onto remote servers/clouds is a promising approach to extend the battery life of mobile devices. However, as data transmission of offloading causes delay and energy costs for mobile devices, it is necessary to carefully design application partitioning/offloading schemes to weigh the benefits against the transmission delay and costs. Due to bandwidth fluctuations in the wireless environment, static partitionings in previous work are unsuitable for mobile platforms with a fixed bandwidth assumption, while dynamic partitionings result in high overhead of continuous partitioning for mobile devices. Therefore, we propose a novel partitioning scheme taking the bandwidth as a variable to improve static partitioning and avoid high costs of dynamic partitioning. Firstly, we construct application Object Relation Graphs (ORGs) by combining static analysis and dynamic profiling to propose partitioning optimization models. Then based on our novel executiontime and energy optimization partitioning models, we propose the Branch-and-Bound based Application Partitioning (BBAP) algorithm and Min-Cut based Greedy Application Partitioning (MCGAP) algorithm. BBAP is suited to finding the optimal partitioning solutions for small applications, while MCGAP is applicable to quickly obtaining suboptimal solutions for large-scale applications. Experimental results demonstrate that both algorithms can adapt to bandwidth fluctuations well, and significantly reduce application execution time and energy consumption by optimally distributing components between mobile devices and servers. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8869cab615e5182c7c03f074ead081f7",
"text": "This article introduces the principal concepts of multimedia cloud computing and presents a novel framework. We address multimedia cloud computing from multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. First, we present a multimedia-aware cloud, which addresses how a cloud can perform distributed multimedia processing and storage and provide quality of service (QoS) provisioning for multimedia services. To achieve a high QoS for multimedia services, we propose a media-edge cloud (MEC) architecture, in which storage, central processing unit (CPU), and graphics processing unit (GPU) clusters are presented at the edge to provide distributed parallel processing and QoS adaptation for various types of devices.",
"title": ""
}
] |
[
{
"docid": "9973dab94e708f3b87d52c24b8e18672",
"text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.",
"title": ""
},
{
"docid": "4ab8913fff86d8a737ed62c56fe2b39d",
"text": "This paper draws on the social and behavioral sciences in an endeavor to specify the nature and microfoundations of the capabilities necessary to sustain superior enterprise performance in an open economy with rapid innovation and globally dispersed sources of invention, innovation, and manufacturing capability. Dynamic capabilities enable business enterprises to create, deploy, and protect the intangible assets that support superior longrun business performance. The microfoundations of dynamic capabilities—the distinct skills, processes, procedures, organizational structures, decision rules, and disciplines—which undergird enterprise-level sensing, seizing, and reconfiguring capacities are difficult to develop and deploy. Enterprises with strong dynamic capabilities are intensely entrepreneurial. They not only adapt to business ecosystems, but also shape them through innovation and through collaboration with other enterprises, entities, and institutions. The framework advanced can help scholars understand the foundations of long-run enterprise success while helping managers delineate relevant strategic considerations and the priorities they must adopt to enhance enterprise performance and escape the zero profit tendency associated with operating in markets open to global competition. Copyright 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "71333997a4f9f38de0b53697d7b7cff1",
"text": "Environmental sustainability of a supply chain depends on the purchasing strategy of the supply chain members. Most of the earlier models have focused on cost, quality, lead time, etc. issues but not given enough importance to carbon emission for supplier evaluation. Recently, there is a growing pressure on supply chain members for reducing the carbon emission of their supply chain. This study presents an integrated approach for selecting the appropriate supplier in the supply chain, addressing the carbon emission issue, using fuzzy-AHP and fuzzy multi-objective linear programming. Fuzzy AHP (FAHP) is applied first for analyzing the weights of the multiple factors. The considered factors are cost, quality rejection percentage, late delivery percentage, green house gas emission and demand. These weights of the multiple factors are used in fuzzy multi-objective linear programming for supplier selection and quota allocation. An illustration with a data set from a realistic situation is presented to demonstrate the effectiveness of the proposed model. The proposed approach can handle realistic situation when there is information vagueness related to inputs. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b803d626421c7e7eaf52635c58523e8f",
"text": "Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte’s 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs.",
"title": ""
},
{
"docid": "a50763db7b9c73ab5e29389d779c343d",
"text": "Near to real-time emotion recognition is a promising task for human-computer interaction (HCI) and human-robot interaction (HRI). Using knowledge about the user's emotions depends upon the possibility to extract information about users' emotions during HCI or HRI without explicitly asking users about the feelings they are experiencing. To be able to sense the user's emotions without interrupting the HCI, we present a new method applied to the emotional experience of the user for extracting semantic information from the autonomic nervous system (ANS) signals associated with emotions. We use the concepts of 1st person - where the subject consciously (and subjectively) extracts the semantic meaning of a given lived experience, (e.g. `I felt amused') - and 3rd person approach - where the experimenter interprets the semantic meaning of the subject's experience from a set of externally (and objectively) measured variables (e.g. galvanic skin response measures). Based on the 3rd person approach, our technique aims at psychologically interpreting physiological parameters (skin conductance and heart rate), and at producing a continuous extraction of the user's affective state during HCI or HRI. We also combine it with the 1st person approach measure which allows a tailored interpretation of the physiological measure closely related to the user own emotional experience",
"title": ""
},
{
"docid": "99bd908e217eb9f56c40abd35839e9b3",
"text": "How does the physical structure of an arithmetic expression affect the computational processes engaged in by reasoners? In handwritten arithmetic expressions containing both multiplications and additions, terms that are multiplied are often placed physically closer together than terms that are added. Three experiments evaluate the role such physical factors play in how reasoners construct solutions to simple compound arithmetic expressions (such as \"2 + 3 × 4\"). Two kinds of influence are found: First, reasoners incorporate the physical size of the expression into numerical responses, tending to give larger responses to more widely spaced problems. Second, reasoners use spatial information as a cue to hierarchical expression structure: More narrowly spaced subproblems within an expression tend to be solved first and tend to be multiplied. Although spatial relationships besides order are entirely formally irrelevant to expression semantics, reasoners systematically use these relationships to support their success with various formal properties.",
"title": ""
},
{
"docid": "9c25a2e343e9e259a9881fd13983c150",
"text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.",
"title": ""
},
{
"docid": "10fd3a7acae83f698ad04c4d0f011600",
"text": "A continuous-rate digital clock and data recovery (CDR) with automatic frequency acquisition is presented. The proposed automatic frequency acquisition scheme implemented using a conventional bang-bang phase detector (BBPD) requires minimum additional hardware, is immune to input data transition density, and is applicable to subrate CDRs. A ring-oscillator-based two-stage fractional-N phase-locked loop (PLL) is used as a digitally controlled oscillator (DCO) to achieve wide frequency range, low noise, and to decouple the tradeoff between jitter transfer (JTRAN) bandwidth and ring oscillator noise suppression in conventional CDRs. The CDR is implemented using a digital D/PLL architecture to decouple JTRAN bandwidth from jitter tolerance (JTOL) corner frequency, eliminate jitter peaking, and remove JTRAN dependence on BBPD gain. Fabricated in a 65 nm CMOS process, the prototype CDR achieves error-free operation (BER <; 10-12) from 4 to 10.5 Gb/s with pseudorandom binary sequence (PRBS) data sequences ranging from PRBS7 to PRBS31. The proposed automatic frequency acquisition scheme always locks the CDR loop within 1000 ppm residual frequency error in worst case. At 10 Gb/s, the CDR consumes 22.5 mW power and achieves a recovered clock long-term jitter of 2.2 psrms/24.0 pspp with PRBS31 input data. The measured JTRAN bandwidth and JTOL corner frequencies are 0.2 and 9 MHz, respectively.",
"title": ""
},
{
"docid": "509fa5630ed7e3e7bd914fb474da5071",
"text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidi-rectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.",
"title": ""
},
{
"docid": "ed5185ea36f61a9216c6f0183b81d276",
"text": "Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.",
"title": ""
},
{
"docid": "7c0677ad61691beecd7f89d5c70f2b5b",
"text": "Bidirectional dc-dc converters (BDC) have recently received a lot of attention due to the increasing need to systems with the capability of bidirectional energy transfer between two dc buses. Apart from traditional application in dc motor drives, new applications of BDC include energy storage in renewable energy systems, fuel cell energy systems, hybrid electric vehicles (HEV) and uninterruptible power supplies (UPS). The fluctuation nature of most renewable energy resources, like wind and solar, makes them unsuitable for standalone operation as the sole source of power. A common solution to overcome this problem is to use an energy storage device besides the renewable energy resource to compensate for these fluctuations and maintain a smooth and continuous power flow to the load. As the most common and economical energy storage devices in medium-power range are batteries and super-capacitors, a dc-dc converter is always required to allow energy exchange between storage device and the rest of system. Such a converter must have bidirectional power flow capability with flexible control in all operating modes. In HEV applications, BDCs are required to link different dc voltage buses and transfer energy between them. For example, a BDC is used to exchange energy between main batteries (200-300V) and the drive motor with 500V dc link. High efficiency, lightweight, compact size and high reliability are some important requirements for the BDC used in such an application. BDCs also have applications in line-interactive UPS which do not use double conversion technology and thus can achieve higher efficiency. In a line-interactive UPS, the UPS output terminals are connected to the grid and therefore energy can be fed back to the inverter dc bus and charge the batteries via a BDC during normal mode. In backup mode, the battery feeds the inverter dc bus again via BDC but in reverse power flow direction. BDCs can be classified into non-isolated and isolated types. Non-isolated BDCs (NBDC) are simpler than isolated BDCs (IBDC) and can achieve better efficiency. However, galvanic isolation is required in many applications and mandated by different standards. The",
"title": ""
},
{
"docid": "1f752034b5307c0118d4156d0b95eab3",
"text": "Importance\nTherapy-related myeloid neoplasms are a potentially life-threatening consequence of treatment for autoimmune disease (AID) and an emerging clinical phenomenon.\n\n\nObjective\nTo query the association of cytotoxic, anti-inflammatory, and immunomodulating agents to treat patients with AID with the risk for developing myeloid neoplasm.\n\n\nDesign, Setting, and Participants\nThis retrospective case-control study and medical record review included 40 011 patients with an International Classification of Diseases, Ninth Revision, coded diagnosis of primary AID who were seen at 2 centers from January 1, 2004, to December 31, 2014; of these, 311 patients had a concomitant coded diagnosis of myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML). Eighty-six cases met strict inclusion criteria. A case-control match was performed at a 2:1 ratio.\n\n\nMain Outcomes and Measures\nOdds ratio (OR) assessment for AID-directed therapies.\n\n\nResults\nAmong the 86 patients who met inclusion criteria (49 men [57%]; 37 women [43%]; mean [SD] age, 72.3 [15.6] years), 55 (64.0%) had MDS, 21 (24.4%) had de novo AML, and 10 (11.6%) had AML and a history of MDS. Rheumatoid arthritis (23 [26.7%]), psoriasis (18 [20.9%]), and systemic lupus erythematosus (12 [14.0%]) were the most common autoimmune profiles. Median time from onset of AID to diagnosis of myeloid neoplasm was 8 (interquartile range, 4-15) years. A total of 57 of 86 cases (66.3%) received a cytotoxic or an immunomodulating agent. In the comparison group of 172 controls (98 men [57.0%]; 74 women [43.0%]; mean [SD] age, 72.7 [13.8] years), 105 (61.0%) received either agent (P = .50). Azathioprine sodium use was observed more frequently in cases (odds ratio [OR], 7.05; 95% CI, 2.35- 21.13; P < .001). Notable but insignificant case cohort use among cytotoxic agents was found for exposure to cyclophosphamide (OR, 3.58; 95% CI, 0.91-14.11) followed by mitoxantrone hydrochloride (OR, 2.73; 95% CI, 0.23-33.0). Methotrexate sodium (OR, 0.60; 95% CI, 0.29-1.22), mercaptopurine (OR, 0.62; 95% CI, 0.15-2.53), and mycophenolate mofetil hydrochloride (OR, 0.66; 95% CI, 0.21-2.03) had favorable ORs that were not statistically significant. No significant association between a specific length of time of exposure to an agent and the drug's category was observed.\n\n\nConclusions and Relevance\nIn a large population with primary AID, azathioprine exposure was associated with a 7-fold risk for myeloid neoplasm. The control and case cohorts had similar systemic exposures by agent category. No association was found for anti-tumor necrosis factor agents. Finally, no timeline was found for the association of drug exposure with the incidence in development of myeloid neoplasm.",
"title": ""
},
{
"docid": "c451d86c6986fab1a1c4cd81e87e6952",
"text": "Large-scale is a trend in person re-identi- fication (re-id). It is important that real-time search be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper makes the attempt in integrating deep learning and hashing into one framework to evaluate the efficiency and accuracy for large-scale person re-id. We integrate spatial information for discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of the different identity. A triplet loss function is employed with a constraint that the Hamming distance of pedestrian images (or parts) with the same identity is smaller than ones with the different identity. In the experiment, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.",
"title": ""
},
{
"docid": "a1018c89d326274e4b71ffc42f4ebba2",
"text": "We describe a method for improving the classification of short text strings using a combination of labeled training data plus a secondary corpus of unlabeled but related longer documents. We show that such unlabeled background knowledge can greatly decrease error rates, particularly if the number of examples or the size of the strings in the training set is small. This is particularly useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web. Our approach views the task as one of information integration using WHIRL, a tool that combines database functionalities with techniques from the information-retrieval literature.",
"title": ""
},
{
"docid": "b770124e1e5a7b4161b7f00a9bf3916f",
"text": "In the biomedical domain large amount of text documents are unstructured information is available in digital text form. Text Mining is the method or technique to find for interesting and useful information from unstructured text. Text Mining is also an important task in medical domain. The technique uses for Information retrieval, Information extraction and natural language processing (NLP). Traditional approaches for information retrieval are based on key based similarity. These approaches are used to overcome these problems; Semantic text mining is to discover the hidden information from unstructured text and making relationships of the terms occurring in them. In the biomedical text, the text should be in the form of text which can be present in the books, articles, literature abstracts, and so forth. Most of information is stored in the text format, so in this paper we will focus on the role of ontology for semantic text mining by using WordNet. Specifically, we have presented a model for extracting concepts from text documents using linguistic ontology in the domain of medical.",
"title": ""
},
{
"docid": "e090bb879e35dbabc5b3c77c98cd6832",
"text": "Immunity of analog circuit blocks is becoming a major design risk. This paper presents an automated methodology to simulate the susceptibility of a circuit during the design phase. More specifically, we propose a CAD tool which determines the fail/pass criteria of a signal under direct power injection (DPI). This contribution describes the function of the tool which is validated by a LDO regulator.",
"title": ""
},
{
"docid": "585c589cdab52eaa63186a70ac81742d",
"text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).",
"title": ""
},
{
"docid": "7e32376722b669d592a4a97fc1d6bf89",
"text": "The main challenge in achieving good image morphs is to create a map that aligns corresponding image elements. Our aim is to help automate this often tedious task. We compute the map by optimizing the compatibility of corresponding warped image neighborhoods using an adaptation of structural similarity. The optimization is regularized by a thin-plate spline and may be guided by a few user-drawn points. We parameterize the map over a halfway domain and show that this representation offers many benefits. The map is able to treat the image pair symmetrically, model simple occlusions continuously, span partially overlapping images, and define extrapolated correspondences. Moreover, it enables direct evaluation of the morph in a pixel shader without mesh rasterization. We improve the morphs by optimizing quadratic motion paths and by seamlessly extending content beyond the image boundaries. We parallelize the algorithm on a GPU to achieve a responsive interface and demonstrate challenging morphs obtained with little effort.",
"title": ""
},
{
"docid": "ef584ca8b3e9a7f8335549927df1dc16",
"text": "Rapid evolution in technology and the internet brought us to the era of online services. E-commerce is nothing but trading goods or services online. Many customers share their good or bad opinions about products or services online nowadays. These opinions become a part of the decision-making process of consumer and make an impact on the business model of the provider. Also, understanding and considering reviews will help to gain the trust of the customer which will help to expand the business. Many users give reviews for the single product. Such thousands of review can be analyzed using big data effectively. The results can be presented in a convenient visual form for the non-technical user. Thus, the primary goal of research work is the classification of customer reviews given for the product in the map-reduce framework.",
"title": ""
},
{
"docid": "1de2d4e5b74461c142e054ffd2e62c2d",
"text": "Table : Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. v Consider a text sequence represented as X, composed of a sequence of words. Let {v#, v$, ...., v%} denote the respective word embeddings for each token, where L is the sentence/document length; v The compositional function, X → z, aims to combine word embeddings into a fixed-length sentence/document representation z. Typically, LSTM or CNN are employed for this purpose;",
"title": ""
}
] |
scidocsrr
|
6dc25cce5e69a89a3b8e06723b61693b
|
Predictive translation memory: a mixed-initiative system for human language translation
|
[
{
"docid": "90fc941f6db85dd24b47fa06dd0bb0aa",
"text": "Recent debate has centered on the relative promise of focusinguser-interface research on developing new metaphors and tools thatenhance users abilities to directly manipulate objects versusdirecting effort toward developing interface agents that provideautomation. In this paper, we review principles that show promisefor allowing engineers to enhance human-computer interactionthrough an elegant coupling of automated services with directmanipulation. Key ideas will be highlighted in terms of the Lookoutsystem for scheduling and meeting management.",
"title": ""
}
] |
[
{
"docid": "d2d4b51e3d7d0172946140dacad82db8",
"text": "The integration of supply chains offers many benefits; yet, it may also render organisations more vulnerable to electronic fraud (e-fraud). E-fraud can drain on organisations’ financial resources, and can have a significant adverse effect on the ability to achieve their strategic objectives. Therefore, efraud control should be part of corporate board-level due diligence, and should be integrated into organisations’ practices and business plans. Management is responsible for taking into consideration the relevant cultural, strategic and implementation elements that inter-relate with each other and to coordinating the human, technological and financial resources necessary to designing and implementing policies and procedures for controlling e-fraud. Due to the characteristics of integrated supply chains, a move from the traditional vertical approach to a systemic, horizontal-vertical approach is necessary. Although the e-fraud risk cannot be eliminated, risk mitigation policies and processes tailored to an organisation’s particular vulnerabilities can significantly reduce the risk and may even preclude certain classes of frauds. In this paper, a conceptual framework of e-fraud control in an integrated supply chain is proposed. The proposed conceptual framework can help managers and practitioners better understand the issues and plan the activities involved in a systemic, horizontal-vertical approach to e-fraud control in an integrated supply chain, and can be a basis upon which empirical studies can be build.",
"title": ""
},
{
"docid": "0a31ab53b887cf231d7ca1a286763e5f",
"text": "Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottomup feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.",
"title": ""
},
{
"docid": "79cdd24d14816f45b539f31606a3d5ee",
"text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.",
"title": ""
},
{
"docid": "4acc30bade98c1257ab0a904f3695f3d",
"text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of a reverse parking assistance and more precisely, a reverse parking manoeuvre planner. This paper is based on a manoeuvre planning technique presented in previous work and specialised in planning reverse parking manoeuvre. Since a key part of the previous method was not explicited, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that will make the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.",
"title": ""
},
{
"docid": "045a56e333b1fe78677b8f4cc4c20ecc",
"text": "Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions.",
"title": ""
},
{
"docid": "c798c5c19dddb968f15f7bc7734ac2e4",
"text": "Information extraction relevant to the user queries is the challenging task in the ontology environment due to data varieties such as image, video, and text. The utilization of appropriate semantic entities enables the content-based search on annotated text. Recently, the automatic extraction of textual content in the audio-visual content is an advanced research area in a multimedia (MM) environment. The annotation of the video includes several tags and comments. This paper proposes the Collaborative Tagging (CT) model based on the Block Acquiring Page Segmentation (BAPS) method to retrieve the tag-based information. The information extraction in this model includes the Ontology-Based Information Extraction (OBIE) based on the single ontology utilization. The semantic annotation phase in the proposed work inserts the metadata with limited machine-readable terms. The insertion process is split into two major processes such as database uploading to server and extraction of images/web pages based on the results of semantic phase. Novel weight-based novel clustering algorithms are introduced to extract knowledge from MM contents. The ranking based on the weight value in the semantic annotation phase supports the image/web page retrieval process effectively. The comparative analysis of the proposed BAPS-CT with the existing information retrieval (IR) models regarding the average precision rate, time cost, and storage space rate assures the effectiveness of BAPS-CT in OMIR.",
"title": ""
},
{
"docid": "87835d75704f493639744abbf0119bdb",
"text": "Developers of cloud-scale applications face a difficult decision of which kind of storage to use, summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition, and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (developer friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure is able to achieve scalability similar to eventually-consistent NoSQL databases, while providing stronger guarantees.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "259972cd20a1f763b07bef4619dc7f70",
"text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.",
"title": ""
},
{
"docid": "4161b52b832c0b80d0815b9e80a5dda0",
"text": "Machine Comprehension (MC) is a challenging task in Natural Language Processing field, which aims to guide the machine to comprehend a passage and answer the given question. Many existing approaches on MC task are suffering the inefficiency in some bottlenecks, such as insufficient lexical understanding, complex question-passage interaction, incorrect answer extraction and so on. In this paper, we address these problems from the viewpoint of how humans deal with reading tests in a scientific way. Specifically, we first propose a novel lexical gating mechanism to dynamically combine the words and characters representations. We then guide the machines to read in an interactive way with attention mechanism and memory network. Finally we add a checking layer to refine the answer for insurance. The extensive experiments on two popular datasets SQuAD and TriviaQA show that our method exceeds considerable performance than most stateof-the-art solutions at the time of submission.",
"title": ""
},
{
"docid": "abbafaaf6a93e2a49a692690d4107c9a",
"text": "Virtual teams have become a ubiquitous form of organizing, but the impact of social structures within and between teams on group performance remains understudied. This paper uses the case study of a massively multiplayer online game and server log data from over 10,000 players to examine the connection between group social capital (operationalized through guild network structure measures) and team effectiveness, given a variety of in-game social networks. Three different networks, social, task, and exchange networks, are compared and contrasted while controlling for group size, group age, and player experience. Team effectiveness is maximized at a roughly moderate level of closure across the networks, suggesting that this is the optimal level of the groupâs network density. Guilds with high brokerage, meaning they have diverse connections with other groups, were more effective in achievement-oriented networks. In addition, guilds with central leaders were more effective when they teamed up with other guild leaders.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "8dc3ba4784ea55183e96b466937d050b",
"text": "One of the major problems that clinical neuropsychology has had in memory clinics is to apply ecological, easily administrable and sensitive tests that can make the diagnosis of dementia both precocious and reliable. Often the choice of the best neuropsychological test is hard because of a number of variables that can influence a subject’s performance. In this regard, tests originally devised to investigate cognitive functions in healthy adults are not often appropriate to analyze cognitive performance in old subjects with low education because of their intrinsically complex nature. In the present paper, we present normative values for the Rey–Osterrieth Complex Figure B Test (ROCF-B) a simple test that explores constructional praxis and visuospatial memory. We collected normative data of copy, immediate and delayed recall of the ROCF-B in a group of 346 normal Italian subjects above 40 years. A multiple regression analysis was performed to evaluate the potential effect of age, sex, and education on the three tasks administered to the subjects. Age and education had a significant effect on copying, immediate recall, and delayed recall as well as on the rate of forgetting. Correction grids and equivalent scores with cut-off values relative to each task are available. The availability of normative values can make the ROCF-B a valid instrument to assess non-verbal memory in adults and in the elderly for whom the commonly used ROCF-A is too demanding.",
"title": ""
},
{
"docid": "65eb604a2d45f29923ba24976130adc1",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
{
"docid": "5dec0745ee631ec4ffbed6402093e35b",
"text": "BACKGROUND\nAdolescent breast hypertrophy can have long-term negative medical and psychological impacts. In select patients, breast reduction surgery is the best treatment. Unfortunately, many in the general and medical communities hold certain misconceptions regarding the indications and timing of this procedure. Several etiologies of adolescent breast hypertrophy, including juvenile gigantomastia, adolescent macromastia, and obesity-related breast hypertrophy, complicate the issue. It is our hope that this paper will clarify these misconceptions through a combined retrospective and literature review.\n\n\nMETHODS\nA retrospective review was conducted looking at adolescent females (≤18 years old) who had undergone bilateral breast reduction surgery. Their preoperative comorbidities, BMI, reduction volume, postoperative complications, and subjective satisfaction were recorded. In addition, a literature review was completed.\n\n\nRESULTS\n34 patients underwent bilateral breast reduction surgery. The average BMI was 29.5 kg/m(2). The average volume resected during bilateral breast reductions was 1820.9 g. Postoperative complications include dehiscence (9%), infection (3%), and poor scarring (6%). There were no cases of recurrence or need for repeat operation. Self-reported patient satisfaction was 97%. All patients described significant improvements in self body-image and participation in social activities. The literature review yielded 25 relevant reported articles, 24 of which are case studies.\n\n\nCONCLUSION\nReduction mammaplasty is safe and effective. It is the preferred treatment method for breast hypertrophy in the adolescent female and may be the only way to alleviate the increased social, psychological, and physical strain caused by this condition.",
"title": ""
},
{
"docid": "353fae3edb830aa86db682f28f64fd90",
"text": "The penetration of renewable resources in power system has been increasing in recent years. Many of these resources are uncontrollable and variable in nature, wind in particular, are relatively unpredictable. At high penetration levels, volatility of wind power production could cause problems for power system to maintain system security and reliability. One of the solutions being proposed to improve reliability and performance of the system is to integrate energy storage devices into the network. In this paper, unit commitment and dispatch schedule in power system with and without energy storage is examined for different level of wind penetration. Battery energy storage (BES) is considered as an alternative solution to store energy. The SCUC formulation and solution technique with wind power and BES is presented. The proposed formulation and model is validated with eight-bus system case study. Further, a discussion on the role of BES on locational pricing, economic, peak load shaving, and transmission congestion management had been made.",
"title": ""
},
{
"docid": "260e574e9108e05b98df7e4ed489e5fc",
"text": "Why are we not living yet with robots? If robots are not common everyday objects, it is maybe because we have looked for robotic applications without considering with sufficient attention what could be the experience of interacting with a robot. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond science-fiction classical archetypes, the picture emerging from this analysis is the one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various context of use. To become everyday objects, robots will not necessary have to be useful, but they will have to be at the origins of radically new forms of experiences.",
"title": ""
},
{
"docid": "60ff841b0b13442c2afd5dd73178145a",
"text": "Detecting inferences in documents is critical for ensuring privacy when sharing information. In this paper, we propose a refined and practical model of inference detection using a reference corpus. Our model is inspired by association rule mining: inferences are based on word co-occurrences. Using the model and taking the Web as the reference corpus, we can find inferences and measure their strength through web-mining algorithms that leverage search engines such as Google or Yahoo!.\n Our model also includes the important case of private corpora, to model inference detection in enterprise settings in which there is a large private document repository. We find inferences in private corpora by using analogues of our Web-mining algorithms, relying on an index for the corpus rather than a Web search engine.\n We present results from two experiments. The first experiment demonstrates the performance of our techniques in identifying all the keywords that allow for inference of a particular topic (e.g. \"HIV\") with confidence above a certain threshold. The second experiment uses the public Enron e-mail dataset. We postulate a sensitive topic and use the Enron corpus and the Web together to find inferences for the topic.\n These experiments demonstrate that our techniques are practical, and that our model of inference based on word co-occurrence is well-suited to efficient inference detection.",
"title": ""
},
{
"docid": "f82a49434548e1aa09792877d84b296c",
"text": "Rats and mice have a tendency to interact more with a novel object than with a familiar object. This tendency has been used by behavioral pharmacologists and neuroscientists to study learning and memory. A popular protocol for such research is the object-recognition task. Animals are first placed in an apparatus and allowed to explore an object. After a prescribed interval, the animal is returned to the apparatus, which now contains the familiar object and a novel object. Object recognition is distinguished by more time spent interacting with the novel object. Although the exact processes that underlie this 'recognition memory' requires further elucidation, this method has been used to study mutant mice, aging deficits, early developmental influences, nootropic manipulations, teratological drug exposure and novelty seeking.",
"title": ""
},
{
"docid": "b42cd71b23c933f7b07d270edc1ce53b",
"text": "We propose a modification of the cost function of the Hopfield model whose salient features shine in its Taylor expansion and result in more than pairwise interactions with alternate signs, suggesting a unified framework for handling both with deep learning and network pruning. In our analysis, we heavily rely on the Hamilton-Jacobi correspondence relating the statistical model with a mechanical system. In this picture, our model is nothing but the relativistic extension of the original Hopfield model (whose cost function is a quadratic form in the Mattis magnetization which mimics the non-relativistic Hamiltonian for a free particle). We focus on the low-storage regime and solve the model analytically by taking advantage of the mechanical analogy, thus obtaining a complete characterization of the free energy and the associated self-consistency equations in the thermodynamic limit. On the numerical side, we test the performances of our proposal with MC simulations, showing that the stability of spurious states (limiting the capabilities of the standard Hebbian construction) is sensibly reduced due to presence of unlearning contributions in this extended framework.",
"title": ""
}
] |
scidocsrr
|
db658cff15310e17e94b84f6f173b880
|
The Impact of Individual , Competitive , and Collaborative Mathematics Game Play on Learning , Performance , and Motivation
|
[
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "3b17e2f76f3a3b287423a2d6f4e47125",
"text": "Computer and video games are a maturing medium and industry and have caught the attention of scholars across a variety of disciplines. By and large, computer and video games have been ignored by educators. When educators have discussed games, they have focused on the social consequences of game play, ignoring important educational potentials of gaming. This paper examines the history of games in educational research, and argues that the cognitive potential of games have been largely ignored by educators. Contemporary developments in gaming, particularly interactive stories, digital authoring tools, and collaborative worlds, suggest powerful new opportunities for educational media. VIDEO GAMES IN AMERICAN CULTURE Now just over thirty years old, video games have quickly become one of the most pervasive, profitable, and influential forms of entertainment in the United States and across the world. In 2001, computer and console game software and hardware exceeded $6.35 billion in the United States, and an estimated $19 billion worldwide (IDSA 2002). To contextualize these figures, in October 23, 2001, the Sony PlayStation system debuted in the US, netting well over $150 million in twenty-four hours, over six times the opening day revenues of Star Wars: The Phantom Menace, which netted $25 million. Twenty-five million Americans or, one out of every four households, owns a Sony Playstation (Sony Corporate website 2000). Not only are video games a powerful force not only in the entertainment and economic sector, but in the American cultural landscape, as well. 1 There may be distinctions between the technical features and cultural significance of computer and video games that are worth exploring when discussing games in education, but for the purposes of this paper, they will both be treated as “video games” to simplify matters. Nintendo’s Pokemon, which, like Pac-Man and The Mario Brothers, before it, has evolved from a video game into a cultural phenomena. In the past few years, Pokemon has spun off a television show, a full feature film, a line of toys, and a series of trading cards, making these little creatures giants in youth culture. Given the pervasive influence of video games on American culture, many educators have taken an interest in what the effects these games have on players, and how some of the motivating aspects of video games might be harnessed to facilitate learning. Other educators fear that video games might foster violence, aggression, negative imagery of women, or social isolation (Provenzo 1991). Other educators see video games as powerfully motivating digital environments and study video games in order to determine how motivational components of popular video games might be integrated into instructional design (Bowman 1982; Bracey 1992; Driskell & Dwyer 1984). Conducted during the age of Nintendo, these studies are few in number and somewhat outdated, given recent advancements in game theory and game design. These studies also tend to focus on deriving principles from traditional action (or “twitch”) games, missing important design knowledge embodied in adventure, sports, strategy, puzzle, or role-playing games (RPGs), as well as hybrid games which combine multiple genres (Appleman & Goldsworthy 1999; Saltzman 1999). Likewise, they fail to consider the social contexts of gaming and more recent developments in gaming, such as the Internet. In this paper, I argue that video games are such a popular and influential medium for a combination of many factors. 
Primarily, however, video games elicit powerful emotional reactions in their players, such as fear, power, aggression, wonder, or joy. Video game designers create these emotions by a balancing a number of game components, such as character traits, game rewards, obstacles, game narrative, competition with other humans, and opportunities for collaboration with other players. Understanding the dynamics behind these design considerations might be useful for instructional technologists who design interactive digital learning environments. Further, video game playing occurs in rich socio-cultural contexts, bringing friends and family together, serving as an outlet for adolescents, and providing the “raw material” for youth culture. Finally, video game research reveals many patterns in how humans interact with technology that become increasingly important to instructional technologists as they become designers of digital environments. Through studying video games, instructional technologists can better understand the impact of technology on individuals and communities, how to support digital environments by situating them in rich social contexts. LEARNERS AS “PAC-MAN” PLAYERS: USING VIDEO GAMES TO UNDERSTAND ENGAGEMENT Since the widespread popularity of PacMan in the early 1980s, some educators have wondered if “the magic of ‘Pac-Man‘cannot be bottled and unleashed in the classroom to enhance student involvement, enjoyment, and commitment” (Bowman 1982, p. 14). A few educators have undertaken this project, defining elements of game design that might be used to make learning environments more engaging (Bowman 1982; Bracey 1992; Driskell & Dwyer 1984; Malone 1981). Through a series of observations, surveys, and interviews, Malone (1981) generated three main elements that “Make video games fun”: Challenge, fantasy, and curiosity. Malone uses these concepts to outline several guidelines for creating enjoyable education programs. Malone (1981) argues that educational programs should have: • clear goals that students find meaningful, • multiple goal structures and scoring to give students feedback on their progress, • multiple difficulty levels to adjust the game difficulty to learner skill, • random elements of surprise, • an emotionally appealing fantasy and metaphor that is related to game skills. In a case study of Super Mario Brothers 2, Provenzo (1991) finds this framework very powerful in explaining why Super Mario Brothers 2 has become one of the most successful video games of all time. Bowman’s checklist provides educators an excellent starting point for understanding game design and analyzing educational games, but at best, it only suggests an underlying theoretical model of why",
"title": ""
}
] |
[
{
"docid": "487c011cb0701b4b909dedca2d128fe6",
"text": "It is necessary and essential to discovery protein function from the novel primary sequences. Wet lab experimental procedures are not only time-consuming, but also costly, so predicting protein structure and function reliably based only on amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA binding protein, which plays a key role in the transcription regulation. Our study proposed an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently. This method would guide for the special protein identification with computational intelligence strategies. Firstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical features dimensionality reduction strategies were employed to improve the performance furthermore. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods. The experiments demonstrate that our method could greatly improve the prediction accuracy and speed, thus allowing large-scale NGS data prediction to be practical. A web server is developed to facilitate the other researchers, which can be accessed at http://server.malab.cn/preTata/ .",
"title": ""
},
{
"docid": "e8b6812dc4e12557c42a17ce1383778b",
"text": "China is the world’s most populous country and a major emitter of greenhouse gases. Consequently, much research has focused on China’s influence on climate change but somewhat less has been written about the impact of climate change on China. China experienced explosive economic growth in recent decades, but with only 7% of the world’s arable land available to feed 22% of the world’s population, China's economy may be vulnerable to climate change itself. We find, however, that notwithstanding the clear warming that has occurred in China in recent decades, current understanding does not allow a clear assessment of the impact of anthropogenic climate change on China’s water resources and agriculture and therefore China’s ability to feed its people. To reach a more definitive conclusion, future work must improve regional climate simulations—especially of precipitation—and develop a better understanding of the managed and unmanaged responses of crops to changes in climate, diseases, pests and atmospheric constituents.",
"title": ""
},
{
"docid": "23aace507b4419d92020aa793bd6e62f",
"text": "State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose modelling the statistical relationship of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose or into a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth map. To prevent over-fitting and to better exploit unlabeled depth maps, the generator and discriminator are trained jointly. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized samples and unlabeled depth maps. The proposed discriminator network architecture is highly efficient and runs at 90fps on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks.",
"title": ""
},
{
"docid": "209890a949e8d4aac6e730bc8e7c1793",
"text": "Spanish-English bilinguals and English monolinguals completed 12 semantic, 10 letter, and 2 proper name fluency categories. Bilinguals produced fewer exemplars than monolinguals on all category types, but the difference between groups was larger (and more consistent) on semantic categories. Bilinguals and monolinguals produced the same number of errors across all category types. The authors discuss 2 accounts of the similarities and differences between groups and the interaction with category type, including (a) cross-language interference and (b) relatively weak connections in the bilingual lexical system because of reduced use of words specific to each language. Surprisingly, bilinguals' fluency scores did not improve when they used words in both languages. This result suggests that voluntary language switching incurs a processing cost.",
"title": ""
},
{
"docid": "106ec8b5c3f5bff145be2bbadeeafe68",
"text": "Objective: To provide a parsimonious clustering pipeline that provides comparable performance to deep learning-based clustering methods, but without using deep learning algorithms, such as autoencoders. Materials and methods: Clustering was performed on six benchmark datasets, consisting of five image datasets used in object, face, digit recognition tasks (COIL20, COIL100, CMU-PIE, USPS, and MNIST) and one text document dataset (REUTERS-10K) used in topic recognition. K-means, spectral clustering, Graph Regularized Non-negative Matrix Factorization, and K-means with principal components analysis algorithms were used for clustering. For each clustering algorithm, blind source separation (BSS) using Independent Component Analysis (ICA) was applied. Unsupervised feature learning (UFL) using reconstruction cost ICA (RICA) and sparse filtering (SFT) was also performed for feature extraction prior to the cluster algorithms. Clustering performance was assessed using the normalized mutual information and unsupervised clustering accuracy metrics. Results: Performing, ICA BSS after the initial matrix factorization step provided the maximum clustering performance in four out of six datasets (COIL100, CMU-PIE, MNIST, and REUTERS-10K). Applying UFL as an initial processing component helped to provide the maximum performance in three out of six datasets (USPS, COIL20, and COIL100). Compared to state-of-the-art non-deep learning clustering methods, ICA BSS and/ or UFL with graph-based clustering algorithms outperformed all other methods. With respect to deep learning-based clustering algorithms, the new methodology presented here obtained the following rankings: COIL20, 2nd out of 5; COIL100, 2nd out of 5; CMU-PIE, 2nd out of 5; USPS, 3rd out of 9; MNIST, 8th out of 15; and REUTERS-10K, 4th out of 5. Discussion: By using only ICA BSS and UFL using RICA and SFT, clustering accuracy that is better or on par with many deep learning-based clustering algorithms was achieved. For instance, by applying ICA BSS to spectral clustering on the MNIST dataset, we obtained an accuracy of 0.882. This is better than the well-known Deep Embedded Clustering algorithm that had obtained an accuracy of 0.818 using stacked denoising autoencoders in its model. Open Access © The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. RESEARCH Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 https://doi.org/10.1186/s13673-018-0148-3 *Correspondence: [email protected] Department of Electrical and Computer Engineering, University of Ontario Institute of Technology, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada Page 2 of 19 Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 Conclusion: Using the new clustering pipeline presented here, effective clustering performance can be obtained without employing deep clustering algorithms and their accompanying hyper-parameter tuning procedure.",
"title": ""
},
{
"docid": "ff1cc31ab089d5d1d09002866c7dc043",
"text": "In almost every scientific field, measurements are performed over time. These observations lead to a collection of organized data called time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases, time-series task relies on the same components for implementation, we divide the literature depending on these common aspects, namely representation techniques, distance measures, and indexing methods. The study of the relevant literature has been categorized for each individual aspects. Four types of robustness could then be formalized and any kind of distance could then be classified. Finally, the study submits various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field.",
"title": ""
},
{
"docid": "fe145a6dcbad2644d6a5c9720fc6c268",
"text": "Human papillomaviruses (HPVs) are well known for being linked to the development of cervical cancers, most of them being caused by the high-risk (HR) oncogenic genotypes, mainly 16 and 18. The efficacy of 2LPAPI® (Labo’Life), a micro-immunotherapy homeopathic drug, has been evaluated in HR-HPV infected women (n = 18), in a private gynecology practice, by comparing them to an untreated control group (n = 18). Patients were 20 to 45 years old and had cytology with Atypical Squamous Cells of Undetermined Significance (ASCUS) or Low grade Superficial Intra Lesions/ Cervical Intraepithelial Neoplasia Grade I (LSIL/CINI). Patients freely chose to be treated with the drug or not. Those deciding not to take the drug remained untreated and were followed as a control group. The drug was taken at the regimen of one capsule per day during 6 months. HR-HPV and cytology were evaluated at 6 and 12 months. After 12 months, HR-HPV was cleared in 78% of the patients taking the drug versus 44% in those not taking it (p = 0.086). In patients over 25 years, HR-HPV clearance in the treated group was significantly higher (81.3%) than in the control group (20%) (p = 0.004). The difference in the regression of the lesion grades almost reached statistical significance (p = 0.053). This follow-up confirms that the micro-immunotherapy drug 2LPAPI® is a safe and effective therapeutic approach to treat HR-HPV cervical lesions in women over 25 years.",
"title": ""
},
{
"docid": "38fe414175262260f705ce06bbfc1bc8",
"text": "Augmented reality, in which virtual content is seamlessly integrated with displays of real-world scenes, is a growing area of interactive design. With the rise of personal mobile devices capable of producing interesting augmented reality environments, the vast potential of AR has begun to be explored. This paper surveys the current state-of-the-art in augmented reality. It describes work performed in different application domains and explains the exiting issues encountered when building augmented reality applications considering the ergonomic and technical limitations of mobile devices. Future directions and areas requiring further research are introduced and discussed.",
"title": ""
},
{
"docid": "8b416a37b319153eca38105c6de3fd2a",
"text": "UNSUPERVISED ANOMALY DETECTION IN SEQUENCES USING LONG SHORT TERM MEMORY RECURRENT NEURAL NETWORKS Majid S. alDosari George Mason University, 2016 Thesis Director: Dr. Kirk D. Borne Long Short Term Memory (LSTM) recurrent neural networks (RNNs) are evaluated for their potential to generically detect anomalies in sequences. First, anomaly detection techniques are surveyed at a high level so that their shortcomings are exposed. The shortcomings are mainly their inflexibility in the use of a context ‘window’ size and/or their suboptimal performance in handling sequences. Furthermore, high-performing techniques for sequences are usually associated with their respective knowledge domains. After discussing these shortcomings, RNNs are exposed mathematically as generic sequence modelers that can handle sequences of arbitrary length. From there, results from experiments using RNNs show their ability to detect anomalies in a set of test sequences. The test sequences had different types of anomalies and unique normal behavior. Given the characteristics of the test data, it was concluded that the RNNs were not only able to generically distinguish rare values in the data (out of context) but were also able to generically distinguish abnormal patterns (in context). In addition to the anomaly detection work, a solution for reproducing computational research is described. The solution addresses reproducing compute applications based on Docker container technology as well as automating the infrastructure that runs the applications. By design, the solution allows the researcher to seamlessly transition from local (test) application execution to remote (production) execution because little distinction is made between local and remote execution. Such flexibility and automation allows the researcher to be more confident of results and more productive, especially when dealing with multiple machines. Chapter 1: Introduction In the modern world, large amounts of time series data of various types are recorded. Inexpensive and compact instrumentation and storage allows various types of processes to be recorded. For example, human activity being recorded includes physiological signals, automotive traffic, website navigation activity, and communication network traffic. Other kinds of data are captured from instrumentation in industrial processes, automobiles, space probes, telescopes, geological formations, oceans, power lines, and residential thermostats. Furthermore, the data can be machine generated for diagnostic purposes such as web server logs, system startup logs, and satellite status logs. Increasingly, these data are being analyzed. Inexpensive and ubiquitous networking has allowed the data to be transmitted for processing. At the same time, ubiquitous computing has allowed the data to be processed at the location of capture. While the data can be recorded for historical purposes, much value can be obtained from finding anomalous data. However, it is challenging to manually analyze large and varied quantities of data to find anomalies. Even if a procedure can be developed for one type of data, it usually cannot be applied to another type of data. Hence, the problem that is addressed can be stated as follows: find anomalous points in an arbitrary (unlabeled) sequence. So, a solution must use the same procedure to analyze different types of time series data. The solution presented here comes from an unsupervised use of recurrent neural networks. 
A literature search only readily gives two similar solutions. In the acoustics domain, [1] transform audio signals into a sequence of spectral features which are then input to a denoising recurrent autoencoder. Improving on this, [2] use recurrent neural networks (directly) without the use of features (that are specific to a problem domain, like acoustics) to multiple domains. This work closely resembles [2] but presenting a single, highly-automated procedure that applies to many domains is emphasized. First, some background is given on anomaly detection that explains the challenges of finding a solution. Second, recurrent neural networks are introduced as general sequence modelers. Then, experiments will be presented to show that recurrent neural networks can find different types of anomalies in multiple domains. Finally, concluding remarks are given. (In this document, the terms ‘time series' and ‘sequence' are used interchangeably without implication to the discussion. Strictly however, a time series is a sequence of time-indexed elements. So a sequence is the more general object. As such, the term ‘sequence' is used when a general context is more applicable. Furthermore, the terms do not imply that the data are real, discrete, or symbolic. However, literature frequently uses the terms ‘time series' and ‘sequence' for real and symbolic data respectively. Here, the term ‘time series' was used to emphasize that much data is recorded from monitoring devices which implies that a timestamp is associated with each data point. Outlier, surprise, novelty, and deviation detection are alternative names used in literature.) Chapter 2: The Challenge of Anomaly Detection in Sequences",
"title": ""
},
{
"docid": "ac6ce191c14b48695c82f3d230264777",
"text": "We introduce a kernel-based method for change-point analys is within a sequence of temporal observations. Change-point analysis of an unla belled sample of observations consists in, first, testing whether a change in the di stribution occurs within the sample, and second, if a change occurs, estimating the ch ange-point instant after which the distribution of the observations switches f rom one distribution to another different distribution. We propose a test statisti c based upon the maximum kernel Fisher discriminant ratio as a measure of homogeneit y b tween segments. We derive its limiting distribution under the null hypothes is (no change occurs), and establish the consistency under the alternative hypoth esis (a change occurs). This allows to build a statistical hypothesis testing proce dur for testing the presence of a change-point, with a prescribed false-alarm proba bility and detection probability tending to one in the large-sample setting. If a ch nge actually occurs, the test statistic also yields an estimator of the change-po int location. Promising experimental results in temporal segmentation of mental ta sks from BCI data and pop song indexation are presented.",
"title": ""
},
{
"docid": "49e5f9e36efb6b295868a307c1486c60",
"text": "This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and degree of validation that has been done in different clinical domains. Then, we present a classification of methodology in terms of use of prior information. We conclude by selecting ten papers which have presented original ideas that have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem",
"title": ""
},
{
"docid": "bee4b2dfab47848e8429d4b4617ec9e5",
"text": "Benefit from the quick development of deep learning techniques, salient object detection has achieved remarkable progresses recently. However, there still exists following two major challenges that hinder its application in embedded devices, low resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keep accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, and with advantages in terms of simplicity, efficiency (45 FPS) and model size (81 MB).",
"title": ""
},
{
"docid": "a02a53a7fe03bc687d841e67ee08f641",
"text": "Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.",
"title": ""
},
{
"docid": "6646b66370ed02eb84661c8505eb7563",
"text": "Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction, by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely, structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual siamese framework. This spares expensive data preparation (pairing images across cameras) and allows the understanding of what the network learned. Second, and most notably, the training data consists of a synthetic 100K instance dataset, SOMAset, created by photorealistic human body generation software. Synthetic data represents a good compromise between realistic imagery, usually not required in re-identification since surveillance cameras capture low-resolution silhouettes, and complete control of the samples, which is useful in order to customize the data w.r.t. the surveillance scenario at-hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, outperforms all competitors, matching subjects even with different apparel. The combination of synthetic data with Inception architectures opens up new research avenues in re-identification.",
"title": ""
},
{
"docid": "8c89db0cd8c5dc666d7d6b244d35326b",
"text": "Cervical cancer, as the fourth most common cause of death from cancer among women, has no symptoms in the early stage. There are few methods to diagnose cervical cancer precisely at present. Support vector machine (SVM) approach is introduced in this paper for cervical cancer diagnosis. Two improved SVM methods, support vector machine-recursive feature elimination and support vector machine-principal component analysis (SVM-PCA), are further proposed to diagnose the malignant cancer samples. The cervical cancer data are represented by 32 risk factors and 4 target variables: Hinselmann, Schiller, Cytology, and Biopsy. All four targets have been diagnosed and classified by the three SVM-based approaches, respectively. Subsequently, we make the comparison among these three methods and compare our ranking result of risk factors with the ground truth. It is shown that SVM-PCA method is superior to the others.",
"title": ""
},
{
"docid": "cef1270ff3e263d2becf551288b08efe",
"text": "Sentiment Analysis has become a significant research matter for its probable in tapping into the vast amount of opinions generated by the people. Sentiment analysis deals with the computational conduct of opinion, sentiment within the text. People sometimes uses sarcastic text to express their opinion within the text. Sarcasm is a type of communication act in which the people write the contradictory of what they mean in reality. The intrinsically vague nature of sarcasm sometimes makes it hard to understand. Recognizing sarcasm can promote many sentiment analysis applications. Automatic detecting sarcasm is an approach for predicting sarcasm in text. In this paper we have tried to talk of the past work that has been done for detecting sarcasm in the text. This paper talk of approaches, features, datasets, and issues associated with sarcasm detection. Performance values associated with the past work also has been discussed. Various tables that present different dimension of past work like dataset used, features, approaches, performance values has also been discussed.",
"title": ""
},
{
"docid": "acd4de9f6324cc9d3fd9560094c71542",
"text": "Similarity search is one of the fundamental problems for large scale multimedia applications. Hashing techniques, as one popular strategy, have been intensively investigated owing to the speed and memory efficiency. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised methods learn hashing function by treating each training example equally while ignoring the different semantic degree related to the label, i.e. semantic confidence, of different examples. In this paper, we propose a novel semi-supervised hashing framework by leveraging semantic confidence. Specifically, a confidence factor is first assigned to each example by neighbor voting and click count in the scenarios with label and click-through data, respectively. Then, the factor is incorporated into the pairwise and triplet relationship learning for hashing. Furthermore, the two learnt relationships are seamlessly encoded into semi-supervised hashing methods with pairwise and listwise supervision respectively, which are formulated as minimizing empirical error on the labeled data while maximizing the variance of hash bits or minimizing quantization loss over both the labeled and unlabeled data. In addition, the kernelized variant of semi-supervised hashing is also presented. We have conducted experiments on both CIFAR-10 (with label) and Clickture (with click data) image benchmarks (up to one million image examples), demonstrating that our approaches outperform the state-of-the-art hashing techniques.",
"title": ""
},
{
"docid": "8c3f6fcda9965a4dab3936b913c2fe14",
"text": "Automatic Number Plate Recognition (ANPR) became a very important tool in our daily life because of the unlimited increase of cars and transportation systems, which make it impossible to be fully managed and monitored by humans. Examples are so many, like traffic monitoring, tracking stolen cars, managing parking toll, red-light violation enforcement, border and customs checkpoints. Yet, it’s a very challenging problem, due to the diversity of plate formats, different scales, rotations and non-uniform illumination conditions during image acquisition. The objective of this paper is to provide a novel algorithm for license plate recognition in complex scenes, particularly for the all-day traffic surveillance environment. This is achieved using mathematical morphology and artificial neural network (ANN). A preprocessing step is applied to improve the performance of license plate localization and character segmentation in case of severe imaging conditions. The first and second stages utilize edge detection and mathematical morphology followed by connected component analysis. ANN is employed in the last stage to construct a classifier to categorize the input numbers of the license plate. The algorithm has been applied on 102 car images with different backgrounds, license plate angles, distances, lightening conditions, and colors. The average accuracy of the license plate localization is 97.06%, 95.10% for license plate segmentation, and 94.12% for character recognition. The experimental results show the outstanding detection performance of the proposed method comparing with traditional algorithms.",
"title": ""
},
{
"docid": "a00d2d9dde3f767ce6b7308a9cdd8f03",
"text": "Using an improved method of gel electrophoresis, many hitherto unknown proteins have been found in bacteriophage T4 and some of these have been identified with specific gene products. Four major components of the head are cleaved during the process of assembly, apparently after the precursor proteins have assembled into some large intermediate structure.",
"title": ""
},
{
"docid": "37cfea7e4395aa2df109d2ce024b1bd5",
"text": "We develop and extend social capital theory by exploring the creation of organizational social capital within a highly pervasive, yet often overlooked organizational form: family firms. We argue that family firms are unique in that, although they work as a single entity, at least two forms of social capital coexist: the family’s and the firm’s. We investigate mechanisms that link a family’s social capital to the creation of the family firm’s social capital and examine how factors underlying the family’s social capital affect this creation. Moreover, we identify contingency dimensions that affect these relationships and the potential risks associated with family social capital. Finally, we suggest these insights are generalizable to several other types of organizations with similar characteristics.",
"title": ""
}
] |
scidocsrr
|
294a04f9ad01b5739b6a2baa07f59c3a
|
Research Directions on Semantic Web and Education
|
[
{
"docid": "49f68a9534a602074066948a13164ad4",
"text": "Recent developments in Web technologies and using AI techniques to support efforts in making the Web more intelligent and provide higher-level services to its users have opened the door to building the Semantic Web. That fact has a number of important implications for Web-based education, since Web-based education has become a very important branch of educational technology. Classroom independence and platform independence of Web-based education, availability of authoring tools for developing Web-based courseware, cheap and efficient storage and distribution of course materials, hyperlinks to suggested readings, digital libraries, and other sources of references relevant for the course are but a few of a number of clear advantages of Web-based education. However, there are several challenges in improving Web-based education, such as providing for more adaptivity and intelligence. Developments in the Semantic Web, while contributing to the solution to these problems, also raise new issues that must be considered if we are to progress. This paper surveys the basics of the Semantic Web and discusses its importance in future Web-based educational applications. Instead of trying to rebuild some aspects of a human brain, we are going to build a brain of and for humankind. D. Fensel and M.A. Musen (Fensel & Musen, 2001)",
"title": ""
}
] |
[
{
"docid": "eb2d29417686cc86a45c33694688801f",
"text": "We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.",
"title": ""
},
{
"docid": "552d253f8cce654dd5ea289ab9520a4c",
"text": "Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online's data policy on reuse of materials please consult the policies page. This paper is a systematic review of the literature on organizational learning and knowledge with relevance to public service organizations. Organizational learning and knowledge are important to public sector organizations, which share complex external challenges with private organizations, but have different drivers and goals for knowledge. The evidence shows that the concepts of organizational learning and knowledge are under-researched in relation to the public sector and, importantly, this raises wider questions about the extent to which context is taken into consideration in terms of learning and knowledge more generally across all sectors. A dynamic model of organizational learning within and across organizational boundaries is developed that depends on four sets of factors: features of the source organization; features of the recipient organization; the characteristics of the relationship between organizations; and the environmental context. The review concludes, first, that defining 'organization' is an important element of understanding organizational learning and knowledge. Second, public organizations constitute an important, distinctive context for the study of organizational learning and knowledge. Third, there continues to be an over-reliance on the private sector as the principal source of theoretical understanding and empirical research and this is conceptually limiting for the understanding of organizational learning and knowledge. Fourth, differences as well as similarities between organizational sectors require conceptualization and research that acknowledge sector-specific aims, values and structures. Finally, it is concluded that frameworks for explaining processes of organizational learning at different levels need to be sufficiently dynamic and complex to accommodate public organizations.",
"title": ""
},
{
"docid": "6b4a4e5271f5a33d3f30053fc6c1a4ff",
"text": "Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academia and practitioners. This attention is evident by the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past, and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistic and closed-loop supply chain in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and to suggest future research opportunities. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c8977fe68b265b735ad4261f5fe1ec25",
"text": "We present ACQUINE - Aesthetic Quality Inference Engine, a publicly accessible system which allows users to upload their photographs and have them rated automatically for aesthetic quality. The system integrates a support vector machine based classifier which extracts visual features on the fly and performs real-time classification and prediction. As the first publicly available tool for automatically determining the aesthetic value of an image, this work is a significant first step in recognizing human emotional reaction to visual stimulus. In this paper, we discuss fundamentals behind this system, and some of the challenges faced while creating it. We report statistics generated from over 140,000 images uploaded by Web users. The system is demonstrated at http://acquine.alipr.com.",
"title": ""
},
{
"docid": "dc9abfd745d4267a5fcd66ce1d977acb",
"text": "Advances in information technology and its widespread growth in several areas of business, engineering, medical, and scientific studies are resulting in information/data explosion. Knowledge discovery and decision-making from such rapidly growing voluminous data are a challenging task in terms of data organization and processing, which is an emerging trend known as big data computing, a new paradigm that combines large-scale compute, new data-intensive techniques, and mathematical models to build data analytics. Big data computing demands a huge storage and computing for data curation and processing that could be delivered from on-premise or clouds infrastructures. This paper discusses the evolution of big data computing, differences between traditional data warehousing and big data, taxonomy of big data computing and underpinning technologies, integrated platform of big data and clouds known as big data clouds, layered architecture and components of big data cloud, and finally open-technical challenges and future directions. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "537d47c4bb23d9b60b164d747cb54cd9",
"text": "Comprehending computer programs is one of the core software engineering activities. Software comprehension is required when a programmer maintains, reuses, migrates, reengineers, or enhances software systems. Due to this, a large amount of research has been carried out, in an attempt to guide and support software engineers in this process. Several cognitive models of program comprehension have been suggested, which attempt to explain how a software engineer goes about the process of understanding code. However, research has suggested that there is no one ‘all encompassing’ cognitive model that can explain the behavior of ‘all’ programmers, and that it is more likely that programmers, depending on the particular problem, will swap between models (Letovsky, 1986). This paper identifies the key components of program comprehension models, and attempts to evaluate currently accepted models in this framework. It also highlights the commonalities, conflicts, and gaps between models, and presents possibilities for future research, based on its findings.",
"title": ""
},
{
"docid": "50cddaad75b7598bd9ce50163324e4cf",
"text": "In this paper, we propose a multi-object tracking and reconstruction approach through measurement-level fusion of LiDAR and camera. The proposed method, regardless of object class, estimates 3D motion and structure for all rigid obstacles. Using an intermediate surface representation, measurements from both sensors are processed within a joint framework. We combine optical flow, surface reconstruction, and point-to-surface terms in a tightly-coupled non-linear energy function, which is minimized using Iterative Reweighted Least Squares (IRLS). We demonstrate the performance of our model on different datasets (KITTI with Velodyne HDL-64E and our collected data with 4-layer ScaLa Ibeo), and show an improvement in velocity error and crispness over state-of-the-art trackers.",
"title": ""
},
{
"docid": "ecfa2ca992685dd0eda652f8aa021fb4",
"text": "We investigate the parallelization of reinforcement learning algorithms using MapReduce, a popular parallel computing framework. We present parallel versions of several dynamic programming algorithms, including policy evaluation, policy iteration, and off-policy updates. Furthermore, we design parallel reinforcement learning algorithms to deal with large scale problems using linear function approximation, including model-based projection, least squares policy iteration, temporal difference learning and recent gradient temporal difference learning algorithms. We give time and space complexity analysis of the proposed algorithms. This study demonstrates how parallelization opens new avenues for solving large scale reinforcement learning problems.",
"title": ""
},
{
"docid": "70dc7fe40f55e2b71b79d71d1119a36c",
"text": "In undergoing this life, many people always try to do and get the best. New knowledge, experience, lesson, and everything that can improve the life will be done. However, many people sometimes feel confused to get those things. Feeling the limited of experience and sources to be better is one of the lacks to own. However, there is a very simple thing that can be done. This is what your teacher always manoeuvres you to do this one. Yeah, reading is the answer. Reading a book as this digital image processing principles and applications and other references can enrich your life quality. How can it be?",
"title": ""
},
{
"docid": "2ffb20d66a0d5cb64442c2707b3155c6",
"text": "A botnet is a network of compromised hosts that is under the control of a single, malicious entity, often called the botmaster. We present a system that aims to detect bot-infected machines, independent of any prior information about the command and control channels or propagation vectors, and without requiring multiple infections for correlation. Our system relies on detection models that target the characteristic fact that every bot receives commands from the botmaster to which it responds in a specific way. These detection models are generated automatically from network traffic traces recorded from actual bot instances. We have implemented the proposed approach and demonstrate that it can extract effective detection models for a variety of different bot families. These models are precise in describing the activity of bots and raise very few false positives.",
"title": ""
},
{
"docid": "53afafd2fc1087989a975675ff4098d8",
"text": "The sixth generation of IEEE 802.11 wireless local area networks is under developing in the Task Group 802.11ax. One main physical layer (PHY) novel feature in the IEEE 802.11ax amendment is the specification of orthogonal frequency division multiplexing (OFDM) uplink multi-user multiple-input multiple-output (UL MU-MIMO) techniques. A challenge issue to implement UL MU-MIMO in OFDM PHY is the mitigation of the relative carrier frequency offset (CFO), which can cause intercarrier interference and rotation of the constellation of received symbols, and, consequently, degrading the system performance dramatically if it is not properly mitigated. In this paper, we show that a frequency domain CFO estimation and correction scheme implemented at both transmitter (Tx) and receiver (Rx) coupled with pre-compensation approach at the Tx can decrease the negative effects of the relative CFO.",
"title": ""
},
{
"docid": "31c08c533cd4d971ec0899762829350e",
"text": "Design of the 0.6–50 GHz ultra-wideband (UWB) double-ridged horn antenna (DRHA) is presented in this paper. This work focuses on several upgrades in the model to improve its performance: by adding absorber and perforations in coaxial to waveguide launcher, Luneburg dielectric lens at the aperture of the horn radiation pattern at upper end of the band and voltage standing wave ratio (VSWR) are improved. Radiation pattern and VSWR of new design are compared with antenna before modifications. The improved DRHA has VSWR less than 1.5 at the band from 1 GHz and the main lobe remains along the antenna axis at high frequencies of the band.",
"title": ""
},
{
"docid": "a50ec2ab9d5d313253c6656049d608b3",
"text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process de ned on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates ow in G by rst identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k power of a stochastic matrix, where k 2 IN . Flow contraction corresponds with a parametrized operator r, r 0, which maps the set of (column) stochastic matrices onto itself. The image rM is obtained by raising each entry in M to the r th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that ow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process in uence the granularity of the output. The algorithm is space and time e cient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by rst considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classi cation: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.",
"title": ""
},
{
"docid": "84e8986eff7cb95808de8df9ac286e37",
"text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.",
"title": ""
},
{
"docid": "e8df1006565902d1b2f5189a02944bca",
"text": "A research and development collaboration has been started with the goal of producing a prototype hadron calorimeter section for the purpose of proving the Particle Flow Algorithm concept for the International Linear Collider. Given the unique requirements of a Particle Flow Algorithm calorimeter, custom readout electronics must be developed to service these detectors. This paper introduces the DCal or Digital Calorimetry Chip, a custom integrated circuit developed in a 0.25um CMOS process specifically for this International Linear Collider project. The DCal is capable of handling 64 channels, producing a 1-bit Digital-to-Analog conversion of the input (i.e. hit/no hit). It maintains a 24-bit timestamp and is capable of operating either in an externally triggered mode or in a self-triggered mode. Moreover, it is capable of operating either with or without a pipeline delay. Finally, in order to permit the testing of different calorimeter technologies, its analog front end is capable of servicing Particle Flow Algorithm calorimeters made from either Resistive Plate Chambers or Gaseous Electron Multipliers.",
"title": ""
},
{
"docid": "a4d253d6194a9a010660aedb564be39a",
"text": "This work on GGS-NN is motivated by the program verification application, where we need to analyze dynamic data structures created in the heap. On a very high level, in this application a machine learning model analyzes the heap states (a graph with memory nodes and pointers as edges) during the execution of a program and comes up with logical formulas that describes the heap. These logical formulas are then fed into a theorem prover to prove the correctness of the program. Problem-specific node annotations are used to initialize .",
"title": ""
},
{
"docid": "44941e8f5b703bcacb51b6357cba7633",
"text": "Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and PASCAL VOC.",
"title": ""
},
{
"docid": "d8259846c9da256fb5f68537517fe55a",
"text": "Several versions of the Daum-Huang (DH) filter have been introduced recently to address the task of discrete-time nonlinear filtering. The filters propagate a particle set over time to track the system state, but, in contrast to conventional particle filters, there is no proposal density or importance sampling involved. Particles are smoothly migrated using a particle flow derived from a log-homotopy relating the prior and the posterior. Impressive performance has been demonstrated for a wide range of systems, but the implemented algorithms rely on an extended/unscented Kalman filter (EKF/UKF) that is executed in parallel. We illustrate through simulation that the performance of the exact flow DH filter can be compromised when the UKF and EKF fail. By introducing simple but important modifications to the exact flow DH filter implementation, the performance can be improved dramatically.",
"title": ""
},
{
"docid": "b93a476642276ddc0ff956e0434a9c36",
"text": "In this paper, we present a cartoon face generation method that stands on a component-based facial feature extraction approach. Given a frontal face image as an input, our proposed system has the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermit interpolation. This modification enhances the diversity of output and accuracy of the component matching required for cartoon generation. Second, to bring cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of various face photographs and corresponding hand-drawn cartoon faces are collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate effectiveness of our approach in generating life-like cartoon faces.",
"title": ""
},
{
"docid": "f11ff738aaf7a528302e6ec5ed99c43c",
"text": "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.",
"title": ""
}
] |
scidocsrr
|
359eb65bdd0ebf6d9cc212b42f53cbba
|
Virtual Network Function placement for resilient Service Chain provisioning
|
[
{
"docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2",
"text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"title": ""
},
{
"docid": "cbe9729b403a07386a76447c4339c5f3",
"text": "Network appliances perform different functions on network flows and constitute an important part of an operator's network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.",
"title": ""
}
] |
[
{
"docid": "2a09d97b350fa249fc6d4bbf641697e2",
"text": "The goal of this study was to investigate the effect of lead and the influence of chelating agents,meso 2, 3-dimercaptosuccinic acid (DMSA) and D-Penicillamine, on the biochemical contents of the brain tissues of Catla catla fingerlings by Fourier Transform Infrared Spectroscopy. FT-IR spectra revealed significant differences in absorbance intensities between control and lead-intoxicated brain tissues, reflecting a change in protein and lipid contents in the brain tissues due to lead toxicity. In addition, the administration of chelating agents, DMSA and D-Penicillamine, improved the protein and lipid contents in the brain tissues compared to lead-intoxicated tissues. Further, DMSA was more effective in reducing the body burden of lead. The protein secondary structure analysis revealed that lead intoxication causes an alteration in protein profile with a decrease in α-helix and an increase in β-sheet structure of Catla catla brain. In conclusion, the study demonstrated that FT-IR spectroscopy could differentiate the normal and lead-intoxicated brain tissues due to intrinsic differences in intensity.",
"title": ""
},
{
"docid": "0612db6f5e30d37122d37b26e2a2bb0a",
"text": "This paper presents a novel approach to procedural generation of urban maps for First Person Shooter (FPS) games. A multi-agent evolutionary system is employed to place streets, buildings and other items inside the Unity3D game engine, resulting in playable video game levels. A computational agent is trained using machine learning techniques to capture the intent of the game designer as part of the multi-agent system, and to enable a semi-automated aesthetic selection for the underlying genetic algorithm.",
"title": ""
},
{
"docid": "7844d2e53deba7bcfef03f06a6bced59",
"text": "In power line communications (PLCs), the multipath-induced dispersion and the impulsive noise are the two fundamental impediments in the way of high-integrity communications. The conventional orthogonal frequency-division multiplexing (OFDM) system is capable of mitigating the multipath effects in PLCs, but it fails to suppress the impulsive noise effects. Therefore, in order to mitigate both the multipath effects and the impulsive effects in PLCs, in this paper, a compressed impairment sensing (CIS)-assisted and interleaved-double-FFT (IDFFT)-aided system is proposed for indoor broadband PLC. Similar to classic OFDM, data symbols are transmitted in the time-domain, while the equalization process is employed in the frequency domain in order to achieve the maximum attainable multipath diversity gain. In addition, a specifically designed interleaver is employed in the frequency domain in order to mitigate the impulsive noise effects, which relies on the principles of compressed sensing (CS). Specifically, by taking advantage of the interleaving process, the impairment impulsive samples can be estimated by exploiting the principle of CS and then cancelled. In order to improve the estimation performance of CS, we propose a beneficial pilot design complemented by a pilot insertion scheme. Finally, a CIS-assisted detector is proposed for the IDFFT system advocated. Our simulation results show that the proposed CIS-assisted IDFFT system is capable of achieving a significantly improved performance compared with the conventional OFDM. Furthermore, the tradeoffs to be struck in the design of the CIS-assisted IDFFT system are also studied.",
"title": ""
},
{
"docid": "3f7c6490ccb6d95bd22644faef7f452f",
"text": "A blockchain is a distributed, decentralised database of records of digital events (transactions) that took place and were shared among the participating parties. Each transaction in the public ledger is verified by consensus of a majority of the participants in the system. Bitcoin may not be that important in the future, but blockchain technology's role in Financial and Non-financial world can't be undermined. In this paper, we provide a holistic view of how Blockchain technology works, its strength and weaknesses, and its role to change the way the business happens today and tomorrow.",
"title": ""
},
{
"docid": "5ebdda11fbba5d0633a86f2f52c7a242",
"text": "What is index modulation (IM)? This is an interesting question that we have started to hear more and more frequently over the past few years. The aim of this paper is to answer this question in a comprehensive manner by covering not only the basic principles and emerging variants of IM, but also reviewing the most recent as well as promising advances in this field toward the application scenarios foreseen in next-generation wireless networks. More specifically, we investigate three forms of IM: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors (parasitic elements) mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively. We present the up-to-date advances in these three promising frontiers and discuss possible future research directions for IM-based schemes toward low-complexity, spectrum- and energy-efficient next-generation wireless networks.",
"title": ""
},
{
"docid": "76a9799863bd944fb969539e8817cccd",
"text": "This paper investigates the application of non-orthogonal multiple access (NOMA) in millimeter wave (mm-Wave) communications by exploiting beamforming, user scheduling, and power allocation. Random beamforming is invoked for reducing the feedback overhead of the considered system. A non-convex optimization problem for maximizing the sum rate is formulated, which is proved to be NP-hard. The branch and bound approach is invoked to obtain the $\\epsilon$ -optimal power allocation policy, which is proved to converge to a global optimal solution. To elaborate further, a low-complexity suboptimal approach is developed for striking a good computational complexity-optimality tradeoff, where the matching theory and successive convex approximation techniques are invoked for tackling the user scheduling and power allocation problems, respectively. Simulation results reveal that: 1) the proposed low complexity solution achieves a near-optimal performance and 2) the proposed mm-Wave NOMA system is capable of outperforming conventional mm-Wave orthogonal multiple access systems in terms of sum rate and the number of served users.",
"title": ""
},
{
"docid": "8b12c633e6c9fb177459bb9609afeb1a",
"text": "Chronic osteomyelitis of the jaw is a rare entity in the healthy population of the developed world. It is normally associated with radiation and bisphosphonates ingestion and occurs in immunosuppressed individuals such as alcoholics or diabetics. Two cases are reported of chronic osteomyelitis in healthy individuals with no adverse medical conditions. The management of these cases are described.",
"title": ""
},
{
"docid": "4dbbcaf264cc9beda8644fa926932d2e",
"text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.",
"title": ""
},
{
"docid": "385922d94a35c37776ba816645e964c7",
"text": "In this paper, we develop a unified vision system for small-scale aircraft, known broadly as Micro Air Vehicl es (MAVs), that not only addresses basic flight stability and control, but also enables more intelligent missions, such as ground o bject recognition and moving-object tracking. The proposed syst em defines a framework for real-time image feature extraction, horizon detection and sky/ground segmentation, and contex tual ground object detection. Multiscale Linear Discriminant Analysis (MLDA) defines the first stage of the vision system, and generates a multiscale description of images, incorporati ng both color and texture through a dynamic representation of image details. This representation is ideally suited for horizondetection and sky/ground segmentation of images, which we accomplish through the probabilistic representation of tree-structured belief networks (TSBN). Specifically, we propose incomplete meta TSBNs (IMTSBN) to accommodate the properties of our MLDA representation and to enhance the descriptive component of these statistical models. In the last stage of the vision processi ng, we seamlessly extend this probabilistic framework to perfo rm computationally efficient detection and recognition of obj ects in the segmented ground region, through the idea of visual contexts. By exploiting the concept of visual contexts, we c an quickly focus on candidate regions, where objects of intere st may be found, and then compute additional features through the Complex Wavelet Transform (CWT) and HSI color space for those regions, only. These additional features, while n ot necessary for global regions, are useful in accurate detect ion and recognition of smaller objects. Throughout, our approach is heavily influenced by real-time constraints and robustne ss to transient video noise.",
"title": ""
},
{
"docid": "4520316ecef3051305e547d50fadbb7a",
"text": "The increasing complexity and size of digital designs, in conjunction with the lack of a potent verification methodology that can effectively cope with this trend, continue to inspire engineers and academics in seeking ways to further automate design verification. In an effort to increase performance and to decrease engineering effort, research has turned to artificial intelligence (AI) techniques for effective solutions. The generation of tests for simulation-based verification can be guided by machine-learning techniques. In fact, recent advances demonstrate that embedding machine-learning (ML) techniques into a coverage-directed test generation (CDG) framework can effectively automate the test generation process, making it more effective and less error-prone. This article reviews some of the most promising approaches in this field, aiming to evaluate the approaches and to further stimulate more directed research in this area.",
"title": ""
},
{
"docid": "9afc8df23892162a220b1804fe415a36",
"text": "Social entrepreneurship is gradually becoming a crucial element in the worldwide discussion on volunteerism and civic commitment. It interleaves the passion of a common cause with industrial ethics and is notable and different from the present other types of entrepreneurship models due to its quest for mission associated influence. The previous few years have noticed a striking and surprising progress in the field of social entrepreneurship and has amplified attention ranging throughout all the diverse sectors. The critical difference between social and traditional entrepreneurship can be seen in the founding mission of the venture and the market impressions. Social entrepreneurs emphasize on ways to relieve or eradicate societal pressures and produce progressive externalities or public properties. This study focuses mainly on the meaning of social entrepreneurship to different genres and where does it stand in respect to other forms of entrepreneurship in today’s times.",
"title": ""
},
{
"docid": "b51a1df32ce34ae3f1109a9053b4bc1f",
"text": "Nowadays many automobile manufacturers are switching to Electric Power Steering (EPS) for its advantages on performance and cost. In this paper, a mathematical model of a column type EPS system is established, and its state-space expression is constructed. Then three different control methods are implemented and performance, robustness and disturbance rejection properties of the EPS control systems are investigated. The controllers are tested via simulation and results show a modified Linear Quadratic Gaussian (LQG) controller can track the characteristic curve well and effectively attenuate external disturbances.",
"title": ""
},
{
"docid": "f513a112b7fe4ffa2599a0f144b2e112",
"text": "A defined software process is needed to provide organizations with a consistent framework for performing their work and improving the way they do it. An overall framework for modeling simplifies the task of producing process models, permits them to be tailored to individual needs, and facilitates process evolution. This paper outlines the principles of entity process models and suggests ways in which they can help to address some of the problems with more conventional approaches to modeling software processes.",
"title": ""
},
{
"docid": "fce6ac500501d0096aac3513639c2627",
"text": "Recent technological advances made necessary the use of the robots in various types of applications. Currently, the traditional robot-like scenarios dedicated to industrial applications with repetitive tasks, were replaced by applications which require human interaction. The main field of such applications concerns the rehabilitation and aid of elderly persons. In this study, we present a state-of-the-art of the main research advances in lower limbs actuated orthosis/wearable robots in the literature. This will include a review on researches covering full limb exoskeletons, lower limb exoskeletons and particularly the knee joint orthosis. Rehabilitation using treadmill based device and use of Functional Electrical Stimulation (FES) are also investigated. We discuss finally the challenges not yet solved such as issues related to portability, energy consumption, social constraints and high costs of theses devices.",
"title": ""
},
{
"docid": "e79e94549bca30e3a4483f7fb9992932",
"text": "The use of semantic technologies and Semantic Web ontologies in particular have enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed to be useful in simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices. This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments. The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make tradeoffs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices. This work has been supported by Jönköping University. Department of Computer and Information Science Linköping University SE-581 83 Linköping, Sweden",
"title": ""
},
{
"docid": "bd882f762be5a9cb67191a7092fc88e3",
"text": "This study tested the criterion validity of the inventory, Mental Toughness 48, by assessing the correlation between mental toughness and physical endurance for 41 male undergraduate sports students. A significant correlation of .34 was found between scores for overall mental toughness and the time a relative weight could be held suspended. Results support the criterion-related validity of the Mental Toughness 48.",
"title": ""
},
{
"docid": "fa604c528539ac5cccdbd341a9aebbf7",
"text": "BACKGROUND\nAn understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts.\n\n\nMETHODS\nThe uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles.\n\n\nRESULTS/CONCLUSIONS\nP-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.",
"title": ""
},
{
"docid": "0d6165524d748494a5c4d0d2f0675c42",
"text": "In Saudi Arabia, breast cancer is diagnosed at advanced stage compared to Western countries. Nevertheless, the perceived barriers to delayed presentation have been poorly examined. Additionally, available breast cancer awareness data are lacking validated measurement tool. The aim of this study is to evaluate the level of breast cancer awareness and perceived barriers to seeking medical care among Saudi women, using internationally validated tool. A cross-sectional study was conducted among adult Saudi women attending a primary care center in Riyadh during February 2014. Data were collected using self-administered questionnaire based on the Breast Cancer Awareness Measure (CAM-breast). Out of 290 women included, 30 % recognized five or more (out of nine) non-lump symptoms of breast cancer, 31 % correctly identified the risky age of breast cancer (set as 50 or 70 years), 28 % reported frequent (at least once a month) breast checking. Considering the three items of the CAM-breast, only 5 % were completely aware while 41 % were completely unaware of breast cancer. The majority (94 %) reported one or more barriers. The most frequently reported barrier was the difficulty of getting a doctor appointment (39 %) followed by worries about the possibility of being diagnosed with breast cancer (31 %) and being too busy to seek medical help (26 %). We are reporting a major gap in breast cancer awareness and several logistic and emotional barriers to seeking medical care among adult Saudi women. The current findings emphasized the critical need for an effective national breast cancer education program to increase public awareness and early diagnosis.",
"title": ""
},
{
"docid": "660f957b70e53819724e504ed3de0776",
"text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dd9f40db5e52817b25849282ffdafe26",
"text": "Pattern classification methods based on learning-from-examples have been widely applied to character recognition from the 1990s and have brought forth significant improvements of recognition accuracies. This kind of methods include statistical methods, artificial neural networks, support vector machines, multiple classifier combination, etc. In this chapter, we briefly review the learning-based classification methods that have been successfully applied to character recognition, with a special section devoted to the classification of large category set. We then discuss the characteristics of these methods, and discuss the remaining problems in character recognition that can be potentially solved by machine learning methods.",
"title": ""
}
] |
scidocsrr
|
3dde2e750caa8624282518369f4f6a1f
|
Evaluating Display Fidelity and Interaction Fidelity in a Virtual Reality Game
|
[
{
"docid": "c9b7832cd306fc022e4a376f10ee8fc8",
"text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.",
"title": ""
},
{
"docid": "467b4537bdc6a466909d819e67d0ebc1",
"text": "We have created an immersive application for statistical graphics and have investigated what benefits it offers over more traditional data analysis tools. This paper presents a description of both the traditional data analysis tools and our virtual environment, and results of an experiment designed to determine if an immersive environment based on the XGobi desktop system provides advantages over XGobi for analysis of high-dimensional statistical data. The experiment included two aspects of each environment: three structure detection (visualization) tasks and one ease of interaction task. The subjects were given these tasks in both the C2 virtual environment and a workstation running XGobi. The experiment results showed an improvement in participants’ ability to perform structure detection tasks in the C2 to their performance in the desktop environment. However, participants were more comfortable with the interaction tools in the desktop",
"title": ""
}
] |
[
{
"docid": "76049ed267e9327412d709014e8e9ed4",
"text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.",
"title": ""
},
{
"docid": "d65a047b3f381ca5039d75fd6330b514",
"text": "This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced in order to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results have demonstrated a successful mapping of the largest area ever mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost robot problem by localizing a robot to a previously built map without any prior initialization.",
"title": ""
},
{
"docid": "39007b91989c42880ff96e7c5bdcf519",
"text": "Feature selection has aroused considerable research interests during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed as the joint embedding learning and sparse regression (JELSR), in which the embedding learning and sparse regression are jointly performed. Specifically, the proposed JELSR joins embedding learning with sparse regression to perform feature selection. To show the effectiveness of the proposed framework, we also provide a method using the weight via local linear approximation and adding the ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we also conduct some insightful discussion on the proposed feature selection approach, including the convergence analysis, computational complexity, and parameter determination. In all, the proposed framework not only provides a new perspective to view traditional methods but also evokes some other deep researches for feature selection. Compared with traditional unsupervised feature selection methods, our approach could integrate the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice data and biological data, have validated the effectiveness of our proposed algorithm.",
"title": ""
},
{
"docid": "038db4d053ff795f35ae9731f6e27c9a",
"text": "Intravascular injection leading to skin necrosis or blindness is the most serious complication of facial injection with fillers. It may be underreported and the outcome of cases are unclear. Early recognitions of the symptoms and signs may facilitate prompt treatment if it does occur avoiding the potential sequelae of intravascular injection. To determine the frequency of intravascular injection among experienced injectors, the outcomes of these intravascular events, and the management strategies. An internet-based survey was sent to 127 injectors worldwide who act as trainers for dermal fillers globally. Of the 52 respondents from 16 countries, 71 % had ≥11 years of injection experience, and 62 % reported one or more intravascular injections. The most frequent initial signs were minor livedo (63 % of cases), pallor (41 %), and symptoms of pain (37 %). Mildness/absence of pain was a feature of 47 % of events. Hyaluronidase (5 to >500 U) was used immediately on diagnosis to treat 86 % of cases. The most commonly affected areas were the nasolabial fold and nose (39 % each). Of all the cases, only 7 % suffered moderate scarring requiring surface treatments. Uneventful healing was the usual outcome, with 86 % being resolved within 14 days. Intravascular injection with fillers can occur even at the hands of experienced injectors. It may not be always associated with immediate pain or other classical symptoms and signs. Prompt effective management leads to favorable outcomes, and will prevent catastrophic consequences such as skin necrosis. Intravascular injection leading to blindness may not be salvageable and needs further study. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "7a9b9633243d84978d9e975744642e18",
"text": "Our aim is to provide a pixel-level object instance labeling of a monocular image. We build on recent work [27] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [27] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [13]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [27].",
"title": ""
},
{
"docid": "505a9b6139e8cbf759652dc81f989de9",
"text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings leads to illegal queries to databases, are one of the topmost threats to web applications. A Number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern",
"title": ""
},
{
"docid": "5931169b6433d77496dfc638988399eb",
"text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.",
"title": ""
},
{
"docid": "940f460457b117c156b6e39e9586a0b9",
"text": "The flipped classroom is an innovative pedagogical approach that focuses on learner-centered instruction. The purposes of this report were to illustrate how to implement the flipped classroom and to describe students' perceptions of this approach within 2 undergraduate nutrition courses. The template provided enables faculty to design before, during, and after class activities and assessments based on objectives using all levels of Bloom's taxonomy. The majority of the 142 students completing the evaluation preferred the flipped method compared with traditional pedagogical strategies. The process described in the report was successful for both faculty and students.",
"title": ""
},
{
"docid": "85b3f55fffff67b9d3a0305b258dcd8e",
"text": "Sézary syndrome (SS) has a poor prognosis and few guidelines for optimizing therapy. The US Cutaneous Lymphoma Consortium, to improve clinical care of patients with SS and encourage controlled clinical trials of promising treatments, undertook a review of the published literature on therapeutic options for SS. An overview of the immunopathogenesis and standardized review of potential current treatment options for SS including metabolism, mechanism of action, overall efficacy in mycosis fungoides and SS, and common or concerning adverse effects is first discussed. The specific efficacy of each treatment for SS, both as monotherapy and combination therapy, is then reported using standardized criteria for both SS and response to therapy with the type of study defined by a modification of the US Preventive Services guidelines for evidence-based medicine. Finally, guidelines for the treatment of SS and suggestions for adjuvant treatment are noted.",
"title": ""
},
{
"docid": "d6ee313e66b33bfebc87bb9174aed00f",
"text": "The majority of arm amputees live in developing countries and cannot afford prostheses beyond cosmetic hands with simple grippers. Customized hand prostheses with high performance are too expensive for the average arm amputee. Currently, commercially available hand prostheses use costly and heavy DC motors for actuation. This paper presents an inexpensive hand prosthesis, which uses a 3D printable design to reduce the cost of customizable parts and novel electro-thermal actuator based on nylon 6-6 polymer muscles. The prosthetic hand was tested and found to be able to grasp a variety of shapes 100% of the time tested (sphere, cylinder, cube, and card) and other commonly used tools. Grip times for each object were repeatable with small standard deviations. With a low estimated material cost of $170 for actuation, this prosthesis could have a potential to be used for low-cost and high-performance system.",
"title": ""
},
{
"docid": "aa749c00010e5391710738cc235c1c35",
"text": "Traditional summarization initiatives have been focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains and working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs in the age of user-generated content in di↵erent formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.",
"title": ""
},
{
"docid": "4a1559bd8a401d3273c34ab20931611d",
"text": "Spiking Neural Networks (SNNs) are widely regarded as the third generation of artificial neural networks, and are expected to drive new classes of recognition, data analytics and computer vision applications. However, large-scale SNNs (e.g., of the scale of the human visual cortex) are highly compute and data intensive, requiring new approaches to improve their efficiency. Complementary to prior efforts that focus on parallel software and the design of specialized hardware, we propose AxSNN, the first effort to apply approximate computing to improve the computational efficiency of evaluating SNNs. In SNNs, the inputs and outputs of neurons are encoded as a time series of spikes. A spike at a neuron's output triggers updates to the potentials (internal states) of neurons to which it is connected. AxSNN determines spike-triggered neuron updates that can be skipped with little or no impact on output quality and selectively skips them to improve both compute and memory energy. Neurons that can be approximated are identified by utilizing various static and dynamic parameters such as the average spiking rates and current potentials of neurons, and the weights of synaptic connections. Such a neuron is placed into one of many approximation modes, wherein the neuron is sensitive only to a subset of its inputs and sends spikes only to a subset of its outputs. A controller periodically updates the approximation modes of neurons in the network to achieve energy savings with minimal loss in quality. We apply AxSNN to both hardware and software implementations of SNNs. For hardware evaluation, we designed SNNAP, a Spiking Neural Network Approximate Processor that embodies the proposed approximation strategy, and synthesized it to 45nm technology. The software implementation of AxSNN was evaluated on a 2.7 GHz Intel Xeon server with 128 GB memory. Across a suite of 6 image recognition benchmarks, AxSNN achieves 1.4–5.5x reduction in scalar operations for network evaluation, which translates to 1.2–3.62x and 1.26–3.9x improvement in hardware and software energies respectively, for no loss in application quality. Progressively higher energy savings are achieved with modest reductions in output quality.",
"title": ""
},
{
"docid": "d6602271d7024f7d894b14da52299ccc",
"text": "BACKGROUND\nMost articles on face composite tissue allotransplantation have considered ethical and immunologic aspects. Few have dealt with the technical aspects of graft procurement. The authors report the technical difficulties involved in procuring a lower face graft for allotransplantation.\n\n\nMETHODS\nAfter a preclinical study of 20 fresh cadavers, the authors carried out an allotransplantation of the lower two-thirds of the face on a patient in January of 2007. The graft included all the perioral muscles, the facial nerves (VII, V2, and V3) and, for the first time, the parotid glands.\n\n\nRESULTS\nThe preclinical study and clinical results confirm that complete revascularization of a graft consisting of the lower two-thirds of the face is possible from a single facial pedicle. All dissections were completed within 3 hours. Graft procurement for the clinical study took 4 hours. The authors harvested the soft tissues of the face en bloc to save time and to prevent tissue injury. They restored the donor's face within approximately 4 hours, using a resin mask colored to resemble the donor's skin tone. All nerves were easily reattached. Voluntary activity was detected on clinical examination 5 months postoperatively, and electromyography confirmed nerve regrowth, with activity predominantly on the left side. The patient requested local anesthesia for biopsies performed in month 4.\n\n\nCONCLUSIONS\nPartial facial composite tissue allotransplantation of the lower two-thirds of the face is technically feasible, with a good cosmetic and functional outcome in selected clinical cases. Flaps of this type establish vascular and neurologic connections in a reliable manner and can be procured with a rapid, standardized procedure.",
"title": ""
},
{
"docid": "8385f72bd060eee8c59178bc0b74d1e3",
"text": "Gesture recognition plays an important role in human-computer interaction. However, most existing methods are complex and time-consuming, which limit the use of gesture recognition in real-time environments. In this paper, we propose a static gesture recognition system that combines depth information and skeleton data to classify gestures. Through feature fusion, hand digit gestures of 0-9 can be recognized accurately and efficiently. According to the experimental results, the proposed gesture recognition system is effective and robust, which is invariant to complex background, illumination changes, reversal, structural distortion, rotation etc. We have tested the system both online and offline which proved that our system is satisfactory to real-time requirements, and therefore it can be applied to gesture recognition in real-world human-computer interaction systems.",
"title": ""
},
{
"docid": "af49fef0867a951366cfb21288eeb3ed",
"text": "As a discriminative method of one-shot learning, Siamese deep network allows recognizing an object from a single exemplar with the same class label. However, it does not take the advantage of the underlying structure and relationship among a multitude of instances since it only relies on pairs of instances for training. In this paper, we propose a quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation. We design four shared networks that receive multi-tuple of instances as inputs and are connected by a novel loss function consisting of pair-loss and tripletloss. According to the similarity metric, we select the most similar and the most dissimilar instances as the positive and negative inputs of triplet loss from each multi-tuple. We show that this scheme improves the training performance and convergence speed. Furthermore, we introduce a new weighted pair loss for an additional acceleration of the convergence. We demonstrate promising results for model-free tracking-by-detection of objects from a single initial exemplar in the Visual Object Tracking benchmark.",
"title": ""
},
{
"docid": "2dbffa465a1d0b9c7e2ae1044dd0cdcb",
"text": "Total variation denoising is a nonlinear filtering method well suited for the estimation of piecewise-constant signals observed in additive white Gaussian noise. The method is defined by the minimization of a particular nondifferentiable convex cost function. This letter describes a generalization of this cost function that can yield more accurate estimation of piecewise constant signals. The new cost function involves a nonconvex penalty (regularizer) designed to maintain the convexity of the cost function. The new penalty is based on the Moreau envelope. The proposed total variation denoising method can be implemented using forward–backward splitting.",
"title": ""
},
{
"docid": "9ff6d7a36646b2f9170bd46d14e25093",
"text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.",
"title": ""
},
{
"docid": "bde769df506e361bf374bd494fc5db6f",
"text": "Molded interconnect devices (MID) allow the realization of electronic circuits on injection molded thermoplastics. MID antennas can be manufactured as part of device casings without the need for additional printed circuit boards or attachment of antennas printed on foil. Baluns, matching networks, amplifiers and connectors can be placed on the polymer in the vicinity of the antenna. A MID dipole antenna for 1 GHz is designed, manufactured and measured. A prototype of the antenna is built with laser direct structuring (LDS) on a Xantar LDS 3720 substrate. Measured return loss and calibrated gain patterns are compared to simulation results.",
"title": ""
},
{
"docid": "7838934c12f00f987f6999460fc38ca1",
"text": "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.",
"title": ""
},
{
"docid": "d050730d7a5bd591b805f1b9729b0f2d",
"text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.",
"title": ""
}
] |
scidocsrr
|
6008f27f55e7664c2ef40f703ad6b0a1
|
Efficient Object Identification with Passive RFID Tags
|
[
{
"docid": "1c7251c55cf0daea9891c8a522bbd3ec",
"text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.",
"title": ""
}
] |
[
{
"docid": "b6d5849d7950438716e31880860f835c",
"text": "The promotion of reflective capacity within the teaching of clinical skills and professionalism is posited as fostering the development of competent health practitioners. An innovative approach combines structured reflective writing by medical students and individualized faculty feedback to those students to augment instruction on reflective practice. A course for preclinical students at the Warren Alpert Medical School of Brown University, entitled \"Doctoring,\" combined reflective writing assignments (field notes) with instruction in clinical skills and professionalism and early clinical exposure in a small-group format. Students generated multiple e-mail field notes in response to structured questions on course topics. Individualized feedback from a physician-behavioral scientist dyad supported the students' reflective process by fostering critical-thinking skills, highlighting appreciation of the affective domain, and providing concrete recommendations. The development and implementation of this innovation are presented, as is an analysis of the written evaluative comments of students taking the Doctoring course. Theoretical and clinical rationales for features of the innovation and supporting evidence of their effectiveness are presented. Qualitative analyses of students' evaluations yielded four themes of beneficial contributions to their learning experience: promoting deeper and more purposeful reflection, the value of (interdisciplinary) feedback, the enhancement of group process, and personal and professional development. Evaluation of the innovation was the fifth theme; some limitations are described, and suggestions for improvement are provided. Issues of the quality of the educational paradigm, generalizability, and sustainability are addressed.",
"title": ""
},
{
"docid": "4c596974ba7dde7525e028bd7f168e61",
"text": "In ranking with the pairwise classification approach, the loss associated to a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval where we prefer ranked lists with high precision on the top of the list. We propose to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses. Convex OWA aggregation operators range from the max to the mean depending on their weights, and can be used to focus on the top ranked elements as they give more weight to the largest losses. When aggregating hinge losses, the optimization problem is similar to the SVM for interdependent output spaces. Moreover, we show that OWA aggregates of margin-based classification losses have good generalization properties. Experiments on the Letor 3.0 benchmark dataset for information retrieval validate our approach.",
"title": ""
},
{
"docid": "78d00cb1af094c91cc7877ba051f925e",
"text": "Neuropathic pain refers to pain that originates from pathology of the nervous system. Diabetes, infection (herpes zoster), nerve compression, nerve trauma, \"channelopathies,\" and autoimmune disease are examples of diseases that may cause neuropathic pain. The development of both animal models and newer pharmacological strategies has led to an explosion of interest in the underlying mechanisms. Neuropathic pain reflects both peripheral and central sensitization mechanisms. Abnormal signals arise not only from injured axons but also from the intact nociceptors that share the innervation territory of the injured nerve. This review focuses on how both human studies and animal models are helping to elucidate the mechanisms underlying these surprisingly common disorders. The rapid gain in knowledge about abnormal signaling promises breakthroughs in the treatment of these often debilitating disorders.",
"title": ""
},
{
"docid": "c2cd6967d28547139c4cfdb2468c6b2d",
"text": "Palletizing tasks are necessary to promote efficiency of storage and shipping. These tasks, however, involve some of the most monotonous and physically demanding labor in the factory. Thus, many types of robot palletizing systems have been developed, although many robot motion commands still depend on the teach pendent. That is, the operator inputs the motion command lines one by one. This is very troublesome and most importantly, the user must know how to type the code. We propose a new GUI for the palletizing system that can be used more conveniently. To do this, we used the PLP \"Fast Algorithm\" and 3-D auto-patterning visualization. The 3-D patterning process includes the following. First, an operator can identify the results of the task and edit them. Second, the operator passes the position values of objects to a robot simulator. Using those positions, a palletizing operation can be simulated. We used the wide used industrial model and analyzed the kinematics and dynamics to create a robot simulator. In this paper we propose a 3-D patterning algorithm, 3-D robot-palletizing simulator, and modified trajectory generation algorithm, \"Overlapped method\" to reduce the computing load.",
"title": ""
},
{
"docid": "1788963aacfe29886cf7ac5e34a68edd",
"text": "Collaborative filtering techniques aim at recommending products to users based on their historical feedback. And many algorithms focus on personalized ranking problem with implicit feedback due to the \"one-class\" nature of many real-world datasets in a variety of services. Most of the existing personalized ranking methods are confined to one domain of data source and the question of how to model users' preferences information across distinct domains is usually be ignored. There are some transfer learning approaches that try to transfer numerical ratings, auxiliary social relations and other information across different domains but they do not address how users' preferences information varies from one domain to another accordingly. And they mainly exploit rating prediction problem rather than personalized ranking problem. In this paper, we propose an algorithm called CroRank to address the question, \"How to bridge users' preferences information across different domains to promote better personalized ranking performance?\". There are two main steps in CroRank, we first present an algorithm called multiple binomial matrix factorization (MBMF) to bridge the gap between items from distinct sources and then we introduce transfer Bayesian personalized ranking (TBPR) to recommend items for each user in the target domain. In CroRank, users' inclinations can transfer from the auxiliary domain to the target domain to provide better personalized ranking results. We compare CroRank to the state-of-the-art non-transfer models to demonstrate the improvements in flexibility and effectiveness.",
"title": ""
},
{
"docid": "0472c8c606024aaf2700dee3ad020c07",
"text": "Any discussion on exchange rate movements and forecasting should include explanatory variables from both the current account and the capital account of the balance of payments. In this paper, we include such factors to forecast the value of the Indian rupee vis a vis the US Dollar. Further, factors reflecting political instability and lack of mechanism for enforcement of contracts that can affect both direct foreign investment and also portfolio investment, have been incorporated. The explanatory variables chosen are the 3 month Rupee Dollar futures exchange rate (FX4), NIFTY returns (NIFTYR), Dow Jones Industrial Average returns (DJIAR), Hang Seng returns (HSR), DAX returns (DR), crude oil price (COP), CBOE VIX (CV) and India VIX (IV). To forecast the exchange rate, we have used two different classes of frameworks namely, Artificial Neural Network (ANN) based models and Time Series Econometric models. Multilayer Feed Forward Neural Network (MLFFNN) and Nonlinear Autoregressive models with Exogenous Input (NARX) Neural Network are the approaches that we have used as ANN models. Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential Generalized Autoregressive Conditional Heteroskedastic (EGARCH) techniques are the ones that we have used as Time Series Econometric methods. Within our framework, our results indicate that, although the two different approaches are quite efficient in forecasting the exchange rate, MLFNN and NARX are the most efficient. Journal of Insurance and Financial Management ARTICLE INFO JEL Classification: C22 C45 C63 F31 F47",
"title": ""
},
{
"docid": "683b7d54f9cdc4ddd5485729889b804e",
"text": "The sub-grid-scale parameterization of clouds is one of the weakest aspects of weather and climate modeling today, and the explicit simulation of clouds will be one of the next major achievements in numerical weather prediction. Research cloud models have been in development over the last 45 years and they continue to be an important tool for investigating clouds, cloud-systems, and other small-scale atmospheric dynamics. The latest generation are now being used for weather prediction. The Advanced Research WRF (ARW) model, representative of this generation and of a class of models using explicit time-splitting integration techniques to efficiently integrate the Euler equations, is described in this paper. It is the first fully compressible conservative-form nonhydrostatic atmospheric model suitable for both research and weather prediction applications. Results are presented demonstrating its ability to resolve strongly nonlinear small-scale phenomena, clouds, and cloud systems. Kinetic energy spectra and other statistics show that the model is simulating small scales in numerical weather prediction applications, while necessarily removing energy at the gridscale but minimizing artificial dissipation at the resolved scales. Filtering requirements for atmospheric models and filters used in the ARW model are discussed. ! 2007 Elsevier Inc. All rights reserved. MCS: 65M06; 65M12; 76E06; 76R10; 76U05; 86A10",
"title": ""
},
{
"docid": "400c34e4d38d3b9e53469ae6c2b5bd85",
"text": "The paper gives futuristic challenges disscussed in the cvpaper.challenge. In 2015 and 2016, we thoroughly study 1,600+ papers in several conferences/journals such as CVPR/ICCV/ECCV/NIPS/PAMI/IJCV.",
"title": ""
},
{
"docid": "4fa25fd7088d9b624be75239d02cfc4b",
"text": "Intelligence is defined as that which produces successful behavior. Intelligence is assumed to result from natural selection. A model is proposed that integrates knowledge from research in both natural and artificial systems. The model consists of a hierarchical system architecture wherein: 1) control bandwidth decreases about an order of magnitude at each higher level, 2) perceptual resolution of spatial and temporal patterns contracts about an order-of-magnitude at each higher level, 3) goals expand in scope and planning horizons expand in space and time about an order-of-magnitude at each higher level, and 4) models of the world and memories of events expand their range in space and time by about an order-of-magnitude at each higher level. At each level, functional modules perform behavior generation (task decomposition planning and execution), world modeling, sensory processing, and value judgment. Sensory feedback control loops are closed at every level.",
"title": ""
},
{
"docid": "e8abf8e4cd087cf3b77ae6a024e95971",
"text": "Cloud computing has been emerged in the last decade to enable utility-based computing resource management without purchasing hardware equipment. Cloud providers run multiple data centers in various locations to manage and provision the Cloud resources to their customers. More recently, the introduction of Software-Defined Networking (SDN) and Network Function Virtualization (NFV) opens more opportunities in Clouds which enables dynamic and autonomic configuration and provisioning of the resources in Cloud data centers. This paper proposes architectural framework and principles for Programmable Network Clouds hosting SDNs and NFVs for geographically distributed MultiCloud computing environments. Cost and SLA-aware resource provisioning and scheduling that minimizes the operating cost without violating the negotiated SLAs are investigated and discussed in regards of techniques for autonomic and timely VNF composition, deployment and management across multiple Clouds. We also discuss open challenges and directions for creating auto-scaling solutions for performance optimization of VNFs using analytics and monitoring techniques, algorithms for SDN controller for scalable traffic and deployment management. The simulation platform and the proof-of-concept prototype are presented with initial evaluation results.",
"title": ""
},
{
"docid": "d36b557f8917f068f25defcf4c48f0fa",
"text": "This paper focuses on modeling ride requests and their variations over location and time, based on analyzing extensive real-world data from a ride-sharing service. We introduce a graph model that captures the spatial and temporal variability of ride requests and the potentials for ride pooling. We discover these ride request graphs exhibit a well known property called “densification power law” often found in real graphs modelling human behaviors. We show the pattern of ride requests and the potential of ride pooling for a city can be characterized by the densification factor of the ride request graphs. Previous works have shown that it is possible to automatically generate synthetic versions of these graphs that exhibit a given densification factor. We present an algorithm for automatic generation of synthetic ride request graphs that match quite well the densification factor of ride request graphs from actual ride request data.",
"title": ""
},
{
"docid": "9be069160bed1428ec4012492b451d70",
"text": "This paper presents the concept of vehicular cloud service network using IoT and Cloud together. Both these technologies (IoT and Cloud Computing) are able to solve real time problems faced by population. The tremendous growth of Internet of Thing(IoT) and Cloud Computing together have provided great solution to the increasing transportation issues. In this paper we propose, creating vehicular cloud service network using MQTT protocol. The main objective of this paper is to design a cloud vehicular service for parking purpose based on the basic communication principle of MQTT protocol. We propose an intelligent parking space services to make IoT more suitable for both small-sized and large-scale information retrieval by cloud. This paper briefs the most emerging paradigm of IoT in parking cloud services.",
"title": ""
},
{
"docid": "7e8feb5f8d816a0c0626f6fdc4db7c04",
"text": "In this paper, we analyze if cascade usage of the context encoder with increasing input can improve the results of the inpainting. For this purpose, we train context encoder for 64x64 pixels images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, both in training and evaluation phase. As the result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.",
"title": ""
},
{
"docid": "5167ba364ee2f3f5865654126f75771b",
"text": "Many commercial products and academic research activities are embracing behavior analysis as a technique for improving detection of attacks of many sorts-from retweet boosting, hashtag hijacking to link advertising. Traditional approaches focus on detecting dense blocks in the adjacency matrix of graph data, and recently, the tensors of multimodal data. No method gives a principled way to score the suspiciousness of dense blocks with different numbers of modes and rank them to draw human attention accordingly. In this paper, we first give a list of axioms that any metric of suspiciousness should satisfy; we propose an intuitive, principled metric that satisfies the axioms, and is fast to compute; moreover, we propose CrossSpot, an algorithm to spot dense blocks that are worth inspecting, typically indicating fraud or some other noteworthy deviation from the usual, and sort them in the order of importance (“suspiciousness”). Finally, we apply CrossSpot to the real data, where it improves the F1 score over previous techniques by 68 percent and finds suspicious behavioral patterns in social datasets spanning 0.3 billion posts.",
"title": ""
},
{
"docid": "7704eb15f3c576e2575e18613ce312df",
"text": "Objects for detection usually have distinct characteristics in different sub-regions and different aspect ratios. However, in prevalent two-stage object detection methods, Region-of-Interest (RoI) features are extracted by RoI pooling with little emphasis on these translation-variant feature components. We present feature selective networks to reform the feature representations of RoIs by exploiting their disparities among sub-regions and aspect ratios. Our network produces the sub-region attention bank and aspect ratio attention bank for the whole image. The RoI-based sub-region attention map and aspect ratio attention map are selectively pooled from the banks, and then used to refine the original RoI features for RoI classification. Equipped with a lightweight detection subnetwork, our network gets a consistent boost in detection performance based on general ConvNet backbones (ResNet-101, GoogLeNet and VGG-16). Without bells and whistles, our detectors equipped with ResNet-101 achieve more than 3% mAP improvement compared to counterparts on PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO datasets.",
"title": ""
},
{
"docid": "f5ba54c76166eed39da96f86a8bbd2a1",
"text": "The digital divide refers to the separation between those who have access to digital information and communications technology (ICT) and those who do not. Many believe that universal access to ICT would bring about a global community of interaction, commerce, and learning resulting in higher standards of living and improved social welfare. However, the digital divide threatens this outcome, leading many public policy makers to debate the best way to bridge the divide. Much of the research on the digital divide focuses on first order effects regarding who has access to the technology, but some work addresses the second order effects of inequality in the ability to use the technology among those who do have access. In this paper, we examine both first and second order effects of the digital divide at three levels of analysis the individual level, the organizational level, and the global level. At each level, we survey the existing research noting the theoretical perspective taken in the work, the research methodology employed, and the key results that were obtained. We then suggest a series of research questions at each level of analysis to guide researchers seeking to further examine the digital divide and how it impacts citizens, managers, and economies.",
"title": ""
},
{
"docid": "698abf5788520934edfbee8f74154825",
"text": "A near-regular texture deviates geometrically and photometrically from a regular congruent tiling. Although near-regular textures are ubiquitous in the man-made and natural world, they present computational challenges for state of the art texture analysis and synthesis algorithms. Using regular tiling as our anchor point, and with user-assisted lattice extraction, we can explicitly model the deformation of a near-regular texture with respect to geometry, lighting and color. We treat a deformation field both as a function that acts on a texture and as a texture that is acted upon, and develop a multi-modal framework where each deformation field is subject to analysis, synthesis and manipulation. Using this formalization, we are able to construct simple parametric models to faithfully synthesize the appearance of a near-regular texture and purposefully control its regularity.",
"title": ""
}
] |
scidocsrr
|
e6d071e6e5af864fea7e80e4f0cde8a5
|
The Hidden Geometry of Deformed Grids
|
[
{
"docid": "cd068158b6bebadfb8242b6412ec5bbb",
"text": "artefacts, 65–67 built environments and, 67–69 object artefacts, 65–66 structuralism and, 66–67 See also Non–discursive technique Asymmetry, 88–89, 91 Asynchronous systems, 187 Autonomous architecture, 336–338",
"title": ""
}
] |
[
{
"docid": "d7bbccdf4b93cc9722b1efcbb8013024",
"text": "OBJECTIVE\nThe aim of the study was to develop and validate, by consensus, the construct and content of an observations chart for nurses incorporating a modified early warning scoring (MEWS) system for physiological parameters to be used for bedside monitoring on general wards in a public hospital in South Africa.\n\n\nMETHODS\nDelphi and modified face-to-face nominal group consensus methods were used to develop and validate a prototype observations chart that incorporated an existing UK MEWS. This informed the development of the Cape Town ward MEWS chart.\n\n\nPARTICIPANTS\nOne specialist anaesthesiologist, one emergency medicine specialist, two critical care nurses and eight senior ward nurses with expertise in bedside monitoring (N = 12) were purposively sampled for consensus development of the MEWS. One general surgeon declined and one neurosurgeon replaced the emergency medicine specialist in the final round.\n\n\nRESULTS\nFive consensus rounds achieved ≥70% agreement for cut points in five of seven physiological parameters respiratory and heart rates, systolic BP, temperature and urine output. For conscious level and oxygen saturation a relaxed rule of <70% agreement was applied. A reporting algorithm was established and incorporated in the MEWS chart representing decision rules determining the degree of urgency. Parameters and cut points differed from those in MEWS used in developed countries.\n\n\nCONCLUSIONS\nA MEWS for developing countries should record at least seven parameters. Experts from developing countries are best placed to stipulate cut points in physiological parameters. Further research is needed to explore the ability of the MEWS chart to identify physiological and clinical deterioration.",
"title": ""
},
{
"docid": "6cdd6ff86c085cad630ae278ca964ecd",
"text": "Parametric statistical models of continuous or discrete valued data are often not properly normalized, that is, they do not integrate or sum to unity. The normalization is essential for maximum likelihood estimation. While in principle, models can always be normalized by dividing them by their integral or sum (their partition function), this can in practice be extremely difficult. We have been developing methods for the estimation of unnormalized models which do not approximate the partition function using numerical integration. We review these methods, score matching and noise-contrastive estimation, point out extensions and connections both between them and methods by other authors, and discuss their pros and cons.",
"title": ""
},
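As a concrete illustration of the noise-contrastive estimation idea reviewed in the passage above, the sketch below fits a deliberately unnormalized 1-D Gaussian model by logistic discrimination between data and noise samples. The model form, the noise distribution, the learning rate, and the iteration count are arbitrary choices made for the example, not taken from the paper.

```python
import numpy as np

# Toy noise-contrastive estimation (NCE): fit the unnormalized model
#   log phi(x; theta) = -0.5 * b * (x - m)**2 + c
# where c acts as a learned negative log-partition-function.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=5000)    # samples from the unknown density
noise = rng.normal(0.0, 3.0, size=5000)   # samples from a known noise density

def log_noise(x):
    return -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2.0 * np.pi))

def log_model(x, th):
    m, b, c = th
    return -0.5 * b * (x - m) ** 2 + c

def grad(x, label, th):
    """Gradient of the NCE objective: (label - sigma(G)) * d log_model / d theta,
    where G(x) = log_model(x) - log_noise(x) and label is 1 for data, 0 for noise."""
    m, b, c = th
    g = log_model(x, th) - log_noise(x)
    resid = label - 1.0 / (1.0 + np.exp(-g))
    return np.array([(resid * b * (x - m)).mean(),
                     (resid * -0.5 * (x - m) ** 2).mean(),
                     resid.mean()])

theta = np.array([0.0, 1.0, 0.0])         # initial m, b, c
for _ in range(2000):                     # plain gradient ascent
    theta += 0.05 * (grad(data, 1.0, theta) + grad(noise, 0.0, theta))

print(theta)  # roughly m ~ 2, b ~ 1, c ~ -0.92 = -log(sqrt(2*pi))
```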
{
"docid": "f7c4b71b970b7527cd2650ce1e05ab1b",
"text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.",
"title": ""
},
{
"docid": "299242a092512f0e9419ab6be13f9b93",
"text": "In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset.\n We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.",
"title": ""
},
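To make the first of the two prefetching mechanisms in the passage above concrete ("learning what to fetch from the user's recent movements"), here is a deliberately minimal first-order Markov sketch. The class name, the move vocabulary, and the single-step history are hypothetical simplifications, not ForeCache's actual prediction engine.

```python
from collections import Counter, defaultdict

class MovePrefetcher:
    """Toy prefetcher: count how often each movement followed the current
    one, then prefetch in the direction that most often came next."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.last_move = None

    def observe(self, move):                 # e.g. "up", "left", "zoom_in"
        if self.last_move is not None:
            self.transitions[self.last_move][move] += 1
        self.last_move = move

    def predict(self):
        counts = self.transitions.get(self.last_move)
        if not counts:
            return None
        return counts.most_common(1)[0][0]   # most frequent follow-up move

p = MovePrefetcher()
for m in ["right", "right", "down", "right", "right", "down"]:
    p.observe(m)
print(p.predict())   # after "down" this user usually went "right"
```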
{
"docid": "d4563e034ae0fb98f037625ca1b5b50a",
"text": "This book focuses on the super resolution of images and video. The authors’ use of the term super resolution (SR) is used to describe the process of obtaining a high resolution (HR) image, or a sequence of HR images, from a set of low resolution (LR) observations. This process has also been referred to in the literature as resolution enhancement (RE). SR has been applied primarily to spatial and temporal RE, but also to hyperspectral image enhancement. This book concentrates on motion based spatial RE, although the authors also describe motion free and hyperspectral image SR problems. Also examined is the very recent research area of SR for compression, which consists of the intentional downsampling, during pre-processing, of a video sequence to be compressed and the application of SR techniques, during post-processing, on the compressed sequence. It is clear that there is a strong interplay between the tools and techniques developed for SR and a number of other inverse problems encountered in signal processing (e.g., image restoration, motion estimation). SR techniques are being applied to a variety of fields, such as obtaining improved still images from video sequences (video printing), high definition television, high performance color Liquid Crystal Display (LCD) screens, improvement of the quality of color images taken by one CCD, video surveillance, remote sensing, and medical imaging. The authors believe that the SR/RE area has matured enough to develop a body of knowledge that can now start to provide useful and practical solutions to challenging real problems and that SR techniques can be an integral part of an image and video codec and can drive the development of new coder-decoders (codecs) and standards.",
"title": ""
},
{
"docid": "0619308f0a79fb33d91a3a8db2a0db14",
"text": "FPGA CAD tool parameters controlling synthesis optimizations, place and route effort, mapping criteria along with user-supplied physical constraints can affect timing results of the circuit by as much as 70% without any change in original source code. A correct selection of these parameters across a diverse set of benchmarks with varying characteristics and design goals is challenging. The sheer number of parameters and option values that can be selected is large (thousands of combinations for modern CAD tools) with often conflicting interactions. In this paper, we present InTime, a machine-learning approach supported by a cloud-based (or cluster-based) compilation infrastructure for automating the selection of these parameters effectively to minimize timing costs. InTime builds a database of results from a series of preliminary runs based on canned configurations of CAD options. It then learns from these runs to predict the next series of CAD tool options to improve timing results. Towards the end, we rely on a limited degree of statistical sampling of certain options like placer and synthesis seeds to further tighten results. Using our approach, we show 70% reduction in final timing results across industrial benchmark problems for the Altera CAD flow. This is 30% better than vendor-supplied design space exploration tools that attempts a similar optimization using canned heuristics.",
"title": ""
},
{
"docid": "508fb3c75f0d92ae27b9c735c02d66d6",
"text": "The remarkable developmental potential and replicative capacity of human embryonic stem (ES) cells promise an almost unlimited supply of specific cell types for transplantation therapies. Here we describe the in vitro differentiation, enrichment, and transplantation of neural precursor cells from human ES cells. Upon aggregation to embryoid bodies, differentiating ES cells formed large numbers of neural tube–like structures in the presence of fibroblast growth factor 2 (FGF-2). Neural precursors within these formations were isolated by selective enzymatic digestion and further purified on the basis of differential adhesion. Following withdrawal of FGF-2, they differentiated into neurons, astrocytes, and oligodendrocytes. After transplantation into the neonatal mouse brain, human ES cell–derived neural precursors were incorporated into a variety of brain regions, where they differentiated into both neurons and astrocytes. No teratoma formation was observed in the transplant recipients. These results depict human ES cells as a source of transplantable neural precursors for possible nervous system repair.",
"title": ""
},
{
"docid": "9b167e23bbe72f8ff0da12d43f55b33c",
"text": "Appropriately planned vegan diets can satisfy nutrient needs of infants. The American Dietetic Association and The American Academy of Pediatrics state that vegan diets can promote normal infant growth. It is important for parents to provide appropriate foods for vegan infants, using guidelines like those in this article. Key considerations when working with vegan families include composition of breast milk from vegan women, appropriate breast milk substitutes, supplements, type and amount of dietary fat, and solid food introduction. Growth of vegan infants appears adequate with post-weaning growth related to dietary adequacy. Breast milk composition is similar to that of non-vegetarians except for fat composition. For the first 4 to 6 months, breast milk should be the sole food with soy-based infant formula as an alternative. Commercial soymilk should not be the primary beverage until after age 1 year. Breastfed vegan infants may need supplements of vitamin B-12 if maternal diet is inadequate; older infants may need zinc supplements and reliable sources of iron and vitamins D and B-12. Timing of solid food introduction is similar to that recommended for non-vegetarians. Tofu, dried beans, and meat analogs are introduced as protein sources around 7-8 months. Vegan diets can be planned to be nutritionally adequate and support growth for infants.",
"title": ""
},
{
"docid": "dcece9a321b4483de7327de29a641fd2",
"text": "A class of optimal control problems for quasilinear elliptic equations is considered, where the coefficients of the elliptic differential operator depend on the state function. Firstand second-order optimality conditions are discussed for an associated control-constrained optimal control problem. In particular, the Pontryagin maximum principle and second-order sufficient optimality conditions are derived. One of the main difficulties is the non-monotone character of the state equation.",
"title": ""
},
{
"docid": "51ece87cfa463cd76c6fd60e2515c9f4",
"text": "In a 1998 speech before the California Science Center in Los Angeles, then US VicePresident Al Gore called for a global undertaking to build a multi-faceted computing system for education and research, which he termed “Digital Earth.” The vision was that of a system providing access to what is known about the planet and its inhabitants’ activities – currently and for any time in history – via responses to queries and exploratory tools. Furthermore, it would accommodate modeling extensions for predicting future conditions. Organized efforts towards realizing that vision have diminished significantly since 2001, but progress on key requisites has been made. As the 10 year anniversary of that influential speech approaches, we re-examine it from the perspective of a systematic software design process and find the envisioned system to be in many respects inclusive of concepts of distributed geolibraries and digital atlases. A preliminary definition for a particular digital earth system as: “a comprehensive, distributed geographic information and knowledge organization system,” is offered and discussed. We suggest that resumption of earlier design and focused research efforts can and should be undertaken, and may prove a worthwhile “Grand Challenge” for the GIScience community.",
"title": ""
},
{
"docid": "2019018e22e8ebc4c1546c87f36e31e2",
"text": "Many alternative modulation schemes have been investigated to replace OFDM for radio systems. But they all have some weak points. In this paper, we present a novel modulation scheme, which minimizes the predecessors' drawbacks, while still keeping their advantages.",
"title": ""
},
{
"docid": "260f7258c3739efec1910028ec429471",
"text": "Cryptography is considered to be a disciple of science of achieving security by converting sensitive information to an un-interpretable form such that it cannot be interpreted by anyone except the transmitter and intended recipient. An innumerable set of cryptographic schemes persist in which each of it has its own affirmative and feeble characteristics. In this paper we have we have developed a traditional or character oriented Polyalphabetic cipher by using a simple algebraic equation. In this we made use of iteration process and introduced a key K0 obtained by permuting the elements of a given key seed value. This key strengthens the cipher and it does not allow the cipher to be broken by the known plain text attack. The cryptanalysis performed clearly indicates that the cipher is a strong one.",
"title": ""
},
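The abstract does not give the algebraic equation or the exact permutation used to form the key K0, so the sketch below is only a generic Vigenère-style stand-in: a working key is derived by deterministically permuting a key seed and then combined with the plaintext modulo 26. All names and the uppercase A-Z restriction are assumptions for the example.

```python
import random

def derive_key(seed, length):
    """Derive a working key K0 by deterministically permuting the characters
    of a key seed (an illustrative stand-in for the unspecified permutation)."""
    chars = list(seed)
    random.Random(sum(ord(c) for c in seed)).shuffle(chars)
    return (chars * (length // len(chars) + 1))[:length]

def encrypt(plain, seed):
    key = derive_key(seed, len(plain))
    return "".join(chr((ord(p) - 65 + ord(k) - 65) % 26 + 65)
                   for p, k in zip(plain, key))

def decrypt(cipher, seed):
    key = derive_key(seed, len(cipher))
    return "".join(chr((ord(c) - 65 - (ord(k) - 65)) % 26 + 65)
                   for c, k in zip(cipher, key))

msg = "POLYALPHABETICCIPHER"          # uppercase letters only
ct = encrypt(msg, "KEYSEED")
print(ct, decrypt(ct, "KEYSEED"))     # round-trips back to msg
```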
{
"docid": "46613dd249ed10d84b7be8c1b46bf5b4",
"text": "Today, a predictive controller becomes one of the state of the art in power electronics control techniques. The performance of this powerful control approach will be pushed forward by simplifying the main control criterion and objective function, and decreasing the number of calculations per sampling time. Recently, predictive control has been incorporated in the Z-source inverter (ZSI) family. For example, in quasi ZSI, the inverter capacitor voltage, inductor current, and output load currents are controlled to their setting points through deciding the required state; active or shoot through. The proposed algorithm reduces the number of calculations, where it decides the shoot-through (ST) case without checking the other possible states. The ST case is roughly optimized every two sampling periods. Through the proposed strategy, about 50% improvement in the computational power has been achieved as compared with the previous algorithm. Also, the objective function for the proposed algorithm consists of one weighting factor for the capacitor voltage without involving the inductor current term in the main objective function. The proposed algorithm is investigated with the simulation results based on MATLAB/SIMULINK software. A prototype of qZSI is constructed in the laboratory to obtain the experimental results using the Digital Signal Processor F28335.",
"title": ""
},
{
"docid": "ca834698dfca01d82e9ac4d0fd69eb59",
"text": "*Correspondence: Aryadeep Roychoudhury, Post Graduate Department of Biotechnology, St. Xavier’s College (Autonomous), 30, Mother Teresa Sarani, Kolkata 700016, West Bengal, India e-mail: [email protected] Reactive oxygen species (ROS) were initially recognized as toxic by-products of aerobic metabolism. In recent years, it has become apparent that ROS plays an important signaling role in plants, controlling processes such as growth, development and especially response to biotic and abiotic environmental stimuli. The major members of the ROS family include free radicals like O•− 2 , OH • and non-radicals like H2O2 and O2. The ROS production in plants is mainly localized in the chloroplast, mitochondria and peroxisomes. There are secondary sites as well like the endoplasmic reticulum, cell membrane, cell wall and the apoplast. The role of the ROS family is that of a double edged sword; while they act as secondary messengers in various key physiological phenomena, they also induce oxidative damages under several environmental stress conditions like salinity, drought, cold, heavy metals, UV irradiation etc., when the delicate balance between ROS production and elimination, necessary for normal cellular homeostasis, is disturbed. The cellular damages are manifested in the form of degradation of biomolecules like pigments, proteins, lipids, carbohydrates, and DNA, which ultimately amalgamate in plant cellular death. To ensure survival, plants have developed efficient antioxidant machinery having two arms, (i) enzymatic components like superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX), guaiacol peroxidase (GPX), glutathione reductase (GR), monodehydroascorbate reductase (MDHAR), and dehydroascorbate reductase (DHAR); (ii) non-enzymatic antioxidants like ascorbic acid (AA), reduced glutathione (GSH), α-tocopherol, carotenoids, flavonoids, and the osmolyte proline. These two components work hand in hand to scavenge ROS. In this review, we emphasize on the different types of ROS, their cellular production sites, their targets, and their scavenging mechanism mediated by both the branches of the antioxidant systems, highlighting the potential role of antioxidants in abiotic stress tolerance and cellular survival. Such a comprehensive knowledge of ROS action and their regulation on antioxidants will enable us to develop strategies to genetically engineer stress-tolerant plants.",
"title": ""
},
{
"docid": "d676b25f9704fe89d5d8fe929c639829",
"text": "The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but also cloud infrastructure that was traditionally limited to single provider data centers is now evolving. In this paper, we firstly discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas, such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges thatwill need to be addressed for realising the potential of next generation cloud systems. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "52017fa7d6cf2e6a18304b121225fc6f",
"text": "In comparison to dense matrices multiplication, sparse matrices multiplication real performance for CPU is roughly 5–100 times lower when expressed in GFLOPs. For sparse matrices, microprocessors spend most of the time on comparing matrices indices rather than performing floating-point multiply and add operations. For 16-bit integer operations, like indices comparisons, computational power of the FPGA significantly surpasses that of CPU. Consequently, this paper presents a novel theoretical study how matrices sparsity factor influences the indices comparison to floating-point operation workload ratio. As a result, a novel FPGAs architecture for sparse matrix-matrix multiplication is presented for which indices comparison and floating-point operations are separated. We also verified our idea in practice, and the initial implementations results are very promising. To further decrease hardware resources required by the floating-point multiplier, a reduced width multiplication is proposed in the case when IEEE-754 standard compliance is not required.",
"title": ""
},
{
"docid": "10b857d497759f7b49d35155e79734f9",
"text": "Disclaimer Mention of any company or product does not constitute endorsement by the National Institute for Occupational Safety and Health (NIOSH). In addition, citations to Web sites external to NIOSH do not constitute NIOSH endorsement of the sponsoring organizations or their programs or products. Furthermore, NIOSH is not responsible for the content of these Web sites. All Web addresses referenced in this document were accessible as of the publication date. To receive documents or other information about occupational safety and health topics, contact NIOSH at ACPH air changes per hour ACGIH American Conference of Governmental Industrial Hygienists CT computed tomography HEPA high efficiency particulate air HVAC heating, ventilation, and air conditioning IARC International Agency for Research on Cancer LEV local exhaust ventilation LHD load-haul-dump MSHA Mine Safety and Health Administration NIOSH National Institute for Occupational Safety and Health OASIS overhead air supply island system PDM personal dust monitor pDR personal DataRAM PEL permissible exposure limit PMF progressive massive fibrosis PPE personal protective equipment PVC poly vinyl chloride TEOM tapered-element oscillating microbalance TMVS total mill ventilation system XRD X-ray diffraction UNIT OF MEASURE ABBREVIATIONS USED IN THIS REPORT cfm cubic foot per minute fpm foot per minute gpm gallon per minute in w.g. inches water gauge lpm liter per minute mg/m 3 milligram per cubic meter mm millimeter mph miles per hour µg/m 3 microgram per cubic meter psi pound-force per square inch INTRODUCTION Respirable silica dust exposure has long been known to be a serious health threat to workers in many industries. Overexposure to respirable silica dust can lead to the development of silicosis— a lung disease that can be disabling and fatal in its most severe form. Once contracted, there is no cure for silicosis so the goal must be to prevent development by limiting a worker's exposure to respirable silica dust. In addition, the International Agency for Research on Cancer (IARC) has concluded that there is sufficient evidence to classify silica as a human carcinogen.",
"title": ""
},
{
"docid": "f3cd5e9a47f5a693fa29c7f03afe8ecf",
"text": "Cloud computing provides a revolutionary model for the deployment of enterprise applications and Web services alike. In this new model, cloud users save on the cost of purchasing and managing base infrastructure, while the cloud providers save on the cost of maintaining underutilized CPU, memory, and network resources. In migrating to this new model, users face a variety of issues. Commercial clouds provide several support models to aide users in resolving the reported issues This paper arises from our quest to understand how to design IaaS support models for more efficient user troubleshooting. Using a data driven approach, we start our exploration into this issue with an investigation into the problems encountered by users and the methods utilized by the cloud support’s staff to resolve these problems. We examine message threads appearing in the forum of a large IaaS provider over a 3 year period. We argue that the lessons derived from this study point to a set of principles that future IaaS offerings can implement to provide users with a more efficient support model. This data driven approach enables us to propose a set of principles that are pertinent to the experiences of users and that we believe could vastly improve the SLA observed by the users.",
"title": ""
},
{
"docid": "c24e523997eac6d1be9e2a2f38150fc0",
"text": "We address the assessment and improvement of the software maintenance function by proposing improvements to the software maintenance standards and introducing a proposed maturity model for daily software maintenance activities: Software Maintenance Maturity Model (SM). The software maintenance function suffers from a scarcity of management models to facilitate its evaluation, management, and continuous improvement. The SM addresses the unique activities of software maintenance while preserving a structure similar to that of the CMMi4 maturity model. It is designed to be used as a complement to this model. The SM is based on practitioners experience, international standards, and the seminal literature on software maintenance. We present the models purpose, scope, foundation, and architecture, followed by its initial validation.",
"title": ""
},
{
"docid": "751e95c13346b18714c5ce5dcb4d1af2",
"text": "Purpose – The purpose of this paper is to propose how to minimize the risks of implementing business process reengineering (BPR) by measuring readiness. For this purpose, the paper proposes an assessment approach for readiness in BPR efforts based on the critical success and failure factors. Design/methodology/approach – A relevant literature review, which investigates success and failure indicators in BPR efforts is carried out and a new categorized list of indicators are proposed. This is a base for conducting a survey to measure the BPR readiness, which has been run in two companies and compared based on a diamond model. Findings – In this research, readiness indicators are determined based on critical success and failure factors. The readiness indicators include six categories. The first five categories, egalitarian leadership, collaborative working environment, top management commitment, supportive management, and use of information technology are positive indicators. The sixth category, resistance to change has a negative role. This paper reports survey results indicating BPR readiness in two Iranian companies. After comparing the position of the two cases, the paper offers several guidelines for amplifying the success points and decreasing failure points and hence, increasing the rate of success. Originality/value – High-failure rate of BPR has been introduced as a main barrier in reengineering processes. In addition, it makes a fear, which in turn can be a failure factor. This paper tries to fill the gap in the literature on decreasing risk in BPR projects by introducing a BPR readiness assessment approach. In addition, the proposed questionnaire is generic and can be utilized in a facilitated manner.",
"title": ""
}
] |
scidocsrr
|
c7b45f76faf47af8b3b846d7252be795
|
Strategic Human Resource Management: Insights from the International Hotel Industry
|
[
{
"docid": "4ab8913fff86d8a737ed62c56fe2b39d",
"text": "This paper draws on the social and behavioral sciences in an endeavor to specify the nature and microfoundations of the capabilities necessary to sustain superior enterprise performance in an open economy with rapid innovation and globally dispersed sources of invention, innovation, and manufacturing capability. Dynamic capabilities enable business enterprises to create, deploy, and protect the intangible assets that support superior longrun business performance. The microfoundations of dynamic capabilities—the distinct skills, processes, procedures, organizational structures, decision rules, and disciplines—which undergird enterprise-level sensing, seizing, and reconfiguring capacities are difficult to develop and deploy. Enterprises with strong dynamic capabilities are intensely entrepreneurial. They not only adapt to business ecosystems, but also shape them through innovation and through collaboration with other enterprises, entities, and institutions. The framework advanced can help scholars understand the foundations of long-run enterprise success while helping managers delineate relevant strategic considerations and the priorities they must adopt to enhance enterprise performance and escape the zero profit tendency associated with operating in markets open to global competition. Copyright 2007 John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "403369e9f07d6c963ab8f252e8035c3d",
"text": "Purpose – Business Process Management (BPM) requires a holistic perspective that includes managing the culture of an organization to achieve objectives of efficient and effective business processes. Still, the specifics of a BPM-supportive organizational culture have not been examined so far. Thus, the purpose of our paper is to identify the characteristics of a cultural setting supportive of BPM objectives. Design/methodology/approach – We examine the constituent values of a BPM-supportive cultural setting through a global Delphi study with BPM experts from academia and practice and explore these values in a cultural value framework. Findings – We empirically identify and define four key cultural values supporting BPM, viz., customer orientation, excellence, responsibility, and teamwork. We discuss the relationships between these values and identify a particular challenge in managing these seemingly competing values. Research implications – The identification and definition of these values represents a first step towards the operationalization (and empirical analysis) of what has been identified as the concept of BPM culture, i.e. a culture supportive of achieving BPM objectives. Practical implications – Identifying these cultural values provides the basis for developing an instrument that can measure how far an existing cultural context is supportive of BPM. This, in turn, is fundamental for identifying measures towards achieving a BPM culture as a necessary, yet not sufficient means to obtain BPM success. Originality/value – We examine which cultural values create an environment receptive for BPM and, thus, specify the important theoretical construct BPM culture. In addition, we raise awareness for realizing these values in a BPM context.",
"title": ""
},
{
"docid": "0c42c99a4d80edf11386909a2582459a",
"text": "Robustness or stability of feature selection techniques is a topic of recent interest, and is an important issue when selected feature subsets are subsequently analysed by domain experts to gain more insight into the problem modelled. In this work, we investigate the use of ensemble feature selection techniques, where multiple feature selection methods are combined to yield more robust results. We show that these techniques show great promise for high-dimensional domains with small sample sizes, and provide more robust feature subsets than a single feature selection technique. In addition, we also investigate the effect of ensemble feature selection techniques on classification performance, giving rise to a new model selection strategy.",
"title": ""
},
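As a small illustration of the ensemble idea described in the passage above, the sketch below ranks features with three different selectors on synthetic data and aggregates the ranks by simple averaging. The choice of selectors, the rank-averaging rule, and the cut-off of five features are arbitrary assumptions, not the specific aggregation studied in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, f_classif
from sklearn.ensemble import RandomForestClassifier

# Toy ensemble feature selection: score features with three selectors,
# convert scores to ranks, and average the ranks for a more robust ordering.
X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=5, random_state=0)

scores = [
    mutual_info_classif(X, y, random_state=0),
    f_classif(X, y)[0],
    RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
]
# Rank 0 = best feature under each selector; then average the three ranks.
ranks = np.mean([np.argsort(np.argsort(-s)) for s in scores], axis=0)
selected = np.argsort(ranks)[:5]
print("selected features:", selected)
```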
{
"docid": "45079629c4bc09cc8680b3d9ac325112",
"text": "Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Beside gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network.We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results.",
"title": ""
},
{
"docid": "9864bce09ff74218fb817aab62e70081",
"text": "Nowadays, sentiment analysis methods become more and more popular especially with the proliferation of social media platform users number. In the same context, this paper presents a sentiment analysis approach which can faithfully translate the sentimental orientation of Arabic Twitter posts, based on a novel data representation and machine learning techniques. The proposed approach applied a wide range of features: lexical, surface-form, syntactic, etc. We also made use of lexicon features inferred from two Arabic sentiment words lexicons. To build our supervised sentiment analysis system, we use several standard classification methods (Support Vector Machines, K-Nearest Neighbour, Naïve Bayes, Decision Trees, Random Forest) known by their effectiveness over such classification issues.\n In our study, Support Vector Machines classifier outperforms other supervised algorithms in Arabic Twitter sentiment analysis. Via an ablation experiments, we show the positive impact of lexicon based features on providing higher prediction performance.",
"title": ""
},
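A minimal sketch of the kind of pipeline the passage above describes: surface n-gram features combined with counts from a sentiment lexicon and fed to a linear SVM. The two-word toy lexicons, the toy tweets, and all parameter choices are invented for the example and are not the paper's feature set.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

POS, NEG = {"جميل", "رائع"}, {"سيء", "فشل"}   # tiny stand-in lexicons

class LexiconCounts(BaseEstimator, TransformerMixin):
    """Two features per tweet: counts of positive and negative lexicon words."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([[sum(w in POS for w in t.split()),
                          sum(w in NEG for w in t.split())] for t in X])

model = Pipeline([
    ("features", FeatureUnion([
        ("ngrams", TfidfVectorizer(ngram_range=(1, 2))),   # surface features
        ("lexicon", LexiconCounts()),                       # lexicon features
    ])),
    ("clf", LinearSVC()),
])

tweets = ["الفيلم جميل رائع", "الخدمة سيء فشل", "يوم عادي", "منتج رائع"]
labels = ["pos", "neg", "neu", "pos"]
model.fit(tweets, labels)
print(model.predict(["تجربة رائع"]))   # most likely "pos" on this toy data
```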
{
"docid": "a45b098b22e8d84b484617d276874601",
"text": "Subjectivity detection is the task of identifying objective and subjective sentences. Objective sentences are those which do not exhibit any sentiment. So, it is desired for a sentiment analysis engine to find and separate the objective sentences for further analysis, e.g., polarity detection. In subjective sentences, opinions can often be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about.",
"title": ""
},
{
"docid": "2ae7d7272c2cf82a3488e0b83b13f694",
"text": "Valgus extension osteotomy (VGEO) is a salvage procedure for 'hinge abduction' in Perthes' disease. The indications for its use are pain and fixed deformity. Our study shows the clinical results at maturity of VGEO carried out in 48 children (51 hips) and the factors which influence subsequent remodelling of the hip. After a mean follow-up of ten years, total hip replacement has been carried out in four patients and arthrodesis in one. The average Iowa Hip Score in the remainder was 86 (54 to 100). Favourable remodelling of the femoral head was seen in 12 hips. This was associated with three factors at surgery; younger age (p = 0.009), the phase of reossification (p = 0.05) and an open triradiate cartilage (p = 0.0007). Our study has shown that, in the short term, VGEO relieves pain and corrects deformity; as growth proceeds it may produce useful remodelling in this worst affected subgroup of children with Perthes' disease.",
"title": ""
},
{
"docid": "2353942ce5857a8d7163fce6cb00d509",
"text": "Here, we present a general framework for combining visual odometry and lidar odometry in a fundamental and first principle method. The method shows improvements in performance over the state of the art, particularly in robustness to aggressive motion and temporary lack of visual features. The proposed on-line method starts with visual odometry to estimate the ego-motion and to register point clouds from a scanning lidar at a high frequency but low fidelity. Then, scan matching based lidar odometry refines the motion estimation and point cloud registration simultaneously.We show results with datasets collected in our own experiments as well as using the KITTI odometry benchmark. Our proposed method is ranked #1 on the benchmark in terms of average translation and rotation errors, with a 0.75% of relative position drift. In addition to comparison of the motion estimation accuracy, we evaluate robustness of the method when the sensor suite moves at a high speed and is subject to significant ambient lighting changes.",
"title": ""
},
{
"docid": "220f945f6b7cf220a88aeeaf46b8b4f9",
"text": "AIM\nTo explore the efficacy of tear trough deformity treatment with the use of hyaluronic acid gel or autologous fat for soft tissue augmentation and fat repositioning via arcus marginalis release.\n\n\nMATERIAL AND METHODS\nSeventy-eight patients with the tear trough were divided into three groups. Class I has tear trough without bulging orbital fat or excess of the lower eyelid skin. Class II is associated with mild to moderate orbital fat bulging, without excess of the lower eyelid skin. Class III is associated with severe orbital fat bulging and excess of the lower eyelid skin. Class I or II was treated using hyaluronic acid gel or autologous fat injections. Class III was treated with fat repositioning via arcus marginalis release. The patients with a deep nasojugal groove of class III were treated with injecting autologous fat into the tear trough during fat repositioning lower blepharoplasty as a way of supplementing the volume added by the repositioned fat.\n\n\nRESULTS\nSeventy-eight patients with tear trough deformity were confirmed from photographs taken before and after surgery. There were some complications, but all had complete resolution.\n\n\nCONCLUSIONS\nPatients with mild to moderate peri-orbital volume loss without severe orbital fat bulging may be good candidates for hyaluronic acid filler or fat grafting alone. However, patients with more pronounced deformities, severe orbital fat bulging and excess of the lower eyelid skin are often better served by fat repositioning via arcus marginalis release and fat grafting.",
"title": ""
},
{
"docid": "05610fd0e6373291bdb4bc28cf1c691b",
"text": "In this work, we acknowledge the need for software engineers to devise specialized tools and techniques for blockchain-oriented software development. Ensuring effective testing activities, enhancing collaboration in large teams, and facilitating the development of smart contracts all appear as key factors in the future of blockchain-oriented software development.",
"title": ""
},
{
"docid": "821b1e60e936b3f56031fae450f22dc8",
"text": "Conventional methods for seismic retrofitting of concrete columns include reinforcement with steel plates or steel frame braces, as well as cross-sectional increments and in-filled walls. However, these methods have some disadvantages, such as the increase in mass and the need for precise construction. Fiber-reinforced polymer (FRP) sheets for seismic strengthening of concrete columns using new light-weight composite materials, such as carbon fiber or glass fiber, have been developed, have excellent durability and performance, and are being widely applied to overcome the shortcomings of conventional seismic strengthening methods. Nonetheless, the FRP-sheet reinforcement method also has some drawbacks, such as the need for prior surface treatment, problems at joints, and relatively expensive material costs. In the current research, the structural and material properties associated with a new method for seismic strengthening of concrete columns using FRP were investigated. The new technique is a sprayed FRP system, achieved by mixing chopped glass and carbon fibers with epoxy and vinyl ester resin in the open air and randomly spraying the resulting mixture onto the uneven surface of the concrete columns. This paper reports on the seismic resistance of reinforced concrete columns controlled by shear strengthening using the sprayed FRP system. Five shear column specimens were designed, and then strengthened with sprayed FRP by using different combinations of short carbon or glass fibers and epoxy or vinyl ester resins. There was also a non-strengthened control specimen. Cyclic loading tests were carried out, and the ultimate load carrying capacity and deformation were investigated, as well as hysteresis in the lateral load-drift relationship. The results showed that shear strengths and deformation capacities of shear columns strengthened using sprayed FRP improved markedly, compared with those of the control column. The spraying FRP technique developed in this study can be practically and effectively used for the seismic strengthening of existing concrete columns.",
"title": ""
},
{
"docid": "0737e99613b83104bc9390a46fbc4aeb",
"text": "Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel and Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as wordcontext frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model’s learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some – but not all – downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.",
"title": ""
},
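For reference, the distance function of the Poincaré-ball model that this line of work embeds into (following Nickel and Kiela, 2017) can be written down in a few lines. The toy points below are only meant to show how distances grow rapidly near the boundary of the ball.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Distance in the Poincare ball:
        d(u, v) = arcosh(1 + 2*|u-v|^2 / ((1-|u|^2) * (1-|v|^2)))
    Points must lie strictly inside the unit ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / np.maximum(denom, eps))

root, child = np.array([0.0, 0.0]), np.array([0.3, 0.0])
leaf = np.array([0.9, 0.1])                       # near the boundary
print(poincare_distance(root, child))             # small
print(poincare_distance(root, leaf))              # much larger: distances
                                                  # blow up near the boundary
```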
{
"docid": "efc4af51a92facff03e1009b039139fe",
"text": "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.",
"title": ""
},
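For readers unfamiliar with the two quantities named in the passage above, they are commonly written as follows; the notation here is chosen only for exposition, and the paper's full decomposition of the ELBO also contains index-code mutual-information and dimension-wise KL terms.

```latex
% Total correlation of the aggregate posterior q(z):
\mathrm{TC}(z) \;=\; \mathrm{KL}\!\left( q(z) \,\Big\|\, \textstyle\prod_{j} q(z_j) \right)

% Mutual information gap (MIG) over ground-truth factors v_1, \dots, v_K,
% with j^{(k)} = \arg\max_j I(z_j; v_k):
\mathrm{MIG} \;=\; \frac{1}{K} \sum_{k=1}^{K} \frac{1}{H(v_k)}
  \left( I\big(z_{j^{(k)}}; v_k\big) \;-\; \max_{j \neq j^{(k)}} I\big(z_j; v_k\big) \right)
```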
{
"docid": "e49e74c4104116b54d49147028c3392d",
"text": "Defining hope as a cognitive set comprising agency (belief in one's capacity to initiate and sustain actions) and pathways (belief in one's capacity to generate routes) to reach goals, the Hope Scale was developed and validated previously as a dispositional self-report measure of hope (Snyder et al., 1991). The present 4 studies were designed to develop and validate a measure of state hope. The 6-item State Hope Scale is internally consistent and reflects the theorized agency and pathways components. The relationships of the State Hope Scale to other measures demonstrate concurrent and discriminant validity; moreover, the scale is responsive to events in the lives of people as evidenced by data gathered through both correlational and causal designs. The State Hope Scale offers a brief, internally consistent, and valid self-report measure of ongoing goal-directed thinking that may be useful to researchers and applied professionals.",
"title": ""
},
{
"docid": "f2ce432386b8f407c416ea3d95d58427",
"text": "The use of Computer Aided Design (CAD) in forensic science is not new. However CAD did not become a (quality) standard for crime scene sketching. If the crime scene sketch is an effective way to present measurements, it must respond to accuracy criteria to supplement the documentary work by note taking and crime scene photography. The forensic photography unit of the Zürich Police changed already some years ago from hand drawn crime scene sketches to CAD sketches. Meanwhile the technique is used regularly for all major crime scene work. Using the Rolleimetric MR-2 single-camera measuring system combined with commercial CAD-software, crime scene sketches of a high quality standard are obtained.",
"title": ""
},
{
"docid": "b98f653abda64241b3794427fbe03002",
"text": "Aims and Objectives. This paper provides an overview of the applicability of the PRECEDE-PROCEED Model to the development of targeted nursing led chronic illness interventions. Background. Changing health care practice is a complex and dynamic process that requires consideration of social, political, economic, and organisational factors. An understanding of the characteristics of the target population, health professionals, and organizations plus identification of the determinants for change are also required. Synthesizing this data to guide the development of an effective intervention is a challenging process. The PRECEDE-PROCEED Model has been used in global health care settings to guide the identification, planning, implementation, and evaluation of various health improvement initiatives. Design. Using a reflective case study approach, this paper examines the applicability of the PRECEDE-PROCEED Model to the development of targeted chronic care improvement interventions for two distinct Australian populations: a rapidly expanding and aging rural population with unmet palliative care needs and a disadvantaged urban community at higher risk of cardiovascular disease. Results. The PRECEDE-PROCEED Model approach demonstrated utility across diverse health settings in a systematic planning process. In environments characterized by increasing health care needs, limited resources, and growing community expectations, adopting planning tools such as PRECEDE-PROCEED Model at a local level can facilitate the development of the most effective interventions. Relevance to Clinical Practice. The PRECEDE-PROCEED Model is a strong theoretical model that guides the development of realistic nursing led interventions with the best chance of being successful in existing health care environments.",
"title": ""
},
{
"docid": "f7ba998d8f4eb51619673edb66f7b3e3",
"text": "We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation defined from inferred graph translations. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.",
"title": ""
},
{
"docid": "694a4039dba2354177ecda2e01c027c1",
"text": "This paper presents two pole shoe shapes used in interior permanent magnet (IPM) machines to produce a sinusoidal air-gap field. The first shape has an air-gap length which varies with the inverse cosine of the angle from the pole shoe middle. The second shape uses an arc with its centre offset from the origin. Although both designs are documented in the literature, no design rules exist regarding their optimum geometry for use in IPM machines. This paper corrects this by developing optimum ratios of the q-axis to d-axis air-gap lengths. The centre-offset arc design is improved by introducing flux barriers into its geometry through developing an optimum ratio of pole shoe width to permanent magnet (PM) width. Consequent pole rotors are also investigated and a third optimum ratio, the soft magnetic pole shoe angle to pole pitch, is developed. The three ratios will aid machine designers in their design work.",
"title": ""
},
{
"docid": "7095bf529a060dd0cd7eeb2910998cf8",
"text": "The proliferation of internet along with the attractiveness of the web in recent years has made web mining as the research area of great magnitude. Web mining essentially has many advantages which makes this technology attractive to researchers. The analysis of web user’s navigational pattern within a web site can provide useful information for applications like, server performance enhancements, restructuring a web site, direct marketing in ecommerce etc. The navigation paths may be explored based on some similarity criteria, in order to get the useful inference about the usage of web. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying K-means algorithm and suggest a method to compute the distance between sessions based on similarity of their web access path, which takes care of the issue of the user sessions that are of variable",
"title": ""
},
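The passage above leaves the path-similarity measure abstract, so here is one hypothetical way such a distance could look: a longest-common-subsequence based distance that naturally copes with sessions of different lengths. A modified K-means would then use this distance in place of Euclidean distance (typically with medoid-style centres). The function name and the toy sessions are assumptions, not the paper's actual formula.

```python
def session_distance(s1, s2):
    """Toy distance between two navigation paths (lists of page IDs):
    1 - |longest common subsequence| / max(len).  Handles sessions of
    different lengths."""
    m, n = len(s1), len(s2)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            lcs[i + 1][j + 1] = (lcs[i][j] + 1 if s1[i] == s2[j]
                                 else max(lcs[i][j + 1], lcs[i + 1][j]))
    return 1.0 - lcs[m][n] / max(m, n)

a = ["home", "products", "cart", "checkout"]
b = ["home", "products", "reviews", "cart"]
c = ["blog", "about"]
print(session_distance(a, b))   # small: similar paths
print(session_distance(a, c))   # 1.0: nothing in common
```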
{
"docid": "30ae1d2d45e11c8f6212ff0a54abec7a",
"text": "This paper describes the fifth year of the Sentiment Analysis in Twitter task. SemEval-2017 Task 4 continues with a rerun of the subtasks of SemEval-2016 Task 4, which include identifying the overall sentiment of the tweet, sentiment towards a topic with classification on a twopoint and on a five-point ordinal scale, and quantification of the distribution of sentiment towards a topic across a number of tweets: again on a two-point and on a five-point ordinal scale. Compared to 2016, we made two changes: (i) we introduced a new language, Arabic, for all subtasks, and (ii) we made available information from the profiles of the Twitter users who posted the target tweets. The task continues to be very popular, with a total of 48 teams participating this year.",
"title": ""
},
{
"docid": "9110970e05ed5f5365d613f6f8f2c8ba",
"text": "Abstrak –The objective of this paper is a new MeanMedian filtering for denoising extremely corrupted images by impulsive noise. Whenever an image is converted from one form to another, some of degradation occurs at the output. Improvement in the quality of these degraded images can be achieved by the application of Restoration and /or Enhancement techniques. Noise removing is one of the categories of Enhancement. Removing noise from the original signal is still a challenging problem. Mean filtering fails to effectively remove heavy tailed noise & performance poorly in the presence of signal dependent noise. The successes of median filters are edge preservation and efficient attenuation of impulsive noise. An important shortcoming of the median filter is that the output is one of the samples in the input window. Based on this mixture distributions are proposed to effectively remove impulsive noise characteristics. Finally, the results of comparative analysis of mean-median algorithm with mean, median filters for impulsive noise removal show a high efficiency of this approach relatively to other ones.",
"title": ""
}
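Since the passage above only describes the filter at a high level, the snippet below is a toy hybrid in the same spirit: pixels flagged as likely impulses (far from their local median) take the median value, and the rest are smoothed with a local mean. The 3x3 window, the threshold, and the ramp test image are arbitrary choices for the demonstration, not the exact filter proposed in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mean_median_denoise(img, threshold=0.2):
    """Toy mean-median hybrid: replace likely impulses with the 3x3 median,
    smooth everything else gently with a 3x3 mean."""
    med = median_filter(img, size=3)
    mean = uniform_filter(img, size=3)
    impulse = np.abs(img - med) > threshold   # pixel far from local median
    return np.where(impulse, med, mean)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))           # simple ramp image
noisy = clean.copy()
flips = rng.random(clean.shape) < 0.1                     # 10% impulse pixels
noisy[flips] = rng.choice([0.0, 1.0], size=flips.sum())   # salt-and-pepper
print(np.abs(noisy - clean).mean(),
      np.abs(mean_median_denoise(noisy) - clean).mean())  # error drops
```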
] |
scidocsrr
|
8c1e5063d774b846e6640cf96c2f012b
|
Measuring the quality of experience of HTTP video streaming
|
[
{
"docid": "220f19bb83b81862277ddf27b1c7d24c",
"text": "Many applications require fast data transfer over high speed and long distance networks. However, standard TCP fails to fully utilize the network capacity in high-speed and long distance networks due to its conservative congestion control (CC) algorithm. Some works have been proposed to improve the connection’s throughput by adopting more aggressive loss-based CC algorithms, which may severely decrease the throughput of regular TCP flows sharing the network path. On the other hand, pure delay-based approaches may not work well if they compete with loss-based flows. In this paper, we propose a novel Compound TCP (CTCP) approach, which is a synergy of delay-based and loss-based approach. More specifically, we add a scalable delay-based component into the standard TCP Reno congestion avoidance algorithm (a.k.a., the loss-based component). The sending rate of CTCP is controlled by both components. This new delay-based component can rapidly increase sending rate when the network path is under utilized, but gracefully retreat in a busy network when a bottleneck queue is built. Augmented with this delay-based component, CTCP provides very good bandwidth scalability and at the same time achieves good TCP-fairness. We conduct extensive packet level simulations and test our CTCP implementation on the Windows platform over a production high-speed network link in the Microsoft intranet. Our simulation and experiments results verify the properties of CTCP.",
"title": ""
}
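A highly simplified sketch of the synergy described in the passage above: the congestion window is the sum of a loss-based component and a delay-based component, and the delay component backs off once the estimated number of backlogged packets crosses a threshold. The constants, the update rules, and the synthetic RTT trace are illustrative assumptions and do not reproduce CTCP's exact parameters.

```python
# Toy window update in the spirit of a compound (loss + delay) approach.
def update(cwnd, dwnd, rtt, base_rtt, loss, gamma=30, k=0.75, alpha=8):
    win = cwnd + dwnd
    if loss:
        return win / 2.0, 0.0                 # multiplicative decrease on loss
    cwnd += 1.0 / win                         # Reno-style linear growth
    backlog = win * (1 - base_rtt / rtt)      # estimated packets queued in path
    if backlog < gamma:
        dwnd += alpha * (win ** k) / win      # scalable increase while underused
    else:
        dwnd = max(dwnd - (backlog - gamma), 0.0)   # graceful retreat
    return cwnd, dwnd

cwnd, dwnd = 10.0, 0.0
for step in range(200):
    rtt = 0.10 if cwnd + dwnd < 300 else 0.12       # queue starts building
    cwnd, dwnd = update(cwnd, dwnd, rtt, 0.10, loss=False)
print(round(cwnd, 1), round(dwnd, 1))
```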
] |
[
{
"docid": "528e16d5e3c4f5e7edc77d8e5960ba4f",
"text": "Nowadays, a large amount of documents is generated daily. These documents may contain some spelling errors which should be detected and corrected by using a proofreading tool. Therefore, the existence of automatic writing assistance tools such as spell-checkers/correctors could help to improve their quality. Spelling errors could be categorized into five categories. One of them is real-word errors, which are misspelled words that have been wrongly converted into another word in the language. Detection of such errors requires discourse analysis rather than just checking the word in a dictionary. We propose a discourse-aware discriminative model to improve the results of context-sensitive spell-checkers by reranking their resulted n-best list. We augment the proposed reranker into two existing context-sensitive spell-checker systems; one of them is based on statistical machine translation and the other one is based on language model. We choose the keywords of the whole document as contextual features of the model and improve the results of both systems by employing the features in a log-linear reranker system. We evaluated the system on two different languages: English and Persian. The results of the experiments in English language on the Wall street journal test set show improvements of 4.5% and 5.2% in detection and correction recall, respectively, in comparison to the baseline method. The mentioned improvement on recall metric was achieved with comparable precision. We also achieve state-of-the-art performance on the Persian language. .................................................................................................................................................................................",
"title": ""
},
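To make the reranking step in the passage above concrete, here is a toy log-linear reranker over a two-candidate n-best list, using the base model score plus a document-keyword-overlap feature. The feature set, the hand-set weights, and the example sentences are invented for illustration and are not the paper's trained model.

```python
def rerank(nbest, doc_keywords, weights):
    """Toy log-linear reranker: each candidate from the base spell-checker
    carries its model score; add a keyword-overlap feature and pick the
    argmax of the weighted feature sum."""
    def score(cand):
        model_score, text = cand
        overlap = sum(w in doc_keywords for w in text.split())
        feats = {"model": model_score, "keyword_overlap": overlap}
        return sum(weights[f] * v for f, v in feats.items())
    return max(nbest, key=score)[1]

doc_keywords = {"bank", "loan", "interest"}
nbest = [(-2.1, "he paid the lone back"),       # base system's best guess
         (-2.3, "he paid the loan back")]
print(rerank(nbest, doc_keywords, {"model": 1.0, "keyword_overlap": 0.5}))
# -> "he paid the loan back": the context keyword outweighs the small
#    drop in base model score
```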
{
"docid": "5422a4e5a82d0636c8069ec58c2753a2",
"text": "In this talk, I will focus on the applications and the latest development of deep learning technologies at Alibaba. More specifically, I will discuss (a) how to handle high dimensional data in deep learning and its application to recommender system, (b) the development of deep learning models for transfer learning and its application to image classification, (c) the development of combinatorial optimization techniques for DNN model compression and its application to large-scale image classification and object detection, and (d) the exploration of deep learning technique for combinatorial optimization and its application to the packing problem in shipping industry. I will conclude my talk with a discussion of new directions for deep learning that are under development at Alibaba.",
"title": ""
},
{
"docid": "840555a134e7606f1f3caa24786c6550",
"text": "Psychological research results have confirmed that people can have different emotional reactions to different visual stimuli. Several papers have been published on the problem of visual emotion analysis. In particular, attempts have been made to analyze and predict people’s emotional reaction towards images. To this end, different kinds of hand-tuned features are proposed. The results reported on several carefully selected and labeled small image data sets have confirmed the promise of such features. While the recent successes of many computer vision related tasks are due to the adoption of Convolutional Neural Networks (CNNs), visual emotion analysis has not achieved the same level of success. This may be primarily due to the unavailability of confidently labeled and relatively large image data sets for visual emotion analysis. In this work, we introduce a new data set, which started from 3+ million weakly labeled images of different emotions and ended up 30 times as large as the current largest publicly available visual emotion data set. We hope that this data set encourages further research on visual emotion analysis. We also perform extensive benchmarking analyses on this large data set using the state of the art methods including CNNs.",
"title": ""
},
{
"docid": "6a602e4f48c0eb66161bce46d53f0409",
"text": "In this paper, we propose three metrics for detecting botnets through analyzing their behavior. Our social infrastructure (i.e., the Internet) is currently experiencing the danger of bots' malicious activities as the scale of botnets increases. Although it is imperative to detect botnet to help protect computers from attacks, effective metrics for botnet detection have not been adequately researched. In this work we measure enormous amounts of traffic passing through the Asian Internet Interconnection Initiatives (AIII) infrastructure. To validate the effectiveness of our proposed metrics, we analyze measured traffic in three experiments. The experimental results reveal that our metrics are applicable for detecting botnets, but further research is needed to refine their performance",
"title": ""
},
{
"docid": "83f14923970c83a55152464179e6bae9",
"text": "Urine drug screening can detect cases of drug abuse, promote workplace safety, and monitor drugtherapy compliance. Compliance testing is necessary for patients taking controlled drugs. To order and interpret these tests, it is required to know of testing modalities, kinetic of drugs, and different causes of false-positive and false-negative results. Standard immunoassay testing is fast, cheap, and the preferred primarily test for urine drug screening. This method reliably detects commonly drugs of abuse such as opiates, opioids, amphetamine/methamphetamine, cocaine, cannabinoids, phencyclidine, barbiturates, and benzodiazepines. Although immunoassays are sensitive and specific to the presence of drugs/drug metabolites, false negative and positive results may be created in some cases. Unexpected positive test results should be checked with a confirmatory method such as gas chromatography/mass spectrometry. Careful attention to urine collection methods and performing the specimen integrity tests can identify some attempts by patients to produce false-negative test results.",
"title": ""
},
{
"docid": "d44daf0c7f045ef388d8b435a705e0b2",
"text": "Mapping the relationship between gene expression and psychopathology is proving to be among the most promising new frontiers for advancing the understanding, treatment, and prevention of mental disorders. Each cell in the human body contains some 23,688 genes, yet only a tiny fraction of a cell’s genes are active or “expressed” at any given moment. The interactions of biochemical, psychological, and environmental factors influencing gene expression are complex, yet relatively accessible technologies for assessing gene expression have allowed the identification of specific genes implicated in a range of psychiatric disorders, including depression, anxiety, and schizophrenia. Moreover, successful psychotherapeutic interventions have been shown to shift patterns of gene expression. Five areas of biological change in successful psychotherapy that are dependent upon precise shifts in gene expression are identified in this paper. Psychotherapy ameliorates (a) exaggerated limbic system responses to innocuous stimuli, (b) distortions in learning and memory, (c) imbalances between sympathetic and parasympathetic nervous system activity, (d) elevated levels of cortisol and other stress hormones, and (e) impaired immune functioning. The thesis of this paper is that psychotherapies which utilize non-invasive somatic interventions may yield greater precision and power in bringing about therapeutically beneficial shifts in gene expression that control these biological markers. The paper examines the manual stimulation of acupuncture points during psychological exposure as an example of such a somatic intervention. For each of the five areas, a testable proposition is presented to encourage research that compares acupoint protocols with conventional therapies in catalyzing advantageous shifts in gene expression.",
"title": ""
},
{
"docid": "9e1998a0df3258b444212e22d610e72f",
"text": "PRIOR WORK We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking accuracy. The modeling of occlusions has been mostly avoided due to its immense space of appearance variability. To address this curse of high dimensionality, we perform tracking in unconstrained images assuming non-face regions can be fully masked out. Along with recent breakthroughs in deep learning, we demonstrate that pixel-level facial segmentation is possible in real-time by repurposing convolutional neural networks designed originally for general semantic segmentation. We develop an efficient architecture based on a two-stream deconvolution network with complementary characteristics, and introduce carefully designed training samples and data augmentation strategies for improved segmentation accuracy and robustness. We adopt a state-of-the-art regression-based facial tracking framework with segmented face images as training, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views. Furthermore, the resulting segmentation can be directly used to composite partial 3D face models on the input images and enable seamless facial manipulation tasks, such as virtual make-up or face replacement. SEGMENTATION NETWORK pooling 5 output probability map convolution network deconvolution network DeconvNet VGG-16 FCN-8s fusion input frame + + pooling 3 pooling 4 + facial performance capture semantical segmentation RGB input [Cao et al. 2014] RGB-D input [Hsieh et al. 2015] FCN [Long et al. 2015] DeconvNet [Noh et al. 2015] RESULTS face data hand data cropping / occlusion negative samples input image labeled segmentation input image with occlusion augmentation",
"title": ""
},
{
"docid": "8dc3bcecacd940036090a08d942596ab",
"text": "Pregnancy-related pelvic girdle pain (PRPGP) has a prevalence of approximately 45% during pregnancy and 20-25% in the early postpartum period. Most women become pain free in the first 12 weeks after delivery, however, 5-7% do not. In a large postpartum study of prevalence for urinary incontinence (UI) [Wilson, P.D., Herbison, P., Glazener, C., McGee, M., MacArthur, C., 2002. Obstetric practice and urinary incontinence 5-7 years after delivery. ICS Proceedings of the Neurourology and Urodynamics, vol. 21(4), pp. 284-300] found that 45% of women experienced UI at 7 years postpartum and that 27% who were initially incontinent in the early postpartum period regained continence, while 31% who were continent became incontinent. It is apparent that for some women, something happens during pregnancy and delivery that impacts the function of the abdominal canister either immediately, or over time. Current evidence suggests that the muscles and fascia of the lumbopelvic region play a significant role in musculoskeletal function as well as continence and respiration. The combined prevalence of lumbopelvic pain, incontinence and breathing disorders is slowly being understood. It is also clear that synergistic function of all trunk muscles is required for loads to be transferred effectively through the lumbopelvic region during multiple tasks of varying load, predictability and perceived threat. Optimal strategies for transferring loads will balance control of movement while maintaining optimal joint axes, maintain sufficient intra-abdominal pressure without compromising the organs (preserve continence, prevent prolapse or herniation) and support efficient respiration. Non-optimal strategies for posture, movement and/or breathing create failed load transfer which can lead to pain, incontinence and/or breathing disorders. Individual or combined impairments in multiple systems including the articular, neural, myofascial and/or visceral can lead to non-optimal strategies during single or multiple tasks. Biomechanical aspects of the myofascial piece of the clinical puzzle as it pertains to the abdominal canister during pregnancy and delivery, in particular trauma to the linea alba and endopelvic fascia and/or the consequence of postpartum non-optimal strategies for load transfer, is the focus of the first two parts of this paper. A possible physiological explanation for fascial changes secondary to altered breathing behaviour during pregnancy is presented in the third part. A case study will be presented at the end of this paper to illustrate the clinical reasoning necessary to discern whether conservative treatment or surgery is necessary for restoration of function of the abdominal canister in a woman with postpartum diastasis rectus abdominis (DRA).",
"title": ""
},
{
"docid": "ff5700d97ad00fcfb908d90b56f6033f",
"text": "How to design a secure steganography method is the problem that researchers have always been concerned about. Traditionally, the steganography method is designed in a heuristic way which does not take into account the detection side (steganalysis) fully and automatically. In this paper, we propose a new strategy that generates more suitable and secure covers for steganography with adversarial learning scheme, named SSGAN. The proposed architecture has one generative network called G, and two discriminative networks called D and S, among which the former evaluates the visual quality of the generated images for steganography and the latter assesses their suitableness for information hiding. Different from the existing work, we use WGAN instead of GAN for the sake of faster convergence speed, more stable training, and higher quality images, and also re-design the S net with more sophisticated steganalysis network. The experimental results prove the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "86d8b5fd2998557858205a6e6e1ed046",
"text": "Advances in the information and communication technologies have led to the emergence of Internet of Thing (IoT). IoT allows many physical devices to capture transmit data, through the internet, providing more data interoperability methods. Nowadays IoT plays an important role not only in communication, but also in monitoring, recording, storage and display. Hence the latest trend in Healthcare communication method using IoT is adapted. Monitored on a continual basis, aggregated and effectively analyzed-such information can bring about a massive positive transformation in the field of healthcare. Our matter of concern in this project is to focus on the development and implementation of an effective healthcare monitoring system based on IoT. The proposed system monitors the vital health parameters and transmits the data through a wireless communication, which is further transferred to a network via a Wi-Fi module. The data can be accessed anytime promoting the reception of the current status of the patient. In case any abnormal behavior or any vital signs are recognized, the caretaker, as well as the doctors are notified immediately through a message service or an audio signaling device (buzzer). In order to design an efficient remote monitoring system, security plays an important part. Cloud computing and password protected Wi-Fi module handles authentication, privacy and security of patient details by allowing restricted access to the database. Hence the system provides quality healthcare to all. This paper is a review of Healthcare Monitoring system using IoT.",
"title": ""
},
{
"docid": "9188a5da5d00592299b5a5268ed579ac",
"text": "We introduce word vectors for the construction domain. Our vectors were obtained by running word2vec on an 11M-word corpus that we created from scratch by leveraging freely-accessible online sources of construction-related text. We first explore the embedding space and show that our vectors capture meaningful constructionspecific concepts. We then evaluate the performance of our vectors against that of ones trained on a 100B-word corpus (Google News) within the framework of an injury report classification task. Without any parameter tuning, our embeddings give competitive results, and outperform the Google News vectors in many cases. Using a keyword-based compression of the reports also leads to a significant speed-up with only a limited loss in performance. We release our corpus and the data set we created for the classification task as publicly available, in the hope that they will be used by future studies for benchmarking and building on our work.",
"title": ""
},
{
"docid": "e041d7f54e1298d4aa55edbfcbda71ad",
"text": "Charts are common graphic representation for scientific data in technical and business papers. We present a robust system for detecting and recognizing bar charts. The system includes three stages, preprocessing, detection and recognition. The kernel algorithm in detection is newly developed Modified Probabilistic Hough Transform algorithm for parallel lines clusters detection. The main algorithms in recognition are bar pattern reconstruction and text primitives grouping in the Hough space which are also original. The Experiments show the system can also recognize slant bar charts, or even hand-drawn charts.",
"title": ""
},
{
"docid": "1127b964ad114909a2aa8d78eb134a78",
"text": "RFID technology is gaining adoption on an increasin g scale for tracking and monitoring purposes. Wide deployments of RFID devices will soon generate an unprecedented volume of data. Emerging applications require the RFID data to be f ilt red and correlated for complex pattern detection and transf ormed to events that provide meaningful, actionable informat ion to end applications. In this work, we design and develop S ASE, a complex event processing system that performs such dat ainformation transformation over real-time streams. We design a complex event language for specifying application l gic for such transformation, devise new query processing techniq ues to efficiently implement the language, and develop a comp rehensive system that collects, cleans, and processes RFID da ta for delivery of relevant, timely information as well as stor ing necessary data for future querying. We demonstrate an initial prototype of SASE through a real-world retail management scenari o.",
"title": ""
},
{
"docid": "f65c3e60dbf409fa2c6e58046aad1e1c",
"text": "The gut microbiota is essential for the development and regulation of the immune system and the metabolism of the host. Germ-free animals have altered immunity with increased susceptibility to immunologic diseases and show metabolic alterations. Here, we focus on two of the major immune-mediated microbiota-influenced components that signal far beyond their local environment. First, the activation or suppression of the toll-like receptors (TLRs) by microbial signals can dictate the tone of the immune response, and they are implicated in regulation of the energy homeostasis. Second, we discuss the intestinal mucosal surface is an immunologic component that protects the host from pathogenic invasion, is tightly regulated with regard to its permeability and can influence the systemic energy balance. The short chain fatty acids are a group of molecules that can both modulate the intestinal barrier and escape the gut to influence systemic health. As modulators of the immune response, the microbiota-derived signals influence functions of distant organs and can change susceptibility to metabolic diseases.",
"title": ""
},
{
"docid": "051188b0b4a6bdc31a0130a16527ce86",
"text": "Considerations of microalgae as a source offood and biochemicals began in the early 1940's, and in 1952 the first Algae Mass-Culture Symposium was held (Burlew, 1953). Since then, a number of microalgae have been suggested and evaluated for their suitability for commercial exploitation. These include Chlorella, Scenedesmus and Spirulina (e.g., Soeder, 1976; Kawaguchi, 1980; Becker & Venkataraman, 1980) and small commercial operations culturing some of these algae for food are underway in various parts of the world. The extremely halophilic unicellular green alga Dunaliella salina (Chlorophyta, Volvocales) has been proposed as a source of its osmoregulatory solute, glycerol and the pigment f3-carotene (Masyuk, 1968; Aasen, et a11969; Ben-Amotz & A vron, 1980). Much research on the commercial potential of this algae and its products has been undertaken (e.g., Williams, et al. 1978; Chen & Chi, 1981) and trial operations have been established in the USSR (Masyuk, 1968) and in Israel (Ben-Amotz & A vron, 1980). Since 1978, we in Australia have been working also, to examine the feasibility of using large-scale culture of Dunaliella salina as a commercial source",
"title": ""
},
{
"docid": "77c2843058856b8d7a582d3b0349b856",
"text": "In this paper, an S-band dual circular polarized (CP) spherical conformal phased array antenna (SPAA) is designed. It has the ability to scan a beam within the hemisphere coverage. There are 23 elements uniformly arranged on the hemispherical dome. The design process of the SPAA is presented in detail. Three different kinds of antenna elements are compared. The gain of the SPAA is more than 13 dBi and the gain flatness is less than 1 dB within the scanning range. The measured result is consistent well with the simulated one.",
"title": ""
},
{
"docid": "2fc0779078bc5be4ed21f87ead97458c",
"text": "This paper presents for the first time an X-band antenna array with integrated silicon germanium low noise amplifiers (LNA) and 3-bit phase shifters (PS). LNAs and PSs were successfully integrated onto an 8 × 2 lightweight antenna utilizing a multilayer liquid crystal polymer (LCP) feed substrate laminated with a duroid antenna layer. A baseline passive 8×2 antenna is measured along with a SiGe integrated 8×2 receive antenna for comparison of results. The active antenna array weighs only 3.5 ounces and consumes 53 mW of dc power. Successful comparisons of the measured and simulated results verify a working phased array with a return loss better than 10 dB across the frequency band of 9.25 GHz-9.75 GHz. A comparison of radiation patterns for the 8×2 baseline antenna and the 8×2 SiGe integrated antenna show a 25 dB increase in gain (ΔG). The SiGe integrated antenna demonstrated a predictable beam steering capability of ±41°. Combined antenna and receiver performance yielded a merit G/T of -9.1 dB/K and noise figure of 5.6 dB.",
"title": ""
},
{
"docid": "3e94030eb03806d79c5e66aa90408fbb",
"text": "The sampling rate of the sensors in wireless sensor networks (WSNs) determines the rate of its energy consumption since most of the energy is used in sampling and transmission. To save the energy in WSNs and thus prolong the network lifetime, we present a novel approach based on the compressive sensing (CS) framework to monitor 1-D environmental information in WSNs. The proposed technique is based on CS theory to minimize the number of samples taken by sensor nodes. An innovative feature of our approach is a new random sampling scheme that considers the causality of sampling, hardware limitations and the trade-off between the randomization scheme and computational complexity. In addition, a sampling rate indicator (SRI) feedback scheme is proposed to enable the sensor to adjust its sampling rate to maintain an acceptable reconstruction performance while minimizing the number of samples. A significant reduction in the number of samples required to achieve acceptable reconstruction error is demonstrated using real data gathered by a WSN located in the Hessle Anchorage of the Humber Bridge.",
"title": ""
},
{
"docid": "32b4d99238f6777399909e35f501a5d3",
"text": "BACKGROUND\nRecent technical developments have focused on the full automation of urinalyses, however the manual microscopic analysis of urine sediment is considered the reference method. The aim of this study was to compare the performances of the LabUMat-UriSed and the H800-FUS100 with manual microscopy, and with each other.\n\n\nMETHODS\nThe urine sediments of 332 urine samples were examined by these two devices (LabUMat-UriSed, H800-FUS100) and manual microscopy.\n\n\nRESULTS\nThe reproducibility of the analyzers, UriSed and Fus100 (4.1-28.5% and 4.7-21.2%, respectively), was better than that with manual microscopy (8.5-33.3%). The UriSed was more sensitive for leukocytes (82%), while the Fus-100 was more sensitive for erythrocyte cell counting (73%). There were moderate correlations between manual microscopy and the two devices, UriSed and Fus100, for erythrocyte (r = 0.496 and 0.498, respectively) and leukocyte (r = 0.597 and 0.599, respectively) cell counting however the correlation between the two devices was much better for erythrocyte (r = 0.643) and for leukocyte (r = 0.767) cell counting.\n\n\nCONCLUSION\nIt can be concluded that these two devices showed similar performances. They were time-saving and standardized techniques, especially for reducing preanalytical errors such as the study time, centrifugation, and specimen volume for sedimentary analysis; however, the automated systems are still inadequate for classifying the cells that are present in pathological urine specimens.",
"title": ""
},
{
"docid": "414bb4a869a900066806fa75edc38bd6",
"text": "For nearly a century, scholars have sought to understand, measure, and explain giftedness. Succeeding theories and empirical investigations have often built on earlier work, complementing or sometimes clashing over conceptions of talent or contesting the mechanisms of talent development. Some have even suggested that giftedness itself is a misnomer, mistaken for the results of endless practice or social advantage. In surveying the landscape of current knowledge about giftedness and gifted education, this monograph will advance a set of interrelated arguments: The abilities of individuals do matter, particularly their abilities in specific talent domains; different talent domains have different developmental trajectories that vary as to when they start, peak, and end; and opportunities provided by society are crucial at every point in the talent-development process. We argue that society must strive to promote these opportunities but that individuals with talent also have some responsibility for their own growth and development. Furthermore, the research knowledge base indicates that psychosocial variables are determining influences in the successful development of talent. Finally, outstanding achievement or eminence ought to be the chief goal of gifted education. We assert that aspiring to fulfill one's talents and abilities in the form of transcendent creative contributions will lead to high levels of personal satisfaction and self-actualization as well as produce yet unimaginable scientific, aesthetic, and practical benefits to society. To frame our discussion, we propose a definition of giftedness that we intend to be comprehensive. Giftedness is the manifestation of performance that is clearly at the upper end of the distribution in a talent domain even relative to other high-functioning individuals in that domain. Further, giftedness can be viewed as developmental in that in the beginning stages, potential is the key variable; in later stages, achievement is the measure of giftedness; and in fully developed talents, eminence is the basis on which this label is granted. Psychosocial variables play an essential role in the manifestation of giftedness at every developmental stage. Both cognitive and psychosocial variables are malleable and need to be deliberately cultivated. Our goal here is to provide a definition that is useful across all domains of endeavor and acknowledges several perspectives about giftedness on which there is a fairly broad scientific consensus. Giftedness (a) reflects the values of society; (b) is typically manifested in actual outcomes, especially in adulthood; (c) is specific to domains of endeavor; (d) is the result of the coalescing of biological, pedagogical, psychological, and psychosocial factors; and (e) is relative not just to the ordinary (e.g., a child with exceptional art ability compared to peers) but to the extraordinary (e.g., an artist who revolutionizes a field of art). In this monograph, our goal is to review and summarize what we have learned about giftedness from the literature in psychological science and suggest some directions for the field of gifted education. We begin with a discussion of how giftedness is defined (see above). In the second section, we review the reasons why giftedness is often excluded from major conversations on educational policy, and then offer rebuttals to these arguments. 
In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance is derived from the assumption that academically gifted children will be successful no matter what educational environment they are placed in, and because their families are believed to be more highly educated and hold above-average access to human capital wealth. These arguments run counter to psychological science indicating the need for all students to be challenged in their schoolwork and that effort and appropriate educational programing, training and support are required to develop a student's talents and abilities. In fact, high-ability students in the United States are not faring well on international comparisons. The scores of advanced students in the United States with at least one college-educated parent were lower than the scores of students in 16 other developed countries regardless of parental education level. In the third section, we summarize areas of consensus and controversy in gifted education, using the extant psychological literature to evaluate these positions. Psychological science points to several variables associated with outstanding achievement. The most important of these include general and domain-specific ability, creativity, motivation and mindset, task commitment, passion, interest, opportunity, and chance. Consensus has not been achieved in the field however in four main areas: What are the most important factors that contribute to the acuities or propensities that can serve as signs of potential talent? What are potential barriers to acquiring the \"gifted\" label? What are the expected outcomes of gifted education? And how should gifted students be educated? In the fourth section, we provide an overview of the major models of giftedness from the giftedness literature. Four models have served as the foundation for programs used in schools in the United States and in other countries. Most of the research associated with these models focuses on the precollegiate and early university years. Other talent-development models described are designed to explain the evolution of talent over time, going beyond the school years into adult eminence (but these have been applied only by out-of-school programs as the basis for educating gifted students). In the fifth section we present methodological challenges to conducting research on gifted populations, including definitions of giftedness and talent that are not standardized, test ceilings that are too low to measure progress or growth, comparison groups that are hard to find for extraordinary individuals, and insufficient training in the use of statistical methods that can address some of these challenges. In the sixth section, we propose a comprehensive model of trajectories of gifted performance from novice to eminence using examples from several domains. This model takes into account when a domain can first be expressed meaningfully-whether in childhood, adolescence, or adulthood. It also takes into account what we currently know about the acuities or propensities that can serve as signs of potential talent. Budding talents are usually recognized, developed, and supported by parents, teachers, and mentors. Those individuals may or may not offer guidance for the talented individual in the psychological strengths and social skills needed to move from one stage of development to the next. 
We developed the model with the following principles in mind: Abilities matter, domains of talent have varying developmental trajectories, opportunities need to be provided to young people and taken by them as well, psychosocial variables are determining factors in the successful development of talent, and eminence is the aspired outcome of gifted education. In the seventh section, we outline a research agenda for the field. This agenda, presented in the form of research questions, focuses on two central variables associated with the development of talent-opportunity and motivation-and is organized according to the degree to which access to talent development is high or low and whether an individual is highly motivated or not. Finally, in the eighth section, we summarize implications for the field in undertaking our proposed perspectives. These include a shift toward identification of talent within domains, the creation of identification processes based on the developmental trajectories of talent domains, the provision of opportunities along with monitoring for response and commitment on the part of participants, provision of coaching in psychosocial skills, and organization of programs around the tools needed to reach the highest possible levels of creative performance or productivity.",
"title": ""
}
] |
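As an editorial illustration of the word2vec-then-classify pipeline described in the construction word-vectors passage above (this is not code released with that study), the sketch below trains domain embeddings on a toy tokenized corpus and feeds averaged report vectors to a linear classifier. It assumes gensim 4.x and scikit-learn; the `reports` and `labels` variables are hypothetical placeholders, not the released corpus or injury categories.

```python
# Editor's sketch: train word2vec on an in-domain corpus, then classify reports
# from averaged word vectors.  Corpus, labels and hyperparameters are toy values.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

reports = [["worker", "fell", "from", "scaffold"],        # toy tokenised injury reports
           ["hand", "caught", "in", "rebar", "cutter"]]
labels = [0, 1]                                           # toy injury categories

# 1. Train domain-specific embeddings on the (small) in-domain corpus.
w2v = Word2Vec(sentences=reports, vector_size=100, window=5,
               min_count=1, sg=1, epochs=50, seed=7)

def report_vector(tokens):
    """Average the vectors of the tokens that are in the embedding vocabulary."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

# 2. Turn each report into a fixed-length feature vector and fit a classifier.
X = np.vstack([report_vector(r) for r in reports])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```

Averaging all word vectors is only the simplest way to turn a report into a feature vector; a keyword-based compression such as the one the passage mentions would correspond to averaging over a selected subset of tokens instead.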
scidocsrr
|
99688d670ff4c80887083deed8bbe3c7
|
Cuckoo: A Computation Offloading Framework for Smartphones
|
[
{
"docid": "8836fddeb496972fa38005fd2f8a4ed4",
"text": "Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.",
"title": ""
}
] |
[
{
"docid": "e7475c3fd58141c496e8b430a2db24d3",
"text": "This study concerns the quality of life of patients after stroke and how this is influenced by disablement and emotional factors. Ninety-six consecutive patients of mean age 71 years were followed for two years. At the end of that time 23% had experienced a recurrence of stroke and 27% were deceased. Of the survivors 76% were independent as regards activities of daily life (ADL) and lived in their own homes. Age as well as initial function were prognostically important factors. Patients who could participate in interviews marked on a visual analogue scale their evaluation of quality of life before and after stroke. Most of them had experienced a decrease and no improvement was observed during the two years. The deterioration was more pronounced in ADL dependent patients than among the independent. However, depression and anxiety were found to be of similar importance for quality of life as was physical disablement. These findings call for a greater emphasis on psychological support in the care of post stroke patients. The visual analogue scale can be a useful tool for detecting special needs.",
"title": ""
},
{
"docid": "73160df16943b2f788750b8f7141d290",
"text": "This letter proposes a double-sided printed bow-tie antenna for ultra wide band (UWB) applications. The frequency band considered is 3.1-10.6 GHz, which has been approved by the Federal Communications Commission as a commercial UWB band. The proposed antenna has a return loss less than 10 dB, phase linearity, and gain flatness over the above frequency band.",
"title": ""
},
{
"docid": "7bbb9fed03444841fb66ec7f3820b9cb",
"text": "In this paper, novel n- and p-type tunnel field-effect transistors (T-FETs) based on heterostructure Si/intrinsic-SiGe channel layer are proposed, which exhibit very small subthreshold swings, as well as low threshold voltages. The design parameters for improvement of the characteristics of the devices are studied and optimized based on the theoretical principles and simulation results. The proposed devices are designed to have extremely low off currents on the order of 1 fA/mum and engineered to exhibit substantially higher on currents compared with previously reported T-FET devices. Subthreshold swings as low as 15 mV/dec and threshold voltages as low as 0.13 V are achieved in these devices. Moreover, the T-FETs are designed to exhibit input and output characteristics compatible with CMOS-type digital-circuit applications. Using the proposed n- and p-type devices, the implementation of an inverter circuit based on T-FETs is reported. The performance of the T-FET-based inverter is compared with the 65-nm low-power CMOS-based inverter, and a gain of ~104 is achieved in static power consumption for the T-FET-based inverter with smaller gate delay.",
"title": ""
},
{
"docid": "fa8e732d89f22704167be5f51f75ecb6",
"text": "By studying trouble tickets from small enterprise networks, we conclude that their operators need detailed fault diagnosis. That is, the diagnostic system should be able to diagnose not only generic faults (e.g., performance-related) but also application specific faults (e.g., error codes). It should also identify culprits at a fine granularity such as a process or firewall configuration. We build a system, called NetMedic, that enables detailed diagnosis by harnessing the rich information exposed by modern operating systems and applications. It formulates detailed diagnosis as an inference problem that more faithfully captures the behaviors and interactions of fine-grained network components such as processes. The primary challenge in solving this problem is inferring when a component might be impacting another. Our solution is based on an intuitive technique that uses the joint behavior of two components in the past to estimate the likelihood of them impacting one another in the present. We find that our deployed prototype is effective at diagnosing faults that we inject in a live environment. The faulty component is correctly identified as the most likely culprit in 80% of the cases and is almost always in the list of top five culprits.",
"title": ""
},
{
"docid": "2efe399d3896f78c6f152d98aa6d33a0",
"text": "We consider the problem of verifying the identity of a distribution: Given the description of a distribution over a discrete support p = (p<sub>1</sub>, p<sub>2</sub>, ... , p<sub>n</sub>), how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p = q from the case that the total variation distance (L<sub>1</sub> distance) ||p - q||1≥ ϵ? We resolve this question, up to constant factors, on an instance by instance basis: there exist universal constants c, c' and a function f(p, ϵ) on distributions and error parameters, such that our tester distinguishes p = q from ||p-q||1≥ ϵ using f(p, ϵ) samples with success probability > 2/3, but no tester can distinguish p = q from ||p - q||1≥ c · ϵ when given c' · f(p, ϵ) samples. The function f(p, ϵ) is upperbounded by a multiple of ||p||2/3/ϵ<sup>2</sup>, but is more complicated, and is significantly smaller in some cases when p has many small domain elements, or a single large one. This result significantly generalizes and tightens previous results: since distributions of support at most n have L<sub>2/3</sub> norm bounded by √n, this result immediately shows that for such distributions, O(√n/ϵ<sup>2</sup>) samples suffice, tightening the previous bound of O(√npolylog/n<sup>4</sup>) for this class of distributions, and matching the (tight) known results for the case that p is the uniform distribution over support n. The analysis of our very simple testing algorithm involves several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities- generalizing Cauchy-Schwarz, Holder's inequality, and the monotonicity of L<sub>p</sub> norms. Specifically, we characterize the set of sequences (a)<sub>i</sub> = a<sub>1</sub>, . . . , ar, (b)i = b<sub>1</sub>, . . . , br, (c)i = c<sub>1</sub>, ... , cr, for which it holds that for all finite sequences of positive numbers (x)<sub>j</sub> = x<sub>1</sub>,... and (y)<sub>j</sub> = y<sub>1</sub>,...,Π<sub>i=1</sub><sup>r</sup> (Σ<sub>j</sub>x<sup>a</sup><sub>j</sub><sup>i</sup><sub>y</sub><sub>i</sub><sup>b</sup><sup>i</sup>)<sup>ci</sup>≥1. For example, the standard Cauchy-Schwarz inequality corresponds to the sequences a = (1, 0, 1/2), b = (0,1, 1/2), c = (1/2 , 1/2 , -1). Our characterization is of a non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought throu.gh trial and error, by hand. We do not believe such a characterization has appeared in the literature, and hope its computational nature will be useful to others, and facilitate analyses like the one here.",
"title": ""
},
{
"docid": "42e198a383c240beb0aea6116bfedeaa",
"text": "Cognitive radio (CR) is considered as a key enabling technology for dynamic spectrum access to improve spectrum efficiency. Although the CR concept was invented with the core idea of realizing \"cognition\", the research on measuring CR cognition capabilities and intelligence is largely open. Deriving the intelligence capabilities of CR not only can lead to the development of new CR technologies, but also makes it possible to better configure the networks by integrating CRs with different intelligence capabilities in a more cost- efficient way. In this paper, for the first time, we propose a data-driven methodology to quantitatively analyze the intelligence factors of the CR with learning capabilities. The basic idea of our methodology is to run various tests on the CR in different spectrum environments under different settings and obtain various performance results on different metrics. Then we apply factor analysis on the performance results to identify and quantize the intelligence capabilities of the CR. More specifically, we present a case study consisting of sixty three different types of CRs. CRs are different in terms of learning-based dynamic spectrum access strategies, number of sensors, sensing accuracy, and processing speed. Based on our methodology, we analyze the intelligence capabilities of the CRs through extensive simulations. Four intelligence capabilities are identified for the CRs through our analysis, which comply with the nature of the tested algorithms.",
"title": ""
},
{
"docid": "c2f46b2ed4e4306c26585f0aab275c66",
"text": "We developed a crawler that can crawl YouTube and filter videos with only one person in front of the camera. This filter is implemented by extracting a number of frames from each video, and then using OpenCV’s (Itseez, 2015) Haar cascades to estimate how many faces are in each video. The crawler is supplied a search term which it then forwards to the YouTube Data API. The search terms provide a rough estimate of topics in the datasets, since they are directly connected to meta-data provided by the uploader. Figure 1 shows the distribution of the video topics used in CMU-MOSEI. The diversity of the video topics brings the following generalizability advantages: 1) the models trained on CMU-MOSEI will be generalizable across different topics and the notion of dataset domain is marginalized, 2) the diversity of topics bring variety of speakers, which allows the trained models to be generalizable across different speakers, and 3) the diversity in topics furthermore brings diversity in recording setups which allows the trained models to be generalizable across microphones and cameras with different intrinsic parameters. This diversity makes CMU-MOSEI a one-of-a-kind dataset for sentiment analysis and emotion recognition. Figure 1: The topics of videos in CMU-MOSEI, displayed as a Venn-style word cloud (Coppersmith and Kelly, 2014). Larger words indicate more videos from that topic.",
"title": ""
},
{
"docid": "0ab220829ea6667549ca274eaedb2a9e",
"text": "In a culture where collectivism is pervasive such as China, social norms can be one of the most powerful tools to influence consumers’ behavior. Individuals are driven to meet social expectations and fulfill social roles in collectivist cultures. Therefore, this study was designed to investigate how Chinese consumers’ concern with saving face affects sustainable fashion product purchase intention and how it also moderates consumers’ commitment to sustainable fashion. An empirical data set of 469 undergraduate students in Beijing and Shanghai was used to test our hypotheses. Results confirmed that face-saving is an important motivation for Chinese consumers’ purchase of sustainable fashion items, and it also attenuated the effect of general product value while enhancing the effect of products’ green value in predicting purchasing trends. The findings contribute to the knowledge of sustainable consumption in Confucian culture, and thus their managerial implications were also discussed.",
"title": ""
},
{
"docid": "229605eada4ca390d17c5ff168c6199a",
"text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.",
"title": ""
},
{
"docid": "165fcc5242321f6fed9c353cc12216ff",
"text": "Fingerprint alteration represents one of the newest challenges in biometric identification. The aim of fingerprint mutilation is to destroy the structure of the papillary ridges so that the identity of the offender cannot be recognized by the biometric system. The problem has received little attention and there is a lack of a real world altered fingerprints database that would allow researchers to develop new algorithms and techniques for altered fingerprints detection. The major contribution of this paper is that it provides a new public database of synthetically altered fingerprints. Starting from the cases described in the literature, three methods for generating simulated altered fingerprints are proposed.",
"title": ""
},
{
"docid": "eb3d82a85c8a9c3f815f0f62b6ae55cd",
"text": "In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentations strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.",
"title": ""
},
{
"docid": "42ecca95c15cd1f92d6e5795f99b414a",
"text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.",
"title": ""
},
{
"docid": "c539b8957e4c131318ef0a807326b353",
"text": "A large body of research has shown spatial distortions in the perception of tactile distances on the skin. For example, perceived tactile distance is increased on sensitive compared to less sensitive skin regions, and larger for stimuli oriented along the medio-lateral axis than the proximo-distal axis of the limbs. In this study we aimed to investigate the spatial coherence of these distortions by reconstructing the internal geometry of tactile space using multidimensional scaling (MDS). Participants made verbal estimates of the perceived distance between 2 touches applied sequentially to locations on their left hand. In Experiment 1 we constructed perceptual maps of the dorsum of the left hand, which showed a good fit to the actual configuration of stimulus locations. Critically, these maps also showed clear evidence of spatial distortion, being stretched along the medio-lateral hand axis. Experiment 2 replicated this result and showed that no such distortion is apparent on the palmar surface of the hand. These results show that distortions in perceived tactile distance can be characterized by geometrically simple and coherent deformations of tactile space. We suggest that the internal geometry of tactile space is shaped by the geometry of receptive fields in somatosensory cortex. (PsycINFO Database Record",
"title": ""
},
{
"docid": "ef64da59880750872e056822c17ab00e",
"text": "The efficient cooling is very important for a light emitting diode (LED) module because both the energy efficiency and lifespan decrease significantly as the junction temperature increases. The fin heat sink is commonly used for cooling LED modules with natural convection conditions. This work proposed a new design method for high-power LED lamp cooling by combining plate fins with pin fins and oblique fins. Two new types of fin heat sinks called the pin-plate fin heat sink (PPF) and the oblique-plate fin heat sink (OPF) were designed and their heat dissipation performances were compared with three conventional fin heat sinks, the plate fin heat sink, the pin fin heat sink and the oblique fin heat sink. The LED module was assumed to be operated under 1 atmospheric pressure and its heat input is set to 4 watts. The PPF and OPF models show lower junction temperatures by about 6°C ~ 12°C than those of three conventional models. The PPF with 8 plate fins inside (PPF-8) and the OPF with 7 plate fins inside (OPF-7) showed the best thermal performance among all the PPF and OPF designs, respectively. The total thermal resistances of the PPF-8 and OPF-7 models decreased by 9.0% ~ 15.6% compared to those of three conventional models.",
"title": ""
},
{
"docid": "3ef7fab93c345317209e3a6466fc8cce",
"text": "Many commercial video players rely on bitrate adaptation algorithm to adapt video bitrate to dynamic network condition. To achieve a high quality of experience, bitrate adaptation algorithm is required to strike a balance between response agility and video quality stability. Existing online algorithms select bitrates according to instantaneous throughput and buffer occupancy, achieving an agile reaction to changes but inducing video quality fluctuations due to the high dynamic of reference signals. In this paper, the idea of multi-step prediction is proposed to guide a better tradeoff, and the bitrate selection is formulated as a predictive control problem. With it, a generalized predictive control based approach is developed to calculate the optimal bitrate by minimizing the cost function over a moving look-ahead horizon. Finally, the proposed algorithm is implemented on a reference video player with performance evaluations conducted using realistic bandwidth traces. Experimental results show that the multi-step predictive control adaptation algorithm can achieve zero rebuffer event and 63.3% of reduction in bitrate switch.",
"title": ""
},
{
"docid": "d70946cd43b73be4c68d1858bebc91fe",
"text": "A truly autonomous mobile robot have to solve the SLAM problem (i.e. simultaneous map building and pose estimation) in order to navigate in an unknown environment. Unfortunately, a universal solution for the problem hasn't been proposed yet. The tinySLAM algorithm that has a compact and clear code was designed to solve SLAM in an indoor environment using a noisy laser scanner. This paper introduces the vinySLAM method that enhances tinySLAM with the Transferable Belief Model to improve its robustness and accuracy. Proposed enhancements affect scan matching and occupancy tracking keeping simplicity and clearness of the original code. The evaluation on publicly available datasets shows significant robustness and accuracy improvements.",
"title": ""
},
{
"docid": "b0bb9c4bcf666dca927d4f747bfb1ca1",
"text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.",
"title": ""
},
{
"docid": "36684d4ea27b940036e179fe967e949c",
"text": "In this letter, we propose a miniaturized and wideband electromagnetic bandgap (EBG) structure with a meander-perforated plane (MPP) for power/ground noise suppression in multilayer printed circuit boards. The proposed MPP enhances the characteristic impedance of the EBG unit cell and improves the slow-wave effect, thus achieving the significant size reduction and the stopband enhancement. To explain the prominent results, a dispersion analysis for the proposed MPP-EBG structure is developed. Compared to a mushroom-type EBG structure, it is experimentally demonstrated that the MPP-EBG structure presents a 57% reduction in the start frequency of the bandgap, which leads to a 74% reduction in a unit cell size. In addition, the MPP-EBG structure considerably improves the noise suppression bandwidth (-40 dB) from 0.8 to 4.9 GHz compared to the mushroom-type EBG structure.",
"title": ""
},
{
"docid": "8705415b41d8b3c2e7cb4f7523e0f958",
"text": "Research in the field of Computer Supported Collaborative Learning (CSCL) is based on a wide variety of methodologies. In this paper, we focus upon content analysis, which is a technique often used to analyze transcripts of asynchronous, computer mediated discussion groups in formal educational settings. Although this research technique is often used, standards are not yet established. The applied instruments reflect a wide variety of approaches and differ in their level of detail and the type of analysis categories used. Further differences are related to a diversity in their theoretical base, the amount of information about validity and reliability, and the choice for the unit of analysis. This article presents an overview of different content analysis instruments, building on a sample of models commonly used in the CSCL-literature. The discussion of 15 instruments results in a number of critical conclusions. There are questions about the coherence between the theoretical base and the operational translation of the theory in the instruments. Instruments are hardly compared or contrasted with one another. As a consequence the empirical base of the validity of the instruments is limited. The analysis is rather critical when it comes to the issue of reliability. The authors put forward the need to improve the theoretical and empirical base of the existing instruments in order to promote the overall quality of CSCL-research. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f5168565306f6e7f2b36ef797a6c9de8",
"text": "We study the problem of clustering data objects whose locations are uncertain. A data object is represented by an uncertainty region over which a probability density function (pdf) is defined. One method to cluster uncertain objects of this sort is to apply the UK-means algorithm, which is based on the traditional K-means algorithm. In UK-means, an object is assigned to the cluster whose representative has the smallest expected distance to the object. For arbitrary pdf, calculating the expected distance between an object and a cluster representative requires expensive integration computation. We study various pruning methods to avoid such expensive expected distance calculation.",
"title": ""
}
] |
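The uncertain-data clustering passage above (UK-means) defines its assignment rule as "smallest expected distance to the cluster representative". The sketch below is an editor's minimal Monte Carlo version of that loop, with every uncertain object represented by samples from its pdf; the pruning methods that the cited paper actually contributes, which avoid most of these expectation computations, are deliberately not shown, and all names and toy data are hypothetical.

```python
# Editor's sketch of a basic UK-means assignment/update loop for uncertain objects.
# Each object is given as Monte Carlo samples from its pdf; no pruning is applied.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 uncertain 2-D objects, each represented by 200 samples from its pdf.
centres = np.array([[0, 0], [0, 0], [0, 0], [5, 5], [5, 5], [5, 5]], float)
objects = [c + rng.normal(scale=0.7, size=(200, 2)) for c in centres]

def uk_means(objects, k=2, iters=20):
    # Toy initialisation: spread the k representatives over the object list.
    reps = np.stack([objects[i].mean(axis=0)
                     for i in np.linspace(0, len(objects) - 1, k, dtype=int)])
    for _ in range(iters):
        # Expected squared distance E||X - rep||^2, estimated from the samples.
        exp_d = np.array([[np.mean(np.sum((obj - r) ** 2, axis=1)) for r in reps]
                          for obj in objects])
        assign = exp_d.argmin(axis=1)           # smallest expected distance wins
        for j in range(k):                      # recompute cluster representatives
            members = [objects[i].mean(axis=0) for i in range(len(objects))
                       if assign[i] == j]
            if members:
                reps[j] = np.mean(members, axis=0)
    return assign, reps

print(uk_means(objects))
```

Since E||X - c||² = ||E[X] - c||² + trace(Cov(X)), the expectation also has a closed form for simple pdfs; the Monte Carlo estimate is used here only to keep the sketch generic for arbitrary pdfs, which is the setting the passage describes.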
scidocsrr
|
73ecd876e133b841d730791874b3f323
|
A Wearable Reflectance Pulse Oximeter for Remote Physiological Monitoring
|
[
{
"docid": "3a3a2261e1063770a9ccbd0d594aa561",
"text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.",
"title": ""
}
] |
[
{
"docid": "e63e272f3ca07e1e7e90e53f6008e675",
"text": "Energy management in microgrids is typically formulated as an offline optimization problem for day-ahead scheduling by previous studies. Most of these offline approaches assume perfect forecasting of the renewables, the demands, and the market, which is difficult to achieve in practice. Existing online algorithms, on the other hand, oversimplify the microgrid model by only considering the aggregate supply-demand balance while omitting the underlying power distribution network and the associated power flow and system operational constraints. Consequently, such approaches may result in control decisions that violate the real-world constraints. This paper focuses on developing an online energy management strategy (EMS) for real-time operation of microgrids that takes into account the power flow and system operational constraints on a distribution network. We model the online energy management as a stochastic optimal power flow problem and propose an online EMS based on Lyapunov optimization. The proposed online EMS is subsequently applied to a real-microgrid system. The simulation results demonstrate that the performance of the proposed EMS exceeds a greedy algorithm and is close to an optimal offline algorithm. Lastly, the effect of the underlying network structure on energy management is observed and analyzed.",
"title": ""
},
{
"docid": "03abab0bc882ada2c7ba4d512ac98d0e",
"text": "The main goal of this project is to use the solar or AC power to charge all kind of regulated and unregulated battery like electric vehicle’s battery. Besides that, it will charge Lithium-ion (Li-ion) batteries of different voltage level. A standard pulse width modulation (PWM) which is controlled by duty cycle is used to build the solar or AC fed battery charger. A microcontroller unit and Buck/Boost converters are also used to build the charger. This charger changes the output voltages from variable input voltages with fixed amplitude in PWM. It gives regulated voltages for charging sensitive batteries. An unregulated output voltage can be obtained for electric vehicle’s battery. The battery charger is tested and the obtained result allowed to conclude the conditions of permanent control on the battery charger.",
"title": ""
},
{
"docid": "d175a51376883c1b563633d67dde6b8c",
"text": "Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes (e.g., AMR, UCCA, GMB, UDS) have been put forth. Yet, little has been done to assess the achievements and the shortcomings of these new contenders, compare them with syntactic schemes, and clarify the general goals of research on semantic representation. We address these gaps by critically surveying the state of the art in the field.1",
"title": ""
},
{
"docid": "8e180c13b925188f1925fee03c641669",
"text": "“Web applications have become increasingly complex and highly vulnerable,” says Peter Wood, member of the ISACA Security Advisory Group and CEO of First Base Technologies. “Social networking sites, consumer technologies – smartphones, tablets etc – and cloud services are all game changers this year. More enterprises are now requesting social engineering tests, which shows an increased awareness of threats beyond website attacks.”",
"title": ""
},
{
"docid": "1004cd19681bbebfabf51396c6b78e34",
"text": "OBJECTIVE\nThe objectives of this study were to develop a coronary heart disease (CHD) risk model among the Korean Heart Study (KHS) population and compare it with the Framingham CHD risk score.\n\n\nDESIGN\nA prospective cohort study within a national insurance system.\n\n\nSETTING\n18 health promotion centres nationwide between 1996 and 2001 in Korea.\n\n\nPARTICIPANTS\n268 315 Koreans between the ages of 30 and 74 years without CHD at baseline.\n\n\nOUTCOME MEASURE\nNon-fatal or fatal CHD events between 1997 and 2011. During an 11.6-year median follow-up, 2596 CHD events (1903 non-fatal and 693 fatal) occurred in the cohort. The optimal CHD model was created by adding high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol and triglycerides to the basic CHD model, evaluating using the area under the receiver operating characteristic curve (ROC) and continuous net reclassification index (NRI).\n\n\nRESULTS\nThe optimal CHD models for men and women included HDL-cholesterol (NRI=0.284) and triglycerides (NRI=0.207) from the basic CHD model, respectively. The discrimination using the CHD model in the Korean cohort was high: the areas under ROC were 0.764 (95% CI 0.752 to 0.774) for men and 0.815 (95% CI 0.795 to 0.835) for women. The Framingham risk function predicted 3-6 times as many CHD events than observed. Recalibration of the Framingham function using the mean values of risk factors and mean CHD incidence rates of the KHS cohort substantially improved the performance of the Framingham functions in the KHS cohort.\n\n\nCONCLUSIONS\nThe present study provides the first evidence that the Framingham risk function overestimates the risk of CHD in the Korean population where CHD incidence is low. The Korean CHD risk model is well-calculated alternations which can be used to predict an individual's risk of CHD and provides a useful guide to identify the groups at high risk for CHD among Koreans.",
"title": ""
},
{
"docid": "f2a2f1e8548cc6fcff6f1d565dfa26c9",
"text": "Cabbage contains the glucosinolate sinigrin, which is hydrolyzed by myrosinase to allyl isothiocyanate. Isothiocyanates are thought to inhibit the development of cancer cells by a number of mechanisms. The effect of cooking cabbage on isothiocyanate production from glucosinolates during and after their ingestion was examined in human subjects. Each of 12 healthy human volunteers consumed three meals, at 48-h intervals, containing either raw cabbage, cooked cabbage, or mustard according to a cross-over design. At each meal, watercress juice, which is rich in phenethyl isothiocyanate, was also consumed to allow individual and temporal variation in postabsorptive isothiocyanate recovery to be measured. Volunteers recorded the time and volume of each urination for 24 h after each meal. Samples of each urination were analyzed for N-acetyl cysteine conjugates of isothiocyanates as a measure of entry of isothiocyanates into the peripheral circulation. Excretion of isothiocyanates was rapid and substantial after ingestion of mustard, a source of preformed allyl isothiocyanate. After raw cabbage consumption, allyl isothiocyanate was again rapidly excreted, although to a lesser extent than when mustard was consumed. On the cooked cabbage treatment, excretion of allyl isothiocyanate was considerably less than for raw cabbage, and the excretion was delayed. The results indicate that isothiocyanate production is more extensive after consumption of raw vegetables but that isothiocyanates still arise, albeit to a lesser degree, when cooked vegetables are consumed. The lag in excretion on the cooked cabbage treatment suggests that the colon microflora catalyze glucosinolate hydrolysis in this case.",
"title": ""
},
{
"docid": "b8d41b4b440641d769f58189db8eaf91",
"text": "Differential diagnosis of trichotillomania is often difficult in clinical practice. Trichoscopy (hair and scalp dermoscopy) effectively supports differential diagnosis of various hair and scalp diseases. The aim of this study was to assess the usefulness of trichoscopy in diagnosing trichotillomania. The study included 370 patients (44 with trichotillomania, 314 with alopecia areata and 12 with tinea capitis). Statistical analysis revealed that the main and most characteristic trichoscopic findings of trichotillomania are: irregularly broken hairs (44/44; 100% of patients), v-sign (24/44; 57%), flame hairs (11/44; 25%), hair powder (7/44; 16%) and coiled hairs (17/44; 39%). Flame hairs, v-sign, tulip hairs, and hair powder were newly identified in this study. In conclusion, we describe here specific trichoscopy features, which may be applied in quick, non-invasive, in-office differential diagnosis of trichotillomania.",
"title": ""
},
{
"docid": "a0d6536cd8c85fe87cb316f92b489d32",
"text": "As a design of information-centric network architecture, Named Data Networking (NDN) provides content-based security. The signature binding the name with the content is the key point of content-based security in NDN. However, signing a content will introduce a significant computation overhead, especially for dynamically generated content. Adversaries can take advantages of such computation overhead to deplete the resources of the content provider. In this paper, we propose Interest Cash, an application-based countermeasure against Interest Flooding for dynamic content. Interest Cash requires a content consumer to solve a puzzle before it sends an Interest. The content consumer should provide a solution to this puzzle as cash to get the signing service from the content provider. The experiment shows that an adversary has to use more than 300 times computation resources of the content provider to commit a successful attack when Interest Cash is used.",
"title": ""
},
{
"docid": "dd15c51d3f5f25d43169c927ac753013",
"text": "After completing this article, readers should be able to: 1. List the risk factors for severe hyperbilirubinemia. 2. Distinguish between physiologic jaundice and pathologic jaundice of the newborn. 3. Recognize the clinical manifestations of acute bilirubin encephalopathy and the permanent clinical sequelae of kernicterus.4. Describe the evaluation of hyperbilirubinemia from birth through 3 months of age. 5. Manage neonatal hyperbilirubinemia, including referral to the neonatal intensive care unit for exchange transfusion.",
"title": ""
},
{
"docid": "1203822bf82dcd890e7a7a60fb282ce5",
"text": "Individuals with psychosocial problems such as social phobia or feelings of loneliness might be vulnerable to excessive use of cyber-technological devices, such as smartphones. We aimed to determine the relationship of smartphone addiction with social phobia and loneliness in a sample of university students in Istanbul, Turkey. Three hundred and sixty-seven students who owned smartphones were given the Smartphone Addiction Scale (SAS), UCLA Loneliness Scale (UCLA-LS), and Brief Social Phobia Scale (BSPS). A significant difference was found in the mean SAS scores (p < .001) between users who declared that their main purpose for smartphone use was to access social networking sites. The BSPS scores showed positive correlations with all six subscales and with the total SAS scores. The total UCLA-LS scores were positively correlated with daily life disturbance, positive anticipation, cyber-oriented relationship, and total scores on the SAS. In regression analyses, total BSPS scores were significant predictors for SAS total scores (β = 0.313, t = 5.992, p < .001). In addition, BSPS scores were significant predictors for all six SAS subscales, whereas UCLA-LS scores were significant predictors for only cyber-oriented relationship subscale scores on the SAS (β = 0.130, t = 2.416, p < .05). The results of this study indicate that social phobia was associated with the risk for smartphone addiction in young people. Younger individuals who primarily use their smartphones to access social networking sites also have an excessive pattern of smartphone use. ARTICLE HISTORY Received 12 January 2016 Accepted 19 February 2016",
"title": ""
},
{
"docid": "fcd9a80d35a24c7222392c11d3376c72",
"text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, one band resonant frequency and return loss bandwidth of the proposed hybrid antenna allows almost independent optimization without noticeably affecting those of the other band.",
"title": ""
},
{
"docid": "b3874f8390e284c119635e7619e7d952",
"text": "Since a vehicle logo is the clearest indicator of a vehicle manufacturer, most vehicle manufacturer recognition (VMR) methods are based on vehicle logo recognition. Logo recognition can be still a challenge due to difficulties in precisely segmenting the vehicle logo in an image and the requirement for robustness against various imaging situations simultaneously. In this paper, a convolutional neural network (CNN) system has been proposed for VMR that removes the requirement for precise logo detection and segmentation. In addition, an efficient pretraining strategy has been introduced to reduce the high computational cost of kernel training in CNN-based systems to enable improved real-world applications. A data set containing 11 500 logo images belonging to 10 manufacturers, with 10 000 for training and 1500 for testing, is generated and employed to assess the suitability of the proposed system. An average accuracy of 99.07% is obtained, demonstrating the high classification potential and robustness against various poor imaging situations.",
"title": ""
},
{
"docid": "56c7c065c390d1ed5f454f663289788d",
"text": "This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76% and the accuracy of 95.30% for character identification.",
"title": ""
},
{
"docid": "265421a07efc8ab26a6766f90bf53245",
"text": "Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks. A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense.\n In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order make their attacks more effective.",
"title": ""
},
{
"docid": "4ad106897a19830c80a40e059428f039",
"text": "In 1972, and later in 1979, at the peak of the golden era of Good Old Fashioned Artificial Intelligence (GOFAI), the voice of philosopher Hubert Dreyfus made itself heard as one of the few calls against the hubristic programme of modelling the human mind as a mechanism of symbolic information processing (Dreyfus, 1979). He did not criticise particular solutions to specific problems; instead his deep concern was with the very foundations of the programme. His critical stance was unusual, at least for most GOFAI practitioners, in that it did not rely on technical issues, but on a philosophical position emanating from phenomenology and existentialism, a fact contributing to his claims being largely ignored or dismissed for a long time by the AI community. But, for the most part, he was eventually proven right. AI’s over-reliance on worldmodelling and planning went against the evidence provided by phenomenology of human activity as situated and with a clear and ever-present focus of practical concern – the body and not some algorithm is the originating locus of intelligent activity (if by intelligent we understand intentional, directed and flexible), and the world is not the sum total of all available facts, but the world-as-it-is-for-this-body. Such concerns were later vindicated by the Brooksian revolution in autonomous robotics with its foundations on embodiment, situatedness and de-centralised mechanisms (Brooks, 1991). Brooks’ practical and methodological preoccupations – building robots largely based on biologically plausible principles and capable of acting in the real world – proved parallel, despite his claim that his approach was not “German philosophy”, to issues raised by Dreyfus. Putting robotics back as the acid test of AI, as oppossed to playing chess and proving theorems, is now often seen as a positive response to Dreyfus’ point that AI was unable to capture true meaning by the summing of meaningless processes. This criticism was later devastatingly recast in Searle’s Chinese Room argument (1980), and extended by Harnad’s Symbol Grounding Problem (1990). Meaningful activity – that is, meaningful for the agent and not only for the designer – must obtain through sensorimotor grounding in the agent’s world, and for this both a body and world are needed. Following these developments, work in autonomous robotics and new AI since the 1990s rebelled against pure connectionism because of its lack of biological plausibility and also because most of connectionist research was carried out in vacuo – it was compellingly argued that neural network models as simple input/output processing units are meaningless for modelling the cognitive capabilities of insects, let alone humans, unless they are embedded in a closed sensorimotor loop of interaction with a world (Cliff, 1991). Objective meaning, that is meaningful internal states and states of the world, can only obtain in an embodied agent whose effector and sensor activities become coordinated",
"title": ""
},
{
"docid": "87e732240f00b112bf2bb44af0ff8ca1",
"text": "Spoken Dialogue Systems (SDS) are man-machine interfaces which use natural language as the medium of interaction. Dialogue corpora collection for the purpose of training and evaluating dialogue systems is an expensive process. User simulators aim at simulating human users in order to generate synthetic data. Existing methods for user simulation mainly focus on generating data with the same statistical consistency as in some reference dialogue corpus. This paper outlines a novel approach for user simulation based on Inverse Reinforcement Learning (IRL). The task of building the user simulator is perceived as a task of imitation learning.",
"title": ""
},
{
"docid": "0cf7ebc02a8396a615064892d9ee6f22",
"text": "With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily. 1 Evolution of Ontology Evolution Acceptance of ontologies as an integral part of knowledge-intensive applications has been growing steadily. The word ontology became a recognized substrate in fields outside the computer science, from bioinformatics to intelligence analysis. With such acceptance, came the use of ontologies in industrial systems and active publishing of ontologies on the (Semantic) Web. More and more often, developing an ontology is not a project undertaken by a single person or a small group of people in a research laboratory, but rather it is a large project with numerous participants, who are often geographically distributed, where the resulting ontologies are used in production environments with paying customers counting on robustness and reliability of the system. The Protégé ontology-development environment1 has become a widely used tool for developing ontologies, with more than 50,000 registered users. The Protégé group works closely with some of the tool’s users and we have a continuous stream of requests from them on the features that they would like to have supported in terms of managing and developing ontologies collaboratively. The configurations for collaborative development differ significantly however. For instance, Perot Systems2 uses a client–server mode of Protégé with multiple users simultaneously accessing the same copy of the ontology on the server. The NCI Center for Bioinformatics, which develops the NCI The1 http://protege.stanford.edu 2 http://www.perotsystems.com saurus3 has a different configuration: a baseline version of the Thesaurus is published regularly and between the baselines, multiple editors work asynchronously on their own versions. At the end of the cycle, the changes are reconciled. In the OBO project,4 ontology developers post their ontologies on a sourceforge site, using the sourceforge version-control system to publish successive versions. In addition to specific requirements to support each of these collaboration models, users universally request the ability to annotate their changes, to hold discussions about the changes, to see the change history with respective annotations, and so on. When developing tool support for all the different modes and tasks in the process of ontology evolution, we started with separate and unrelated sets of Protégé plugins that supported each of the collaborative editing modes. 
This approach, however, was difficult to maintain; besides, we saw that tools developed for one mode (such as change annotation) will be useful in other modes. Therefore, we have developed a single unified framework that is flexible enough to work in either synchronous or asynchronous mode, in those environments where Protégé and our plugins are used to track changes and in those environments where there is no record of the change steps. At the center of the system is a Change and Annotation Ontology (CHAO) with instances recording specific changes and meta-information about them (author, timestamp, annotations, acceptance status, etc.). When Protégé and its change-management plugins are used for ontology editing, these tools create CHAO instances as a side product of the editing process. Otherwise, the CHAO instances are created from a structural diff produced by comparing two versions. The CHAO instances then drive the user interface that displays changes between versions to a user, allows him to accept and reject changes, to view concept history, to generate a new baseline, to publish a history of changes that other applications can use, and so on. This paper makes the following contributions: – analysis and categorization of different scenarios for ontology maintenance and evolution and their functional requirements (Section 2) – development of a comprehensive solution that addresses most of the functional requirements from the different scenarios in a single unified framework (Section 3) – implementation of the solution as a set of open-source Protégé plugins (Section 4) 2 Ontology-Evolution Scenarios and Tasks We will now discuss different scenarios for ontology maintenance and evolution, their attributes, and functional requirements.",
"title": ""
},
{
"docid": "6fa90d1212c53f4bf5da7c49c63a4248",
"text": "Social coding paradigm is reshaping the distributed software development with a surprising speed in recent years. Github, a remarkable social coding community, attracts a huge number of developers in a short time. Various kinds of social networks are formed based on social activities among developers. Why this new paradigm can achieve such a great success in attracting external developers, and how they are connected in such a massive community, are interesting questions for revealing power of social coding paradigm. In this paper, we firstly compare the growth curves of project and user in GitHub with three traditional open source software communities to explore differences of their growth modes. We find an explosive growth of the users in GitHub and introduce the Diffusion of Innovation theory to illustrate intrinsic sociological basis of this phenomenon. Secondly, we construct follow-networks according to the follow behaviors among developers in GitHub. Finally, we present four typical social behavior patterns by mining follow-networks containing independence-pattern, group-pattern, star-pattern and hub-pattern. This study can provide several instructions of crowd collaboration to newcomers. According to the typical behavior patterns, the community manager could design corresponding assistive tools for developers.",
"title": ""
},
{
"docid": "5c129341d3b250dcbd5732a61ae28d53",
"text": "Circadian rhythms govern a remarkable variety of metabolic and physiological functions. Accumulating epidemiological and genetic evidence indicates that the disruption of circadian rhythms might be directly linked to cancer. Intriguingly, several molecular gears constituting the clock machinery have been found to establish functional interplays with regulators of the cell cycle, and alterations in clock function could lead to aberrant cellular proliferation. In addition, connections between the circadian clock and cellular metabolism have been identified that are regulated by chromatin remodelling. This suggests that abnormal metabolism in cancer could also be a consequence of a disrupted circadian clock. Therefore, a comprehensive understanding of the molecular links that connect the circadian clock to the cell cycle and metabolism could provide therapeutic benefit against certain human neoplasias.",
"title": ""
},
{
"docid": "a0b862a758c659b62da2114143bf7687",
"text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.",
"title": ""
}
] |
scidocsrr
|
1abfc6f4050ab756d113f7310f3be113
|
Adoption of Big Data Solutions: A study on its security determinants using Sec-TOE Framework
|
[
{
"docid": "f98045c0401c7d492a5b1ea449f2fbf7",
"text": "Today, information technology (IT) is universally regarded as an essential tool in enhancing the competitiveness of the economy of a country. There is consensus that IT has significant effects on the productivity of firms. These effects will only be realized if, and when, IT are widely spread and used. It is essential to understand the determinants of IT adoption. Consequently it is necessary to know the theoretical models. There are few reviews in the literature about the comparison of IT adoption models at the individual level, and to the best of our knowledge there are even fewer at the firm level. This review will fill this gap. In this study, we review theories for adoption models at the firm level used in information systems literature and discuss two prominent models: diffusion on innovation (DOI) theory, and the technology, organization, and environment (TOE) framework. The DOI found that individual characteristics, internal characteristics of organizational structure, and external characteristics of the organization are important antecedents to organizational innovativeness. The TOE framework identifies three aspects of an enterprise's context that influence the process by which it adopts and implements a technological innovation: technological context, organizational context, and environmental context. We made a thorough analysis of the TOE framework, analysing the studies that used only this theory and the studies that combine the TOE framework with other theories such as: DOI, institutional theory, and the Iacovou, Benbasat, and Dexter model. The institutional theory helps us to understand the factors that influence the adoption of interorganizational systems (IOSs); it postulates that mimetic, coercive, and normative institutional pressures existing in an institutionalized environment may influence the organization’s predisposition toward an IT-based interorganizational system. The Iacovou, Benbasat, and Dexter model, analyses IOSs characteristics that influence firms to adopt IT innovations. It is based on three contexts: perceived benefits, organizational readiness, and external pressure. The analysis of these models takes into account the empirical literature, and the difference between independent and dependent variables. The paper also makes recommendations for future research.",
"title": ""
},
{
"docid": "7438ff346fa26661822a3a96c13c6d6e",
"text": "As in any new technology adoption in organizations, big data solutions (BDS) also presents some security threat and challenges, especially due to the characteristics of big data itself the volume, velocity and variety of data. Even though many security considerations associated to the adoption of BDS have been publicized, it remains unclear whether these publicized facts have any actual impact on the adoption of the solutions. Hence, it is the intent of this research-in-progress to examine the security determinants by focusing on the influence that various technological factors in security, organizational security view and security related environmental factors have on BDS adoption. One technology adoption framework, the TOE (technological-organizational-environmental) framework is adopted as the main conceptual research framework. This research will be conducted using a Sequential Explanatory Mixed Method approach. Quantitative method will be used for the first part of the research, specifically using an online questionnaire survey. The result of this first quantitative process will then be further explored and complemented with a case study. Results generated from both quantitative and qualitative phases will then be triangulated and a cross-study synthesis will be conducted to form the final result and discussion.",
"title": ""
}
] |
[
{
"docid": "3ede320df9b96c7b9a5806813e4a42c4",
"text": "Sensors deployed to monitor the surrounding environment report such information as event type, location, and time when a real event of interest is detected. An adversary may identify the real event source through eavesdropping and traffic analysis. Previous work has studied the source location privacy problem under a local adversary model. In this work, we aim to provide a stronger notion: event source unobservability, which promises that a global adversary cannot know whether a real event has ever occurred even if he is capable of collecting and analyzing all the messages in the network at all the time. Clearly, event source unobservability is a desirable and critical security property for event monitoring applications, but unfortunately it is also very difficult and expensive to achieve for resource-constrained sensor network.\n Our main idea is to introduce carefully chosen dummy traffic to hide the real event sources in combination with mechanisms to drop dummy messages to prevent explosion of network traffic. To achieve the latter, we select some sensors as proxies that proactively filter dummy messages on their way to the base station. Since the problem of optimal proxy placement is NP-hard, we employ local search heuristics. We propose two schemes (i) Proxy-based Filtering Scheme (PFS) and (ii) Tree-based Filtering Scheme (TFS) to accurately locate proxies. Simulation results show that our schemes not only quickly find nearly optimal proxy placement, but also significantly reduce message overhead and improve message delivery ratio. A prototype of our scheme was implemented for TinyOS-based Mica2 motes.",
"title": ""
},
{
"docid": "ec4d4e6d6f1c95ba3e5f0369562e25c4",
"text": "In this paper we merge individual census data, individual patenting data, and individual IQ data from Finnish Defence Force to look at the probability of becoming an innovator and at the returns to invention. On the former, we find that: (i) it is strongly correlated with parental income; (ii) this correlation is greatly decreased when we control for parental education and child IQ. Turning to the returns to invention, we find that: (i) inventing increases the annual wage rate of the inventor by a significant amounts over a prolonged period after the invention; (ii) coworkers in the same firm also benefit from an innovation, the highest returns being earned by senior managers and entrepreneurs in the firm, especially in the long term. Finally, we find that becoming an inventor enhances both, intragenerational and intergenerational income mobility, and that inventors are very likely to make it to top income brackets.",
"title": ""
},
{
"docid": "1728add8c17ff28fd9e580f4fb388155",
"text": "We study response selection for multi-turn conversation in retrieval based chatbots. Existing works either ignores relationships among utterances, or misses important information in context when matching a response with a highly abstract context vector finally. We propose a new session based matching model to address both problems. The model first matches a response with each utterance on multiple granularities, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models the relationships among the utterances. The final matching score is calculated with the hidden states of the RNN. Empirical study on two public data sets shows that our model can significantly outperform the state-of-the-art methods for response selection in multi-turn conversation.",
"title": ""
},
{
"docid": "5bb75cabe435f83b4f587bc04ba6cde9",
"text": "Cloud computing represents a novel on-demand computing approach where resources are provided in compliance to a set of predefined non-functional properties specified and negotiated by means of Service Level Agreements (SLAs). In order to avoid costly SLA violations and to timely react to failures and environmental changes, advanced SLA enactment strategies are necessary, which include appropriate resource-monitoring concepts. Currently, Cloud providers tend to adopt existing monitoring tools, as for example those from Grid environments. However, those tools are usually restricted to locality and homogeneity of monitored objects, are not scalable, and do not support mapping of low-level resource metrics e.g., system up and down time to high-level application specific SLA parameters e.g., system availability. In this paper we present a novel framework for managing the mappings of the Low-level resource Metrics to High-level SLAs (LoM2HiS framework). The LoM2HiS framework is embedded into FoSII infrastructure, which facilitates autonomic SLA management and enforcement. Thus, the LoM2HiS framework detects future SLA violation threats and can notify the enactor component to act so as to avert the threats. We discuss the conceptual model of the LoM2HiS framework, followed by the implementation details. Finally, we present the first experimental results and a proof of concept of the LoM2HiS framework.",
"title": ""
},
{
"docid": "7e68ac0eee3ab3610b7c68b69c27f3b6",
"text": "When digitizing a document into an image, it is common to include a surrounding border region to visually indicate that the entire document is present in the image. However, this border should be removed prior to automated processing. In this work, we present a deep learning system, PageNet, which identifies the main page region in an image in order to segment content from both textual and non-textual border noise. In PageNet, a Fully Convolutional Network obtains a pixel-wise segmentation which is post-processed into a quadrilateral region. We evaluate PageNet on 4 collections of historical handwritten documents and obtain over 94% mean intersection over union on all datasets and approach human performance on 2 collections. Additionally, we show that PageNet can segment documents that are overlayed on top of other documents.",
"title": ""
},
{
"docid": "768749e22e03aecb29385e39353dd445",
"text": "Query logs are of great interest for scientists and companies for research, statistical and commercial purposes. However, the availability of query logs for secondary uses raises privacy issues since they allow the identification and/or revelation of sensitive information about individual users. Hence, query anonymization is crucial to avoid identity disclosure. To enable the publication of privacy-preserved -but still usefulquery logs, in this paper, we present an anonymization method based on semantic microaggregation. Our proposal aims at minimizing the disclosure risk of anonymized query logs while retaining their semantics as much as possible. First, a method to map queries to their formal semantics extracted from the structured categories of the Open Directory Project is presented. Then, a microaggregation method is adapted to perform a semantically-grounded anonymization of query logs. To do so, appropriate semantic similarity and semantic aggregation functions are proposed. Experiments performed using real AOL query logs show that our proposal better retains the utility of anonymized query logs than other related works, while also minimizing the disclosure risk.",
"title": ""
},
{
"docid": "604362129b2ed5510750cc161cf54bbf",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and speed are also important concerns. These are the two main characteristics that differentiate one encryption algorithm from another. This paper provides the performance comparison between four of the most commonly used encryption algorithms: DES(Data Encryption Standard), 3DES(Triple DES), BLOWFISH and AES (Rijndael). The comparison has been conducted by running several setting to process different sizes of data blocks to evaluate the algorithms encryption and decryption speed. Based on the performance analysis of these algorithms under different hardware and software platform, it has been concluded that the Blowfish is the best performing algorithm among the algorithms under the security against unauthorized attack and the speed is taken into consideration.",
"title": ""
},
{
"docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7",
"text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].",
"title": ""
},
{
"docid": "c5e8ddfd076377992848f3032d9dff93",
"text": "Speech Activity Detection(SAD) is a well researched problem for communication, command and control applications, where audio segments are short duration and solution proposed for noisy as well as clean environments. In this study, we investigate the SAD problem using NASA’s Apollo space mission data [1]. Unlike traditional speech corpora, the audio recordings in Apollo are extensive from a longitudinal perspective (i.e., 612 days each). From SAD perspective, the data offers many challenges: (i) noise distortion with variable SNR, (ii) channel distortion, and (iii) extended periods of non-speech activity. Here, we use the recently proposed Combo-SAD, which has performed remarkably well in DARPA RATS evaluations, as our baseline system [2]. Our analysis reveals that the ComboSAD performs well when speech-pause durations are balanced in the audio segment, but deteriorates significantly when speech is sparse or absent. In order to mitigate this problem, we propose a simple yet efficient technique which builds an alternative model of speech using data from a separate corpora, and embeds this new information within the Combo-SAD framework. Our experiments show that the proposed approach has a major impact on SAD performance (i.e., +30% absolute), especially in audio segments that contain sparse or no speech information.",
"title": ""
},
{
"docid": "142acad6b5a76543cf023bfc25cb34f7",
"text": "Reasoning about ordinary human situations and activities requires the availability of diverse types of knowledge, including expectations about the probable results of actions and the lexical entailments for many predicates. We describe initial work to acquire such a collection of conditional (if–then) knowledge by exploiting presuppositional discourse patterns (such as ones involving ‘but’, ‘yet’, and ‘hoping to’) and abstracting the matched material into general rules.",
"title": ""
},
{
"docid": "a28c252f9f3e96869c72e6e41146b5bc",
"text": "Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they also simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract the features from EEG signals, among these methods are time frequency distributions (TFD), fast fourier transform (FFT), eigenvector methods (EM), wavelet transform (WT), and auto regressive method (ARM), and so on. In general, the analysis of EEG signal has been the subject of several studies, because of its ability to yield an objective mode of recording brain stimulation which is widely used in brain-computer interface researches with application in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, shall be discussing some conventional methods of EEG feature extraction methods, comparing their performances for specific task, and finally, recommending the most suitable method for feature extraction based on performance.",
"title": ""
},
{
"docid": "d2541bdc0eb9bf65fdeb1e50358c62eb",
"text": "Data management is a crucial aspect in the Internet of Things (IoT) on Cloud. Big data is about the processing and analysis of large data repositories on Cloud computing. Big document summarization method is an important technique for data management of IoT. Traditional document summarization methods are restricted to summarize suitable information from the exploding IoT big data on Cloud. This paper proposes a big data (i.e., documents, texts) summarization method using the extracted semantic feature which it is extracted by distributed parallel processing of NMF based cloud technique of Hadoop. The proposed method can well represent the inherent structure of big documents set using the semantic feature by the non-negative matrix factorization (NMF). In addition, it can summarize the big data size of document for IoT using the distributed parallel processing based on Hadoop. The experimental results demonstrate that the proposed method can summarize the big data document comparing with the single node of summarization methods. 1096 Yoo-Kang Ji et al.",
"title": ""
},
{
"docid": "aff504d1c2149d13718595fd3e745eb0",
"text": "Figure 1 illustrates a typical example of a prediction problem: given some noisy observations of a dependent variable at certain values of the independent variable , what is our best estimate of the dependent variable at a new value, ? If we expect the underlying function to be linear, and can make some assumptions about the input data, we might use a least-squares method to fit a straight line (linear regression). Moreover, if we suspect may also be quadratic, cubic, or even nonpolynomial, we can use the principles of model selection to choose among the various possibilities. Gaussian process regression (GPR) is an even finer approach than this. Rather than claiming relates to some specific models (e.g. ), a Gaussian process can represent obliquely, but rigorously, by letting the data ‘speak’ more clearly for themselves. GPR is still a form of supervised learning, but the training data are harnessed in a subtler way. As such, GPR is a less ‘parametric’ tool. However, it’s not completely free-form, and if we’re unwilling to make even basic assumptions about , then more general techniques should be considered, including those underpinned by the principle of maximum entropy; Chapter 6 of Sivia and Skilling (2006) offers an introduction.",
"title": ""
},
{
"docid": "5b0eef5eed1645ae3d88bed9b20901b9",
"text": "We present a radically new approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry’s bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2 security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with Õ(λ · L) per-gate computation – i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is Õ(λ), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results for LWE, but with worse performance. We introduce a number of further optimizations to our schemes. As an example, for circuits of large width – e.g., where a constant fraction of levels have width at least λ – we can reduce the per-gate computation of the bootstrapped version to Õ(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω̃(λ) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011). ∗Sponsored by the Air Force Research Laboratory (AFRL). Disclaimer: This material is based on research sponsored by DARPA under agreement number FA8750-11-C-0096 and FA8750-11-2-0225. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Approved for Public Release, Distribution Unlimited. †This material is based on research sponsored by DARPA under Agreement number FA8750-11-2-0225. All disclaimers as above apply.",
"title": ""
},
{
"docid": "0dc0b31c4f174a69b5917cdf93a5dd22",
"text": "Webpage is becoming a more and more important visual input to us. While there are few studies on saliency in webpage, we in this work make a focused study on how humans deploy their attention when viewing webpages and for the first time propose a computational model that is designed to predict webpage saliency. A dataset is built with 149 webpages and eye tracking data from 11 subjects who free-view the webpages. Inspired by the viewing patterns on webpages, multi-scale feature maps that contain object blob representation and text representation are integrated with explicit face maps and positional bias. We propose to use multiple kernel learning (MKL) to achieve a robust integration of various feature maps. Experimental results show that the proposed model outperforms its counterparts in predicting webpage saliency.",
"title": ""
},
{
"docid": "2438a082eac9852d3dbcea22aa0402b2",
"text": "Importance\nDietary modification remains key to successful weight loss. Yet, no one dietary strategy is consistently superior to others for the general population. Previous research suggests genotype or insulin-glucose dynamics may modify the effects of diets.\n\n\nObjective\nTo determine the effect of a healthy low-fat (HLF) diet vs a healthy low-carbohydrate (HLC) diet on weight change and if genotype pattern or insulin secretion are related to the dietary effects on weight loss.\n\n\nDesign, Setting, and Participants\nThe Diet Intervention Examining The Factors Interacting with Treatment Success (DIETFITS) randomized clinical trial included 609 adults aged 18 to 50 years without diabetes with a body mass index between 28 and 40. The trial enrollment was from January 29, 2013, through April 14, 2015; the date of final follow-up was May 16, 2016. Participants were randomized to the 12-month HLF or HLC diet. The study also tested whether 3 single-nucleotide polymorphism multilocus genotype responsiveness patterns or insulin secretion (INS-30; blood concentration of insulin 30 minutes after a glucose challenge) were associated with weight loss.\n\n\nInterventions\nHealth educators delivered the behavior modification intervention to HLF (n = 305) and HLC (n = 304) participants via 22 diet-specific small group sessions administered over 12 months. The sessions focused on ways to achieve the lowest fat or carbohydrate intake that could be maintained long-term and emphasized diet quality.\n\n\nMain Outcomes and Measures\nPrimary outcome was 12-month weight change and determination of whether there were significant interactions among diet type and genotype pattern, diet and insulin secretion, and diet and weight loss.\n\n\nResults\nAmong 609 participants randomized (mean age, 40 [SD, 7] years; 57% women; mean body mass index, 33 [SD, 3]; 244 [40%] had a low-fat genotype; 180 [30%] had a low-carbohydrate genotype; mean baseline INS-30, 93 μIU/mL), 481 (79%) completed the trial. In the HLF vs HLC diets, respectively, the mean 12-month macronutrient distributions were 48% vs 30% for carbohydrates, 29% vs 45% for fat, and 21% vs 23% for protein. Weight change at 12 months was -5.3 kg for the HLF diet vs -6.0 kg for the HLC diet (mean between-group difference, 0.7 kg [95% CI, -0.2 to 1.6 kg]). There was no significant diet-genotype pattern interaction (P = .20) or diet-insulin secretion (INS-30) interaction (P = .47) with 12-month weight loss. There were 18 adverse events or serious adverse events that were evenly distributed across the 2 diet groups.\n\n\nConclusions and Relevance\nIn this 12-month weight loss diet study, there was no significant difference in weight change between a healthy low-fat diet vs a healthy low-carbohydrate diet, and neither genotype pattern nor baseline insulin secretion was associated with the dietary effects on weight loss. In the context of these 2 common weight loss diet approaches, neither of the 2 hypothesized predisposing factors was helpful in identifying which diet was better for whom.\n\n\nTrial Registration\nclinicaltrials.gov Identifier: NCT01826591.",
"title": ""
},
{
"docid": "eeb98a6bcec9a401d36d62ca49bb4b34",
"text": "A number of imaging technologies reconstruct an image function from its Radon projection using the convolution backprojection method. The convolution is an O(N2 logN) algorithm, where the image consists of N×N pixels, while the backprojection is an O(N3) algorithm, thus constituting the major computational burden of the convolution backprojection method. An O(N2 logN) multilevel backprojection method is presented here. When implemented with a Fourier-domain postprocessing technique, also presented here, the resulting image quality is similar or superior to the image quality of the classical backprojection technique.",
"title": ""
},
{
"docid": "fc25adc42c7e4267a9adfe13ddcabf75",
"text": "As automotive electronics have increased, models for predicting the transmission characteristics of wiring harnesses, suitable for the automotive EMC tests, are needed. In this paper, the repetitive structures of the cross-sectional shape of the twisted pair cable is focused on. By taking account of RLGC parameters, a theoretical analysis modeling for whole cables, based on multi-conductor transmission line theory, is proposed. Furthermore, the theoretical values are compared with measured values and a full-wave simulator. In case that a twisted pitch, a length of the cable, and a height of reference ground plane are changed, the validity of the proposed model is confirmed.",
"title": ""
},
{
"docid": "8adf698c03f01dced7d021cc103d51a4",
"text": "Real world data, especially in the domain of robotics, is notoriously costly to collect. One way to circumvent this can be to leverage the power of simulation in order to produce large amounts of labelled data. However, training models on simulated images does not readily transfer to real-world ones. Using domain adaptation methods to cross this “reality gap” requires at best a large amount of unlabelled real-world data, whilst domain randomization alone can waste modeling power, rendering certain reinforcement learning (RL) methods unable to learn the task of interest. In this paper, we present Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data. Our method learns to translate randomized rendered images into their equivalent non-randomized, canonical versions. This in turn allows for real images to also be translated into canonical sim images. We demonstrate the effectiveness of this sim-to-real approach by training a vision-based closed-loop grasping reinforcement learning agent in simulation, and then transferring it to the real world to attain 70% zeroshot grasp success on unseen objects, a result that almost doubles the success of learning the same task directly on domain randomization alone. Additionally, by joint finetuning in the real-world with only 5,000 real-world grasps, our method achieves 91%, outperforming a state-of-the-art system trained with 580,000 real-world grasps, resulting in a reduction of real-world data by more than 99%.",
"title": ""
},
{
"docid": "de116bf7704cb03b5f6890255552f337",
"text": "Brain-computer interfaces (BCIs) enable people with paralysis to communicate with their environment. Motor imagery can be used to generate distinct patterns of cortical activation in the electroencephalogram (EEG) and thus control a BCI. To elucidate the cortical correlates of BCI control, users of a sensory motor rhythm (SMR)-BCI were classified according to their BCI control performance. In a second session these participants performed a motor imagery, motor observation and motor execution task in a functional magnetic resonance imaging (fMRI) scanner. Group difference analysis between high and low aptitude BCI users revealed significantly higher activation of the supplementary motor areas (SMA) for the motor imagery and the motor observation tasks in high aptitude users. Low aptitude users showed no activation when observing movement. The number of activated voxels during motor observation was significantly correlated with accuracy in the EEG-BCI task (r=0.53). Furthermore, the number of activated voxels in the right middle frontal gyrus, an area responsible for processing of movement observation, correlated (r=0.72) with BCI-performance. This strong correlation highlights the importance of these areas for task monitoring and working memory as task goals have to be activated throughout the BCI session. The ability to regulate behavior and the brain through learning mechanisms involving imagery such as required to control a BCI constitutes the consequence of ideo-motor co-activation of motor brain systems during observation of movements. The results demonstrate that acquisition of a sensorimotor program reflected in SMR-BCI-control is tightly related to the recall of such sensorimotor programs during observation of movements and unrelated to the actual execution of these movement sequences.",
"title": ""
}
] |
scidocsrr
|
6c7e3ef92a24269304570fa71d090738
|
Experiences inside the Ubiquitous Oulu Smart City
|
[
{
"docid": "3d9fe9c30d09a9e66f7339b0ad24edb7",
"text": "Due to progress in wired and wireless home networking, sensor networks, networked appliances, mechanical and control engineering, and computers, we can build smart homes, and many smart home projects are currently proceeding throughout the world. However, we have to be careful not to repeat the same mistake that was made with home automation technologies that were booming in the 1970s. That is, [total?] automation should not be a goal of smart home technologies. I believe the following points are important in construction of smart homes from users¿ viewpoints: development of interface technologies between humans and systems for detection of human intensions, feelings, and situations; improvement of system knowledge; and extension of human activity support outside homes to the scopes of communities, towns, and cities.",
"title": ""
}
] |
[
{
"docid": "3ec70222394018f1d889692ae850b5ca",
"text": "In this paper, we proposed an automatic method to segment text from complex background for recognition task. First, a rule-based sampling method is proposed to get portion of the text pixels. Then, the sampled pixels are used for training Gaussian mixture models of intensity and hue components in HSI color space. Finally, the trained GMMs together with the spatial connectivity information are used for segment all of text pixels form their background. We used the word recognition rate to evaluate the segmentation result. Experiments results show that the proposed algorithm can work fully automatically and performs much better than the traditional methods.",
"title": ""
},
{
"docid": "9e3de4720dade2bb73d78502d7cccc8b",
"text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most twovoxel thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "ff6cec55a05338f78b5ad57d2bc6922a",
"text": "Developing a virtual 3D environment by using game engine is a strategy to incorporate various multimedia data into one platform. The characteristic of game engine that is preinstalled with interactive and navigation tools allows users to explore and engage with the game objects. However, most CAD and GIS applications are not equipped with 3D tools and navigation systems intended to the user experience. In particular, 3D game engines provide standard 3D navigation tools as well as any programmable view to create engaging navigation thorough the virtual environment. By using a game engine, it is possible to create other interaction such as object manipulation, non playing character (NPC) interaction with player and/or environment. We conducted analysis on previous game engines and experiment on urban design project with Unity3D game engine for visualization and interactivity. At the end, we present the advantages and limitations using game technology as visual representation tool for architecture and urban design studies.",
"title": ""
},
{
"docid": "56aa0d8c7d0fa135f5b50ee0aa744cbd",
"text": "We explored cultural and historical variations in concepts of happiness. First, we analyzed the definitions of happiness in dictionaries from 30 nations to understand cultural similarities and differences in happiness concepts. Second, we analyzed the definition of happiness in Webster's dictionaries from 1850 to the present day to understand historical changes in American English. Third, we coded the State of the Union addresses given by U.S. presidents from 1790 to 2010. Finally, we investigated the appearance of the phrases happy nation versus happy person in Google's Ngram Viewer from 1800 to 2008. Across cultures and time, happiness was most frequently defined as good luck and favorable external conditions. However, in American English, this definition was replaced by definitions focused on favorable internal feeling states. Our findings highlight the value of a historical perspective in the study of psychological concepts.",
"title": ""
},
{
"docid": "11a000ec43847bae955160cf7ea3106d",
"text": "Malicious activities on the Internet are one of the most dangerous threats to Internet users and organizations. Malicious software controlled remotely is addressed as one of the most critical methods for executing the malicious activities. Since blocking domain names for command and control (C&C) of the malwares by analyzing their Domain Name System (DNS) activities has been the most effective and practical countermeasure, attackers attempt to hide their malwares by adopting several evasion techniques, such as client sub-grouping and domain flux on DNS activities. A common feature of the recently developed evasion techniques is the utilization of multiple domain names for render malware DNS activities temporally and spatially more complex. In contrast to analyzing the DNS activities for a single domain name, detecting the malicious DNS activities for multiple domain names is not a simple task. The DNS activities of malware that uses multiple domain names, termed multi-domain malware, are sparser and less synchronized with respect to space and time. In this paper, we introduce a malware activity detection mechanism, GMAD: Graph-based Malware Activity Detection that utilizes a sequence of DNS queries in order to achieve robustness against evasion techniques. GMAD uses a graph termed Domain Name Travel Graph which expresses DNS query sequences to detect infected clients and malicious domain names. In addition to detecting malware C&C domain names, GMAD detects malicious DNS activities such as blacklist checking and fake DNS querying. To detect malicious domain names utilized to malware activities, GMAD applies domain name clustering using the graph structure and determines malicious clusters by referring to public blacklists. Through experiments with four sets of DNS traffic captured in two ISP networks in the U.S. and South Korea, we show that GMAD detected thousands of malicious domain names that had neither been blacklisted nor detected through group activity of DNS clients. In a detection accuracy evaluation, GMAD showed an accuracy rate higher than 99% on average, with a higher than 90% precision and lower than 0:5% false positive rate. It is shown that the proposed method is effective for detecting multi-domain malware activities irrespective of evasion techniques. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b7c7984f10f5e55de0c497798b1d64ac",
"text": "The relationships between personality traits and performance are often assumed to be linear. This assumption has been challenged conceptually and empirically, but results to date have been inconclusive. In the current study, we took a theory-driven approach in systematically addressing this issue. Results based on two different samples generally supported our expectations of the curvilinear relationships between personality traits, including Conscientiousness and Emotional Stability, and job performance dimensions, including task performance, organizational citizenship behavior, and counterproductive work behaviors. We also hypothesized and found that job complexity moderated the curvilinear personality–performance relationships such that the inflection points after which the relationships disappear were lower for low-complexity jobs than they were for high-complexity jobs. This finding suggests that high levels of the two personality traits examined are more beneficial for performance in high- than low-complexity jobs. We conclude by discussing the implications of these findings for the use of personality in personnel selection.",
"title": ""
},
{
"docid": "9747e2be285a5739bd7ee3b074a20ffc",
"text": "While software metrics are a generally desirable feature in the software management functions of project planning and project evaluation, they are of especial importance with a new technology such as the object-oriented approach. This is due to the significant need to train software engineers in generally accepted object-oriented principles. This paper presents theoretical work that builds a suite of metrics for object-oriented design. In particular, these metrics are based upon measurement theory and are informed by the insights of experienced object-oriented software developers. The proposed metrics are formally evaluated against a widelyaccepted list of software metric evaluation criteria.",
"title": ""
},
{
"docid": "a4473c2cc7da3fb5ee52b60cee24b9b9",
"text": "The ALVINN (Autonomous h d Vehide In a N d Network) projea addresses the problem of training ani&ial naxal naarork in real time to perform difficult perapaon tasks. A L W is a back-propagation network dmpd to dnve the CMU Navlab. a modided Chevy van. 'Ibis ptpa describes the training techniques which allow ALVIN\" to luun in under 5 minutes to autonomously conm>l the Navlab by wardung ahuamr, dziver's rmaions. Usingthese technrques A L W has b&n trained to drive in a variety of Cirarmstanccs including single-lane paved and unprved roads. and multi-lane lined and rmlinecd roads, at speeds of up IO 20 miles per hour",
"title": ""
},
{
"docid": "3ddf6fab70092eade9845b04dd8344a0",
"text": "Fractional Fourier transform (FRFT) is a generalization of the Fourier transform, rediscovered many times over the past 100 years. In this paper, we provide an overview of recent contributions pertaining to the FRFT. Specifically, the paper is geared toward signal processing practitioners by emphasizing the practical digital realizations and applications of the FRFT. It discusses three major topics. First, the manuscripts relates the FRFT to other mathematical transforms. Second, it discusses various approaches for practical realizations of the FRFT. Third, we overview the practical applications of the FRFT. From these discussions, we can clearly state that the FRFT is closely related to other mathematical transforms, such as time–frequency and linear canonical transforms. Nevertheless, we still feel that major contributions are expected in the field of the digital realizations and its applications, especially, since many digital realizations of a b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "f21e0b6062b88a14e3e9076cdfd02ad5",
"text": "Beyond being facilitators of human interactions, social networks have become an interesting target of research, providing rich information for studying and modeling user’s behavior. Identification of personality-related indicators encrypted in Facebook profiles and activities are of special concern in our current research efforts. This paper explores the feasibility of modeling user personality based on a proposed set of features extracted from the Facebook data. The encouraging results of our study, exploring the suitability and performance of several classification techniques, will also be presented.",
"title": ""
},
{
"docid": "ed097b44837a57ad0053ae06a95f1543",
"text": "For underwater videos, the performance of object tracking is greatly affected by illumination changes, background disturbances and occlusion. Hence, there is a need to have a robust function that computes image similarity, to accurately track the moving object. In this work, a hybrid model that incorporates the Kalman Filter, a Siamese neural network and a miniature neural network has been developed for object tracking. It was observed that the usage of the Siamese network to compute image similarity significantly improved the robustness of the tracker. Although the model was developed for underwater videos, it was found that it performs well for both underwater and human surveillance videos. A metric has been defined for analyzing detections-to-tracks mapping accuracy. Tracking results have been analyzed using Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP)metrics.",
"title": ""
},
{
"docid": "965b13ed073b4f3d1c97beffe4db1397",
"text": "The purpose of this study was to develop a method of classifying cancers to specific diagnostic categories based on their gene expression signatures using artificial neural networks (ANNs). We trained the ANNs using the small, round blue-cell tumors (SRBCTs) as a model. These cancers belong to four distinct diagnostic categories and often present diagnostic dilemmas in clinical practice. The ANNs correctly classified all samples and identified the genes most relevant to the classification. Expression of several of these genes has been reported in SRBCTs, but most have not been associated with these cancers. To test the ability of the trained ANN models to recognize SRBCTs, we analyzed additional blinded samples that were not previously used for the training procedure, and correctly classified them in all cases. This study demonstrates the potential applications of these methods for tumor diagnosis and the identification of candidate targets for therapy.",
"title": ""
},
{
"docid": "d7fb7e12e0ec941fef8a721f63c91337",
"text": "This paper presents navigation system for an omni-directional AGV (automatic guided vehicle) with Mecanum wheels. The Mecanum wheel, one design for the wheel which can move in any direction, is a conventional wheel with a series of rollers attached to its circumference. The localization techniques for the general mobile robot use basically encoder. Otherwise, they use gyro and electronic compass with encoder. However, it is difficult to use the encoder because in the Mecanum wheel the slip occurs frequently by the rollers attached to conventional wheel's circumference. Hence, we propose the localization of the omnidirectional AGV with the Mecanum wheel. The proposed localization uses encoder, gyro, and accelerometer. In this paper, we ourselves designed and made the AGV with the Mecanum wheels for experiment. And we analyzed the accuracy of the localization when the AGV moves sideways a 20m distance at about 20cm/s and 38cm/s, respectively. In experimental result, we verified that the accuracies of the proposed localization are 27.4944mm and 29.2521mm respectively.",
"title": ""
},
{
"docid": "675007890407b7e8a7d15c1255e77ec6",
"text": "This study investigated the influence of the completeness of CRM relational information processes on customer-based relational performance and profit performance. In addition, interaction orientation and CRM readiness were adopted as moderators on the relationship between CRM relational information processes and customer-based performance. Both qualitative and quantitative approaches were applied in this study. The results revealed that the completeness of CRM relational information processes facilitates customer-based relational performance (i.e., customer satisfaction, and positive WOM), and in turn enhances profit performance (i.e., efficiency with regard to identifying, acquiring and retaining, and converting unprofitable customers to profitable ones). The alternative model demonstrated that both interaction orientation and CRM readiness play a mediating role in the relationship between information processes and relational performance. Managers should strengthen the completeness and smoothness of CRM information processes, should increase the level of interactional orientation with customers and should maintain firm CRM readiness to service their customers. The implications of this research and suggestions for managers were also discussed.",
"title": ""
},
{
"docid": "dc2c10774d761875fb9de0c2953af199",
"text": "The formation of precipitates, especially along austenite grain boundaries, greatly affects the formation of transverse cracks on the surface of continuous-cast steel. The steel composition and cooling history influences the formation of precipitates, and the higher temperature and corresponding larger grain growth rate under oscillation marks or surface depressions also have an important effect on crack problems. This paper develops a model to predict and track the amount, composition and size distribution of precipitates and the grain size in order to predict the susceptibility of different steel grades to ductility problems during continuous casting processes. The results are important for controlled cooling of microalloyed steels to prevent cracks and assure product quality.",
"title": ""
},
{
"docid": "77562b3fdfb57089d1490fd3f1b68a77",
"text": "Recent proposed rulemakings from the Federal Communications Commission in the United States offers the hope of unique access to valuable spectrum; so-called television whitespace (TVWS). Use of this spectrum is contingent upon the protection of the incumbent occupants of the proposed allocation. Television signals are among the most powerful terrestrial RF transmissions on Earth. Even so, detection of these signals to the required levels sufficient to protect these services has proven daunting. Supplemental techniques, such as geo-location, mitigate these challenges for fixed TV broadcast; however, other nomadic low power incumbents also occupy the TVWS spectrum. The most common of these are wireless microphones, a subset of which are licensed and entitled to protection. These devices are allowed a maximum conducted power level of 50 mW and 250 mW on the VHF and UHF channels, respectively. Critical to day-to-day television operations, these devices must also be afforded protection from unlicensed transmitters. Wireless microphones often operate at power levels of 25 mW or less, with inefficient antennas placed physically near the body, yielding effective radiated power levels of 5 to 10 mW, often times even less. In addition, the emissions from these devices are often audio-companded FM, making legitimate, licensed operations indistinguishable from narrowband unlicensed transmissions and other discrete carriers. To that end the IEEE 802.22 working group established task group 1 (TG1) to develop a standard for a protective, disabling beacon method capable of insuring detection of legitimate devices.",
"title": ""
},
{
"docid": "205ed1eba187918ac6b4a98da863a6f2",
"text": "Since the first papers on asymptotic waveform evaluation (AWE), Pade-based reduced order models have become standard for improving coupled circuit-interconnect simulation efficiency. Such models can be accurately computed using bi-orthogonalization algorithms like Pade via Lanczos (PVL), but the resulting Pade approximates can still be unstable even when generated from stable RLC circuits. For certain classes of RC circuits it has been shown that congruence transforms, like the Arnoldi algorithm, can generate guaranteed stable and passive reduced-order models. In this paper we present a computationally efficient model-order reduction technique, the coordinate-transformed Arnoldi algorithm, and show that this method generates arbitrarily accurate and guaranteed stable reduced-order models for RLC circuits. Examples are presented which demonstrates the enhanced stability and efficiency of the new method.",
"title": ""
},
{
"docid": "74fd65e8298a95b61bc323d9435eaa05",
"text": "Next-generation communication systems have to comply with very strict requirements for increased flexibility in heterogeneous environments, high spectral efficiency, and agility of carrier aggregation. This fact motivates research in advanced multicarrier modulation (MCM) schemes, such as filter bank-based multicarrier (FBMC) modulation. This paper focuses on the offset quadrature amplitude modulation (OQAM)-based FBMC variant, known as FBMC/OQAM, which presents outstanding spectral efficiency and confinement in a number of channels and applications. Its special nature, however, generates a number of new signal processing challenges that are not present in other MCM schemes, notably, in orthogonal-frequency-division multiplexing (OFDM). In multiple-input multiple-output (MIMO) architectures, which are expected to play a primary role in future communication systems, these challenges are intensified, creating new interesting research problems and calling for new ideas and methods that are adapted to the particularities of the MIMO-FBMC/OQAM system. The goal of this paper is to focus on these signal processing problems and provide a concise yet comprehensive overview of the recent advances in this area. Open problems and associated directions for future research are also discussed.",
"title": ""
},
{
"docid": "f6c1aa22e2afd24a6ad111d5dfdfc3f3",
"text": "This work describes the development of a social chatbot for the football domain. The chatbot, named chatbol, aims at answering a wide variety of questions related to the Spanish football league “La Liga”. Chatbol is deployed as a Slack client for text-based input interaction with users. One of the main Chatbol’s components, a NLU block, is trained to extract the intents and associated entities related to user’s questions about football players, teams, trainers and fixtures. The information for the entities is obtained by making sparql queries to Wikidata site in real time. Then, the retrieved data is used to update the specific chatbot responses. As a fallback strategy, a retrieval-based conversational engine is incorporated to the chatbot system. It allows for a wider variety and freedom of responses, still football oriented, for the case when the NLU module was unable to reply with high confidence to the user. The retrieval-based response database is composed of real conversations collected both from a IRC football channel and from football-related excerpts picked up across movie captions, extracted from the OpenSubtitles database.",
"title": ""
},
{
"docid": "bf3450649fdf5d5bb4ee89fbaf7ec0ff",
"text": "In this study, we propose a research model to assess the effect of a mobile health (mHealth) app on exercise motivation and physical activity of individuals based on the design and self-determination theory. The research model is formulated from the perspective of motivation affordance and gamification. We will discuss how the use of specific gamified features of the mHealth app can trigger/afford corresponding users’ exercise motivations, which further enhance users’ participation in physical activity. We propose two hypotheses to test the research model using a field experiment. We adopt a 3-phase longitudinal approach to collect data in three different time zones, in consistence with approach commonly adopted in psychology and physical activity research, so as to reduce the common method bias in testing the two hypotheses.",
"title": ""
}
] |
scidocsrr
|
d98c577fad1ae62fd3895ed2f6ac8d1f
|
Standardization for evaluating software-defined networking controllers
|
[
{
"docid": "3e066a6f96e74963046c9c24239196b4",
"text": "This paper presents an independent comprehensive analysis of the efficiency indexes of popular open source SDN/OpenFlow controllers (NOX, POX, Beacon, Floodlight, MuL, Maestro, Ryu). The analysed indexes include performance, scalability, reliability, and security. For testing purposes we developed the new framework called hcprobe. The test bed and the methodology we used are discussed in detail so that everyone could reproduce our experiments. The result of the evaluation show that modern SDN/OpenFlow controllers are not ready to be used in production and have to be improved in order to increase all above mentioned characteristics.",
"title": ""
}
] |
[
{
"docid": "3604f1ef7df6e0c224bd19034d7c0929",
"text": "BACKGROUND\nMost individuals at risk for developing cardiovascular disease (CVD) can reduce risk factors through diet and exercise before resorting to drug treatment. The effect of a combination of resistance training with vegetable-based (soy) versus animal-based (whey) protein supplementation on CVD risk reduction has received little study. The study's purpose was to examine the effects of 12 weeks of resistance exercise training with soy versus whey protein supplementation on strength gains, body composition and serum lipid changes in overweight, hyperlipidemic men.\n\n\nMETHODS\nTwenty-eight overweight, male subjects (BMI 25-30) with serum cholesterol >200 mg/dl were randomly divided into 3 groups (placebo (n = 9), and soy (n = 9) or whey (n = 10) supplementation) and participated in supervised resistance training for 12 weeks. Supplements were provided in a double blind fashion.\n\n\nRESULTS\nAll 3 groups had significant gains in strength, averaging 47% in all major muscle groups and significant increases in fat free mass (2.6%), with no difference among groups. Percent body fat and waist-to-hip ratio decreased significantly in all 3 groups an average of 8% and 2%, respectively, with no difference among groups. Total serum cholesterol decreased significantly, again with no difference among groups.\n\n\nCONCLUSION\nParticipation in a 12 week resistance exercise training program significantly increased strength and improved both body composition and serum cholesterol in overweight, hypercholesterolemic men with no added benefit from protein supplementation.",
"title": ""
},
{
"docid": "7f24dc012f65770b391d182c525fdaff",
"text": "This paper focuses on the task of knowledge-based question answering (KBQA). KBQA aims to match the questions with the structured semantics in knowledge base. In this paper, we propose a two-stage method. Firstly, we propose a topic entity extraction model (TEEM) to extract topic entities in questions, which does not rely on hand-crafted features or linguistic tools. We extract topic entities in questions with the TEEM and then search the knowledge triples which are related to the topic entities from the knowledge base as the candidate knowledge triples. Then, we apply Deep Structured Semantic Models based on convolutional neural network and bidirectional long short-term memory to match questions and predicates in the candidate knowledge triples. To obtain better training dataset, we use an iterative approach to retrieve the knowledge triples from the knowledge base. The evaluation result shows that our system achieves an AverageF1 measure of 79.57% on test dataset.",
"title": ""
},
{
"docid": "028070222acb092767aadfdd6824d0df",
"text": "The autism spectrum disorders (ASDs) are a group of conditions characterized by impairments in reciprocal social interaction and communication, and the presence of restricted and repetitive behaviours. Individuals with an ASD vary greatly in cognitive development, which can range from above average to intellectual disability. Although ASDs are known to be highly heritable (∼90%), the underlying genetic determinants are still largely unknown. Here we analysed the genome-wide characteristics of rare (<1% frequency) copy number variation in ASD using dense genotyping arrays. When comparing 996 ASD individuals of European ancestry to 1,287 matched controls, cases were found to carry a higher global burden of rare, genic copy number variants (CNVs) (1.19 fold, P = 0.012), especially so for loci previously implicated in either ASD and/or intellectual disability (1.69 fold, P = 3.4 × 10-4). Among the CNVs there were numerous de novo and inherited events, sometimes in combination in a given family, implicating many novel ASD genes such as SHANK2, SYNGAP1, DLGAP2 and the X-linked DDX53–PTCHD1 locus. We also discovered an enrichment of CNVs disrupting functional gene sets involved in cellular proliferation, projection and motility, and GTPase/Ras signalling. Our results reveal many new genetic and functional targets in ASD that may lead to final connected pathways.",
"title": ""
},
{
"docid": "5cc3d79d7bd762e8cfd9df658acae3fc",
"text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.",
"title": ""
},
{
"docid": "21324c71d70ca79d2f2c7117c759c915",
"text": "The wide-spread of social media provides unprecedented sources of written language that can be used to model and infer online demographics. In this paper, we introduce a novel visual text analytics system, DemographicVis, to aid interactive analysis of such demographic information based on user-generated content. Our approach connects categorical data (demographic information) with textual data, allowing users to understand the characteristics of different demographic groups in a transparent and exploratory manner. The modeling and visualization are based on ground truth demographic information collected via a survey conducted on Reddit.com. Detailed user information is taken into our modeling process that connects the demographic groups with features that best describe the distinguishing characteristics of each group. Features including topical and linguistic are generated from the user-generated contents. Such features are then analyzed and ranked based on their ability to predict the users' demographic information. To enable interactive demographic analysis, we introduce a web-based visual interface that presents the relationship of the demographic groups, their topic interests, as well as the predictive power of various features. We present multiple case studies to showcase the utility of our visual analytics approach in exploring and understanding the interests of different demographic groups. We also report results from a comparative evaluation, showing that the DemographicVis is quantitatively superior or competitive and subjectively preferred when compared to a commercial text analysis tool.",
"title": ""
},
{
"docid": "d156813b45cb419d86280ee2947b6cde",
"text": "Within the realm of service robotics, researchers have placed a great amount of effort into learning motions and manipulations for task execution by robots. The task of robot learning is very broad, as it involves many tasks such as object detection, action recognition, motion planning, localization, knowledge representation and retrieval, and the intertwining of computer vision and machine learning techniques. In this paper, we focus on how knowledge can be gathered, represented, and reproduced to solve problems as done by researchers in the past decades. We discuss the problems which have existed in robot learning and the solutions, technologies or developments (if any) which have contributed to solving them. Specifically, we look at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics datasets and networks. Within each section, we discuss major breakthroughs and how their methods address present issues in robot learning and manipulation.",
"title": ""
},
{
"docid": "a74880697c58a2c4cb84ef1626344316",
"text": "This article provides an overview of contemporary and forward looking inter-cell interference coordination techniques for 4G OFDM systems with a specific emphasis on implementations for LTE. Viable approaches include the use of power control, opportunistic spectrum access, intra and inter-base station interference cancellation, adaptive fractional frequency reuse, spatial antenna techniques such as MIMO and SDMA, and adaptive beamforming, as well as recent innovations in decoding algorithms. The applicability, complexity, and performance gains possible with each of these techniques based on simulations and empirical measurements will be highlighted for specific cellular topologies relevant to LTE macro, pico, and femto deployments for both standalone and overlay networks.",
"title": ""
},
{
"docid": "8165a77b36b7c7dd26e5f8223e2564a7",
"text": "A novel design method of a wideband dual-polarized antenna is presented by using shorted dipoles, integrated baluns, and crossed feed lines. Simulation and equivalent circuit analysis of the antenna are given. To validate the design method, an antenna prototype is designed, optimized, fabricated, and measured. Measured results verify that the proposed antenna has an impedance bandwidth of 74.5% (from 1.69 to 3.7 GHz) for VSWR < 1.5 at both ports, and the isolation between the two ports is over 30 dB. Stable gain of 8–8.7 dBi and half-power beamwidth (HPBW) of 65°–70° are obtained for 2G/3G/4G base station frequency bands (1.7–2.7 GHz). Compared to the other reported dual-polarized dipole antennas, the presented antenna achieves wide impedance bandwidth, high port isolation, stable antenna gain, and HPBW with a simple structure and compact size.",
"title": ""
},
{
"docid": "0a625d5f0164f7ed987a96510c1b6092",
"text": "We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method maps textual queries and visual features from various regions into a shared space where they are compared for relevance with an inner product. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the recently released VQA [1] dataset, which features free-form human-annotated questions and answers.",
"title": ""
},
{
"docid": "f6362a62b69999bdc3d9f681b68842fc",
"text": "Women with breast cancer, whether screen detected or symptomatic, have both mammography and ultrasound for initial imaging assessment. Unlike X-ray or magnetic resonance, which produce an image of the whole breast, ultrasound provides comparatively limited 2D or 3D views located around the lesions. Combining different modalities is an essential task for accurate diagnosis and simulating ultrasound images based on whole breast data could be a way toward correlating different information about the same lesion. Very few studies have dealt with such a simulation framework since the breast undergoes large scale deformation between the prone position of magnetic resonance imaging and the largely supine or lateral position of ultrasound. We present a framework for the realistic simulation of 3D ultrasound images based on prone magnetic resonance images from which a supine position is generated using a biomechanical model. The simulation parameters are derived from a real clinical infrastructure and from transducers that are used for routine scans, leading to highly realistic ultrasound images of any region of the breast.",
"title": ""
},
{
"docid": "70a07b906b31054646cf43eb543ba50c",
"text": "1. Cellular and Molecular Research Center, and Neuroscience Department, Tehran University of Medical Sciences, Tehran, Iran 2. Anatomy Department, Tehran University of Medical Science, Tehran, Iran. 3. Physiology Research Center (PRC), Tehran university of Medical Sciences, Tehran, Iran. 4. Institute for Cognitive Science studies (ICSS), Tehran, Iran. 5. Department of Material Science and Engineering, Sharif University of Technology, Tehran, Iran.",
"title": ""
},
{
"docid": "6fb72f68aa41a71ea51b81806d325561",
"text": "An important aspect related to the development of face-aging algorithms is the evaluation of the ability of such algorithms to produce accurate age-progressed faces. In most studies reported in the literature, the performance of face-aging systems is established based either on the judgment of human observers or by using machine-based evaluation methods. In this paper we perform an experimental evaluation that aims to assess the applicability of human-based against typical machine based performance evaluation methods. The results of our experiments indicate that machines can be more accurate in determining the performance of face-aging algorithms. Our work aims towards the development of a complete evaluation framework for age progression methodologies.",
"title": ""
},
{
"docid": "aaf6ed732f2cb5ceff714f1d84dac9ed",
"text": "Video caption refers to generating a descriptive sentence for a specific short video clip automatically, which has achieved remarkable success recently. However, most of the existing methods focus more on visual information while ignoring the synchronized audio cues. We propose three multimodal deep fusion strategies to maximize the benefits of visual-audio resonance information. The first one explores the impact on cross-modalities feature fusion from low to high order. The second establishes the visual-audio short-term dependency by sharing weights of corresponding front-end networks. The third extends the temporal dependency to long-term through sharing multimodal memory across visual and audio modalities. Extensive experiments have validated the effectiveness of our three cross-modalities fusion strategies on two benchmark datasets, including Microsoft Research Video to Text (MSRVTT) and Microsoft Video Description (MSVD). It is worth mentioning that sharing weight can coordinate visualaudio feature fusion effectively and achieve the state-of-art performance on both BELU and METEOR metrics. Furthermore, we first propose a dynamic multimodal feature fusion framework to deal with the part modalities missing case. Experimental results demonstrate that even in the audio absence mode, we can still obtain comparable results with the aid of the additional audio modality inference module.",
"title": ""
},
{
"docid": "a62c03417176b5751471bad386bbfa08",
"text": "Platforms are defined as multisided marketplaces with business models that enable producers and users to create value together by interacting with each other. In recent years, platforms have benefited from the advances of digitalization. Hence, digital platforms continue to triumph, and continue to be attractive for companies, also for startups. In this paper, we first explore the research of platforms compared to digital platforms. We then proceed to analyze digital platforms as business models, in the context of startups looking for business model innovation. Based on interviews conducted at a technology startup event in Finland, we analyzed how 34 startups viewed their business model innovations. Using the 10 sub-constructs from the business model innovation scale by Clauss in 2016, we found out that the idea of business model innovation resonated with startups, as all of them were able to identify the source of their business model innovation. Furthermore, the results indicated the complexity of business model innovation as 79 percent of the respondents explained it with more than one sub-construct. New technology/equipment, new processes and new customers and markets got the most mentions as sources of business model innovation. Overall, the emphasis at startups is on the value creation innovation, with new proposition innovation getting less, and value capture innovation even less emphasis as the source of business model innovation.",
"title": ""
},
{
"docid": "41b3b48c10753600e36a584003eebdd6",
"text": "This paper deals with reliability problems of common types of generators in hard conditions. It shows possibilities of construction changes that should increase the machine reliability. This contribution is dedicated to the study of brushless alternator for automotive industry. There are described problems with usage of common types of alternators and main benefits and disadvantages of several types of brushless alternators.",
"title": ""
},
{
"docid": "64cc022ac7052a9c82108c88e06b0bf7",
"text": "Influential people have an important role in the process of information diffusion. However, there are several ways to be influential, for example, to be the most popular or the first that adopts a new idea. In this paper we present a methodology to find trendsetters in information networks according to a specific topic of interest. Trendsetters are people that adopt and spread new ideas influencing other people before these ideas become popular. At the same time, not all early adopters are trendsetters because only few of them have the ability of propagating their ideas by their social contacts through word-of-mouth. Differently from other influence measures, a trendsetter is not necessarily popular or famous, but the one whose ideas spread over the graph successfully. Other metrics such as node in-degree or even standard Pagerank focus only in the static topology of the network. We propose a ranking strategy that focuses on the ability of some users to push new ideas that will be successful in the future. To that end, we combine temporal attributes of nodes and edges of the network with a Pagerank based algorithm to find the trendsetters for a given topic. To test our algorithm we conduct innovative experiments over a large Twitter dataset. We show that nodes with high in-degree tend to arrive late for new trends, while users in the top of our ranking tend to be early adopters that also influence their social contacts to adopt the new trend.",
"title": ""
},
{
"docid": "404a662b55baea9402d449fae6192424",
"text": "Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.",
"title": ""
},
{
"docid": "1bdb24fb4c85b3aaf8a8e5d71328a920",
"text": "BACKGROUND\nHigh-grade intraepithelial neoplasia is known to progress to invasive squamous-cell carcinoma of the anus. There are limited reports on the rate of progression from high-grade intraepithelial neoplasia to anal cancer in HIV-positive men who have sex with men.\n\n\nOBJECTIVES\nThe purpose of this study was to describe in HIV-positive men who have sex with men with perianal high-grade intraepithelial neoplasia the rate of progression to anal cancer and the factors associated with that progression.\n\n\nDESIGN\nThis was a prospective cohort study.\n\n\nSETTINGS\nThe study was conducted at an outpatient clinic at a tertiary care center in Toronto.\n\n\nPATIENTS\nThirty-eight patients with perianal high-grade anal intraepithelial neoplasia were identified among 550 HIV-positive men who have sex with men.\n\n\nINTERVENTION\nAll of the patients had high-resolution anoscopy for symptoms, screening, or surveillance with follow-up monitoring/treatment.\n\n\nMAIN OUTCOME MEASURES\nWe measured the incidence of anal cancer per 100 person-years of follow-up.\n\n\nRESULTS\nSeven (of 38) patients (18.4%) with perianal high-grade intraepithelial neoplasia developed anal cancer. The rate of progression was 6.9 (95% CI, 2.8-14.2) cases of anal cancer per 100 person-years of follow-up. A diagnosis of AIDS, previously treated anal cancer, and loss of integrity of the lesion were associated with progression. Anal bleeding was more than twice as common in patients who progressed to anal cancer.\n\n\nLIMITATIONS\nThere was the potential for selection bias and patients were offered treatment, which may have affected incidence estimates.\n\n\nCONCLUSIONS\nHIV-positive men who have sex with men should be monitored for perianal high-grade intraepithelial neoplasia. Those with high-risk features for the development of anal cancer may need more aggressive therapy.",
"title": ""
},
{
"docid": "62688aa48180943a6fcf73fef154fe75",
"text": "Oxidative stress is a phenomenon associated with the pathology of several diseases including atherosclerosis, neurodegenerative diseases such as Alzheimer’s and Parkinson’s diseases, cancer, diabetes mellitus, inflammatory diseases, as well as psychiatric disorders or aging process. Oxidative stress is defined as an imbalance between the production of free radicals and reactive metabolites, so called oxidants, and their elimination by protective mechanisms named antioxidative systems. Free radicals and their metabolites prevail over antioxidants. This imbalance leads to damage of important biomolecules and organs with plausible impact on the whole organism. Oxidative and antioxidative processes are associated with electron transfer influencing the redox state of cells and organisms; therefore, oxidative stress is also known as redox stress. At present, the opinion that oxidative stress is not always harmful has been accepted. Depending on its intensity, it can play a role in regulation of other important processes through modulation of signal pathways, influencing synthesis of antioxidant enzymes, repair processes, inflammation, apoptosis and cell proliferation, and thus process of a malignity. Therefore, improper administration of antioxidants can potentially negatively impact biological systems.",
"title": ""
},
{
"docid": "91c792fac981d027ac1f2a2773674b10",
"text": "Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments. In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR) particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.",
"title": ""
}
] |
scidocsrr
|
a058aa28ad57e16c5ec116aad0396726
|
Effects of anonymity, invisibility, and lack of eye-contact on toxic online disinhibition
|
[
{
"docid": "292b6bf59538f15cd7bf60fdbfdb2300",
"text": "Flaming is defined as \" displaying hostility by insulting, swearing or using otherwise offensive language. \" It seems to be common in comments on the video sharing website YouTube. In this explorative study, flaming on YouTube was studied using surveys among YouTube users. Three general conclusions were drawn. First, flaming is indeed very common on YouTube, although many users say not to flame themselves. Second, views on flaming are varied, but more often negative than positive. Some people refrain from uploading videos because of flaming, but most users do not think of flaming as a problem for themselves. Third, several explanations of flaming were found to be plausible, among which were perceived flaming norms and reduced awareness of other people's feelings. Although some YouTube users flame for entertainment, flaming is more often meant to express disagreement or to respond to perceived offense by others. A.1 Invitation for \" Senders \" 49 A.2 Invitation for \" Receivers \" 49 A.3 Invitation to the General Questionnaire 49 Appendix B – Questionnaires 51 B.1 Items Measuring Background Variables 51 B.2 The Last Page 51 B.3 Questionnaire for \" Senders \" 51 B.4 Questionnaire for \" Receivers \" 53 B.5 General Questionnaire 54",
"title": ""
}
] |
[
{
"docid": "e04ff1f4c08bc0541da0db5cd7928ef7",
"text": "Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, increasingly neural networks have been adopted by many large-scale information processing applications but there is no a set of well defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task.",
"title": ""
},
{
"docid": "7c593a9fc4de5beb89022f7d438ffcb8",
"text": "The design of a low power low drop out voltage regulator with no off-chip capacitor and fast transient responses is presented in this paper. The LDO regulator uses a combination of a low power operational trans-conductance amplifier and comparators to drive the gate of the PMOS pass element. The amplifier ensures stability and accurate setting of the output voltage in addition to power supply rejection. The comparators ensure fast response of the regulator to any load or line transients. A settling time of less than 200ns is achieved in response to a load transient step of 50mA with a rise time of 100ns with an output voltage spike of less than 200mV at an output voltage of 3.25 V. A line transient step of 1V with a rise time of 100ns results also in a settling time of less than 400ns with a voltage spike of less than 100mV when the output voltage is 2.6V. The regulator is fabricated using a standard 0.35μm CMOS process and consumes a quiescent current of only 26 μA.",
"title": ""
},
{
"docid": "47f64720b0526a9141393131921c6e00",
"text": "The purpose of this study was to assess relative total body fat and skinfold patterning in Filipino national karate and pencak silat athletes. Participants were members of the Philippine men's and women's national teams in karate (12 males, 5 females) and pencak silat (17 males and 5 females). In addition to age, the following anthropometric measurements were taken: height, body mass, triceps, subscapular, supraspinale, umbilical, anterior thigh and medial calf skinfolds. Relative total body fat was expressed as sum of six skinfolds. Sum of skinfolds and each individual skinfold were also expressed relative to Phantom height. A two-way (Sport*Gender) ANOVA was used to determine the differences between men and women in total body fat and skinfold patterning. A Bonferroni-adjusted alpha was employed for all analyses. The women had a higher proportional sum of skinfols (80.19 ± 25.31 mm vs. 51.77 ± 21.13 mm, p = 0. 001, eta(2) = 0.275). The men had a lower proportional triceps skinfolds (-1.72 ± 0.71 versus - 0.35 ± 0.75, p < 0.001). Collapsed over gender, the karate athletes (-2.18 ± 0.66) had a lower proportional anterior thigh skinfold than their pencak silat colleagues (-1.71 ± 0.74, p = 0.001). Differences in competition requirements between sports may account for some of the disparity in anthropometric measurements. Key PointsThe purpose of the present investigation was to assess relative total body fat and skinfold patterning in Filipino national karate and pencak silat athletes.The results seem to suggest that there was no difference between combat sports in fatness.Skinfold patterning was more in line with what was reported in the literature with the males recording lower extremity fat.",
"title": ""
},
{
"docid": "7ac1aa20ed10b80c8fad4a5494919653",
"text": "This paper introduces a simple dynamic programming algorithm for performing text prediction. The algorithm is based on the KnuthMorris-Pratt string matching algorithm. It is well established that there is a close relationship between the tasks of prediction, compression, and classification. A compression technique called Prediction by Partial Matching (PPM) is very similar to the algorithm introduced in this paper. However, most variants of PPM have a higher space complexity and are significantly more difficult to implement. The algorithm is evaluated on a text classification task and outperforms several existing classification techniques.",
"title": ""
},
{
"docid": "d6f31a4a0d4823165ee3434152f42b40",
"text": "Despite a century of research on complex traits in humans, the relative importance and specific nature of the influences of genes and environment on human traits remain controversial. We report a meta-analysis of twin correlations and reported variance components for 17,804 traits from 2,748 publications including 14,558,903 partly dependent twin pairs, virtually all published twin studies of complex traits. Estimates of heritability cluster strongly within functional domains, and across all traits the reported heritability is 49%. For a majority (69%) of traits, the observed twin correlations are consistent with a simple and parsimonious model where twin resemblance is solely due to additive genetic variation. The data are inconsistent with substantial influences from shared environment or non-additive genetic variation. This study provides the most comprehensive analysis of the causes of individual differences in human traits thus far and will guide future gene-mapping efforts. All the results can be visualized using the MaTCH webtool.",
"title": ""
},
{
"docid": "6be3470d014aac14b6af9d343539b4b8",
"text": "In this paper, we discuss logic circuit designs using the circuit model of three-state quantum dot gate field effect transistors (QDGFETs). QDGFETs produce one intermediate state between the two normal stable ON and OFF states due to a change in the threshold voltage over this range. We have developed a simplified circuit model that accounts for this intermediate state. Interesting logic can be implemented using QDGFETs. In this paper, we discuss the designs of various two-input three-state QDGFET gates, including NAND- and NOR-like operations and their application in different combinational circuits like decoder, multiplier, adder, and so on. Increased number of states in three-state QDGFETs will increase the number of bit-handling capability of this device and will help us to handle more number of bits at a time with less circuit elements.",
"title": ""
},
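The abstract above concerns three-state (ternary) logic gates built from QDGFETs. As a rough illustration of the kind of truth tables involved, here is a sketch using the common ternary conventions AND = min, OR = max, NOT x = 2 − x, so the "NAND-like" and "NOR-like" gates are taken to be complements of min and max; the actual device-level behaviour in the paper may differ from this assumption.

```python
# Ternary logic sketch: states 0 (low), 1 (intermediate), 2 (high).
# Assumed conventions: AND = min, OR = max, NOT x = 2 - x.

def t_not(a):
    return 2 - a

def t_nand(a, b):
    return 2 - min(a, b)

def t_nor(a, b):
    return 2 - max(a, b)

if __name__ == "__main__":
    states = (0, 1, 2)
    print("a b | NAND NOR")
    for a in states:
        for b in states:
            print(f"{a} {b} |  {t_nand(a, b)}    {t_nor(a, b)}")
```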
{
"docid": "22ef6b3fd2f4c926d81881039244511f",
"text": "Whereas in most cases a fatty liver remains free of inflammation, 10%-20% of patients who have fatty liver develop inflammation and fibrosis (nonalcoholic steatohepatitis [NASH]). Inflammation may precede steatosis in certain instances. Therefore, NASH could reflect a disease where inflammation is followed by steatosis. In contrast, NASH subsequent to simple steatosis may be the consequence of a failure of antilipotoxic protection. In both situations, many parallel hits derived from the gut and/or the adipose tissue may promote liver inflammation. Endoplasmic reticulum stress and related signaling networks, (adipo)cytokines, and innate immunity are emerging as central pathways that regulate key features of NASH.",
"title": ""
},
{
"docid": "5b4fca273db1335cd31facf426b981c4",
"text": "One of the most influential ideas in the field of business ethics has been the suggestion that ethical conduct in a business context should be analyzed in terms of a set of fiduciary obligations toward various “stakeholder” groups. Moral problems, according to this view, involve reconciling such obligations in cases where stakeholder groups have conflicting interests. The question posed in this paper is whether the stakeholder paradigm represents the most fruitful way of articulating the moral problems that arise in business. By way of contrast, I outline two other possible approaches to business ethics: one, a more minimal conception, anchored in the notion of a fiduciary obligation toward shareholders; and the other, a broader conception, focused on the concept of market failure. I then argue that the latter offers a more satisfactory framework for the articulation of the social responsibilities",
"title": ""
},
{
"docid": "b29ddb800ec3b4f031a077e98a7fffb1",
"text": "Networks or graphs can easily represent a diverse set of data sources that are characterized by interacting units or actors. Social networks, representing people who communicate with each other, are one example. Communities or clusters of highly connected actors form an essential feature in the structure of several empirical networks. Spectral clustering is a popular and computationally feasible method to discover these communities. The Stochastic Block Model (Holland et al., 1983) is a social network model with well defined communities; each node is a member of one community. For a network generated from the Stochastic Block Model, we bound the number of nodes “misclustered” by spectral clustering. The asymptotic results in this paper are the first clustering results that allow the number of clusters in the model to grow with the number of nodes, hence the name high-dimensional. In order to study spectral clustering under the Stochastic Block Model, we first show that under the more general latent space model, the eigenvectors of the normalized graph Laplacian asymptotically converge to the eigenvectors of a “population” normalized graph Laplacian. Aside from the implication for spectral clustering, this provides insight into a graph visualization technique. Our method of studying the eigenvectors of random matrices is original. AMS 2000 subject classifications: Primary 62H30, 62H25; secondary 60B20.",
"title": ""
},
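The passage above describes spectral clustering of a Stochastic Block Model graph via the leading eigenvectors of the normalized graph Laplacian. Below is a minimal sketch of that pipeline on a small synthetic SBM; the block sizes, edge probabilities, and the use of scikit-learn's KMeans are illustrative choices, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Generate a small Stochastic Block Model graph: 3 blocks of 60 nodes each.
k, n_per = 3, 60
labels_true = np.repeat(np.arange(k), n_per)
p_in, p_out = 0.25, 0.05
P = np.where(labels_true[:, None] == labels_true[None, :], p_in, p_out)
A = (rng.random(P.shape) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops

# Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}
d = A.sum(axis=1)
d[d == 0] = 1.0                              # guard against isolated nodes
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

# Spectral clustering: k eigenvectors of the smallest eigenvalues, then k-means.
eigvals, eigvecs = np.linalg.eigh(L)
X = eigvecs[:, :k]
labels_pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
print("predicted cluster sizes:", np.bincount(labels_pred))
```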
{
"docid": "8520c513d34ba33dfa4aa3fc621bccb6",
"text": "Current avionics architectures implemented on large aircraft use complex processors, which are shared by many avionics applications according Integrated Modular Avionics (IMA) concepts. Using less complex processors on smaller aircraft such as helicopters leads to a distributed IMA architecture. Allocation of the avionics applications on a distributed architecture has to deal with two main challenges. A first problem is about the feasibility of a static allocation of partitions on each processing element. The second problem is the worst-case end-to-end communication delay analysis: due to the scheduling of partitions on processing elements which are not synchronized, some allocation schemes are not valid. This paper first presents a mapping algorithm using an integrated approach taking into account these two issues. In a second step, we evaluate, on a realistic helicopter case study, the feasibility of mapping a given application on a variable number of processing elements. Finally, we present a scalability analysis of the proposed mapping algorithm.",
"title": ""
},
{
"docid": "885b3b33fad3dee064f47201ec10f3bb",
"text": "Traceability is the only means to ensure that the source code of a system is consistent with its requirements and that all and only the specified requirements have been implemented by developers. During software maintenance and evolution, requirement traceability links become obsolete because developers do not/cannot devote effort to updating them. Yet, recovering these traceability links later is a daunting and costly task for developers. Consequently, the literature has proposed methods, techniques, and tools to recover these traceability links semi-automatically or automatically. Among the proposed techniques, the literature showed that information retrieval (IR) techniques can automatically recover traceability links between free-text requirements and source code. However, IR techniques lack accuracy (precision and recall). In this paper, we show that mining software repositories and combining mined results with IR techniques can improve the accuracy (precision and recall) of IR techniques and we propose Trustrace, a trust--based traceability recovery approach. We apply Trustrace on four medium-size open-source systems to compare the accuracy of its traceability links with those recovered using state-of-the-art IR techniques from the literature, based on the Vector Space Model and Jensen-Shannon model. The results of Trustrace are up to 22.7 percent more precise and have 7.66 percent better recall values than those of the other techniques, on average. We thus show that mining software repositories and combining the mined data with existing results from IR techniques improves the precision and recall of requirement traceability links.",
"title": ""
},
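The passage above recovers traceability links between requirements and source code with IR techniques such as the Vector Space Model. The sketch below shows only that IR baseline, TF-IDF vectors plus cosine similarity, on made-up requirement texts and code snippets; it does not implement Trustrace's repository mining.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical requirement texts and source-code identifiers/comments.
requirements = [
    "The system shall authenticate users with a password",
    "The system shall export reports as PDF files",
]
code_docs = [
    "class LoginManager: verify password hash authenticate user session",
    "class ReportExporter: render report write pdf file output",
    "class CacheUtils: evict entry least recently used",
]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
tfidf = vectorizer.fit_transform(requirements + code_docs)

req_vecs, code_vecs = tfidf[: len(requirements)], tfidf[len(requirements):]
sims = cosine_similarity(req_vecs, code_vecs)

for i, req in enumerate(requirements):
    best = sims[i].argmax()
    print(f"req {i} -> code doc {best} (similarity {sims[i, best]:.2f})")
```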
{
"docid": "acba717edc26ae7ba64debc5f0d73ded",
"text": "Previous phase I-II clinical trials have shown that recombinant human erythropoietin (rHuEpo) can ameliorate anemia in a portion of patients with multiple myeloma (MM) and non-Hodgkin's lymphoma (NHL). Therefore, we performed a randomized controlled multicenter study to define the optimal initial dosage and to identify predictors of response to rHuEpo. A total of 146 patients who had hemoglobin (Hb) levels < or = 11 g/dL and who had no need for transfusion at the time of enrollment entered this trial. Patients were randomized to receive 1,000 U (n = 31), 2,000 U (n = 29), 5,000 U (n = 31), or 10,000 U (n = 26) of rHuEpo daily subcutaneously for 8 weeks or to receive no therapy (n = 29). Of the patients, 84 suffered from MM and 62 from low- to intermediate-grade NHL, including chronic lymphocytic leukemia; 116 of 146 (79%) received chemotherapy during the study. The mean baseline Hb level was 9.4 +/- 1.0 g/dL. The median serum Epo level was 32 mU/mL, and endogenous Epo production was found to be defective in 77% of the patients, as judged by a value for the ratio of observed-to-predicted serum Epo levels (O/P ratio) of < or = 0.9. An intention-to-treat analysis was performed to evaluate treatment efficacy. The median average increase in Hb levels per week was 0.04 g/dL in the control group and -0.04 (P = .57), 0.22 (P = .05), 0.43 (P = .01), and 0.58 (P = .0001) g/dL in the 1,000 U, 2,000 U, 5,000 U, and 10,000 U groups, respectively (P values versus control). The probability of response (delta Hb > or = 2 g/dL) increased steadily and, after 8 weeks, reached 31% (2,000 U), 61% (5,000 U), and 62% (10,000 U), respectively. Regression analysis using Cox's proportional hazard model and classification and regression tree analysis showed that serum Epo levels and the O/P ratio were the most important factors predicting response in patients receiving 5,000 or 10,000 U. Approximately three quarters of patients presenting with Epo levels inappropriately low for the degree of anemia responded to rHuEpo, whereas only one quarter of those with adequate Epo levels did so. Classification and regression tree analysis also showed that doses of 2,000 U daily were effective in patients with an average platelet count greater than 150 x 10(9)/L. About 50% of these patients are expected to respond to rHuEpo. Thus, rHuEpo was safe and effective in ameliorating the anemia of MM and NHL patients who showed defective endogenous Epo production. From a practical point of view, we conclude that the decision to use rHuEpo in an individual anemic patient with MM or NHL should be based on serum Epo levels, whereas the choice of the initial dosage should be based on residual marrow function.",
"title": ""
},
{
"docid": "02d5abb55d737fe47da98b55fccfbc8e",
"text": "Existing biometric fingerprint devices show numerous reliability problems such as wet or fake fingers. In this letter, a secured method using the internal structures of the finger (papillary layer) for fingerprint identification is presented. With a frequency-domain optical coherence tomography (FD-OCT) system, a 3-D image of a finger is acquired and the information of the internal fingerprint extracted. The right index fingers of 51 individuals were recorded three times. Using a commercial fingerprint identification program, 95% of internal fingerprint images were successfully recognized. These results demonstrate that OCT imaging of internal fingerprints can be used for accurate and reliable fingerprint recognition.",
"title": ""
},
{
"docid": "09fc272a6d9ea954727d07075ecd5bfd",
"text": "Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.",
"title": ""
},
{
"docid": "a3bc8e58c397343e2d381c6f662be6ff",
"text": "Researchers have assumed that low self-esteem predicts deviance, but empirical results have been mixed. This article draws upon recent theoretical developments regarding contingencies of self-worth to clarify the self-esteem/deviance relation. It was predicted that self-esteem level would relate to deviance only when self-esteem was not contingent on workplace performance. In this manner, contingent self-esteem is a boundary condition for self-consistency/behavioral plasticity theory predictions. Using multisource data collected from 123 employees over 6 months, the authors examined the interaction between level (high/low) and type (contingent/noncontingent) of self-esteem in predicting workplace deviance. Results support the hypothesized moderating effects of contingent self-esteem; implications for self-esteem theories are discussed.",
"title": ""
},
{
"docid": "b8aab94410391b0e2544f2d8b4a4891e",
"text": "In this paper, we present \"k-means+ID3\", a method to cascade k-means clustering and the ID3 decision tree learning methods for classifying anomalous and normal activities in a computer network, an active electronic circuit, and a mechanical mass-beam system. The k-means clustering method first partitions the training instances into k clusters using Euclidean distance similarity. On each cluster, representing a density region of normal or anomaly instances, we build an ID3 decision tree. The decision tree on each cluster refines the decision boundaries by learning the subgroups within the cluster. To obtain a final decision on classification, the decisions of the k-means and ID3 methods are combined using two rules: 1) the nearest-neighbor rule and 2) the nearest-consensus rule. We perform experiments on three data sets: 1) network anomaly data (NAD), 2) Duffing equation data (DED), and 3) mechanical system data (MSD), which contain measurements from three distinct application domains of computer networks, an electronic circuit implementing a forced Duffing equation, and a mechanical system, respectively. Results show that the detection accuracy of the k-means+ID3 method is as high as 96.24 percent at a false-positive-rate of 0.03 percent on NAD; the total accuracy is as high as 80.01 percent on MSD and 79.9 percent on DED",
"title": ""
},
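The passage above explains the k-means+ID3 cascade: cluster the training data, fit one decision tree per cluster, then route each test point to a tree via a combination rule. The sketch below mirrors that structure with the nearest-neighbor rule on synthetic data; scikit-learn's CART tree stands in for ID3, so treat it as an approximation of the idea rather than the authors' system.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_train, y_train, X_test = X[:500], y[:500], X[500:]

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train)

# One decision tree per cluster (CART here, standing in for ID3).
trees = {}
for c in range(k):
    mask = km.labels_ == c
    trees[c] = DecisionTreeClassifier(random_state=0).fit(X_train[mask], y_train[mask])

# Nearest-neighbor rule: send each test point to the tree of its closest cluster.
nearest = km.predict(X_test)
y_pred = np.array([trees[c].predict(x.reshape(1, -1))[0] for c, x in zip(nearest, X_test)])
print("predicted labels for first 10 test points:", y_pred[:10])
```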
{
"docid": "b16407fc67058110b334b047bcfea9ac",
"text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticans who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become an abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term meaning of anthropomorphism and being bourgeois. In 1928, Krupskaja criticized Chukovky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism. Creativity Research Journal Copyright 2003 by 2003, Vol. 15, Nos. 2 & 3, 245–251 Lawrence Erlbaum Associates, Inc. 
Vygotsky’s Theory of Creativity Gunilla Lindqvist University of Karlstad Correspondence and requests for reprints should be sent to Gunilla Lindqvist, Department of Educational Sciences, University of Karlstad, 65188 Karlstad, Sweden. E-mail: gunilla.lindqvist@",
"title": ""
},
{
"docid": "06f1c7daafcf59a8eb2ddf430d0d7f18",
"text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05. Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.",
"title": ""
},
{
"docid": "147a6ce22db736f475408d28d0398651",
"text": "Curating labeled training data has become the primary bottleneck in machine learning. Recent frameworks address this bottleneck with generative models to synthesize labels at scale from weak supervision sources. The generative model's dependency structure directly affects the quality of the estimated labels, but selecting a structure automatically without any labeled data is a distinct challenge. We propose a structure estimation method that maximizes the ℓ 1-regularized marginal pseudolikelihood of the observed data. Our analysis shows that the amount of unlabeled data required to identify the true structure scales sublinearly in the number of possible dependencies for a broad class of models. Simulations show that our method is 100× faster than a maximum likelihood approach and selects 1/4 as many extraneous dependencies. We also show that our method provides an average of 1.5 F1 points of improvement over existing, user-developed information extraction applications on real-world data such as PubMed journal abstracts.",
"title": ""
},
{
"docid": "113cf957b47a8b8e3bbd031aa9a28ff2",
"text": "We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use nonpropositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally we show and interpret the recognition rates for both proposals using different classification algorithms.",
"title": ""
}
] |
scidocsrr
|
379bc9f0d7e44547dd6a08eb885ccc15
|
Anomaly Detection in Wireless Sensor Networks in a Non-Stationary Environment
|
[
{
"docid": "60fe7f27cd6312c986b679abce3fdea7",
"text": "In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting \"several experts\" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision making applications have only recently been discovered by computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble based systems have shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble based systems may be more beneficial than their single classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts; as well as commonly used combination rules, including algebraic combination of outputs, voting based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error correcting output codes; all areas in which ensemble systems have shown great promise",
"title": ""
},
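The passage above surveys ensemble systems and combination rules such as voting. As a small concrete illustration, here is a hedged sketch of bagging-style training followed by majority voting; the synthetic data, ensemble size, and tree depth are arbitrary choices for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

rng = np.random.default_rng(1)
n_members = 11
members = []
for _ in range(n_members):
    # Bagging: each member is trained on a bootstrap resample of the training set.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    members.append(DecisionTreeClassifier(max_depth=4).fit(X_train[idx], y_train[idx]))

# Majority (plurality) voting across ensemble members.
votes = np.stack([m.predict(X_test) for m in members])           # (n_members, n_test)
majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("ensemble accuracy:", (majority == y_test).mean())
```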
{
"docid": "3be38e070678e358e23cb81432033062",
"text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system",
"title": ""
}
] |
[
{
"docid": "2fa6f761f22e0484a84f83e5772bef40",
"text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.",
"title": ""
},
{
"docid": "ba0dce539f33496dedac000b61efa971",
"text": "The webpage aesthetics is one of the factors that affect the way people are attracted to a site. But two questions emerge: how can we improve a webpage's aesthetics and how can we evaluate this item? In order to solve this problem, we identified some of the theory that is underlying graphic design, gestalt theory and multimedia design. Based in the literature review, we proposed principles for web site design. We also propose a tool to evaluate web design.",
"title": ""
},
{
"docid": "e726e11f855515017de77508b79d3308",
"text": "OBJECTIVES\nThis study was conducted to better understand the characteristics of chronic pain patients seeking treatment with medicinal cannabis (MC).\n\n\nDESIGN\nRetrospective chart reviews of 139 patients (87 males, median age 47 years; 52 females, median age 48 years); all were legally qualified for MC use in Washington State.\n\n\nSETTING\nRegional pain clinic staffed by university faculty.\n\n\nPARTICIPANTS\n\n\n\nINCLUSION CRITERIA\nage 18 years and older; having legally accessed MC treatment, with valid documentation in their medical records. All data were de-identified.\n\n\nMAIN OUTCOME MEASURES\nRecords were scored for multiple indicators, including time since initial MC authorization, qualifying condition(s), McGill Pain score, functional status, use of other analgesic modalities, including opioids, and patterns of use over time.\n\n\nRESULTS\nOf 139 patients, 15 (11 percent) had prior authorizations for MC before seeking care in this clinic. The sample contained 236.4 patient-years of authorized MC use. Time of authorized use ranged from 11 days to 8.31 years (median of 1.12 years). Most patients were male (63 percent) yet female patients averaged 0.18 years longer authorized use. There were no other gender-specific trends or factors. Most patients (n = 123, 88 percent) had more than one pain syndrome present. Myofascial pain syndrome was the most common diagnosis (n = 114, 82 percent), followed by neuropathic pain (n = 89, 64 percent), discogenic back pain (n = 72, 51.7 percent), and osteoarthritis (n = 37, 26.6 percent). Other diagnoses included diabetic neuropathy, central pain syndrome, phantom pain, spinal cord injury, fibromyalgia, rheumatoid arthritis, HIV neuropathy, visceral pain, and malignant pain. In 51 (37 percent) patients, there were documented instances of major hurdles related to accessing MC, including prior physicians unwilling to authorize use, legal problems related to MC use, and difficulties in finding an affordable and consistent supply of MC.\n\n\nCONCLUSIONS\nData indicate that males and females access MC at approximately the same rate, with similar median authorization times. Although the majority of patient records documented significant symptom alleviation with MC, major treatment access and delivery barriers remain.",
"title": ""
},
{
"docid": "b6dcf2064ad7f06fd1672b1348d92737",
"text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.",
"title": ""
},
{
"docid": "d47143c38598cf88eeb8be654f8a7a00",
"text": "Long Short-Term Memory (LSTM) networks have yielded excellent results on handwriting recognition. This paper describes an application of bidirectional LSTM networks to the problem of machine-printed Latin and Fraktur recognition. Latin and Fraktur recognition differs significantly from handwriting recognition in both the statistical properties of the data, as well as in the required, much higher levels of accuracy. Applications of LSTM networks to handwriting recognition use two-dimensional recurrent networks, since the exact position and baseline of handwritten characters is variable. In contrast, for printed OCR, we used a one-dimensional recurrent network combined with a novel algorithm for baseline and x-height normalization. A number of databases were used for training and testing, including the UW3 database, artificially generated and degraded Fraktur text and scanned pages from a book digitization project. The LSTM architecture achieved 0.6% character-level test-set error on English text. When the artificially degraded Fraktur data set is divided into training and test sets, the system achieves an error rate of 1.64%. On specific books printed in Fraktur (not part of the training set), the system achieves error rates of 0.15% (Fontane) and 1.47% (Ersch-Gruber). These recognition accuracies were found without using any language modelling or any other post-processing techniques.",
"title": ""
},
{
"docid": "0b0273a1e2aeb98eb4115113c8957fd2",
"text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.",
"title": ""
},
{
"docid": "affa4a43b68f8c158090df3a368fe6b6",
"text": "The purpose of this study is to evaluate the impact of modulated light projections perceived through the eyes on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwaves frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.",
"title": ""
},
{
"docid": "49f96e96623502ffe6053cab43054edf",
"text": "Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.",
"title": ""
},
{
"docid": "21ad29105c4b6772b05156afd33ac145",
"text": "High resolution Digital Surface Models (DSMs) produced from airborne laser-scanning or stereo satellite images provide a very useful source of information for automated 3D building reconstruction. In this paper an investigation is reported about extraction of 3D building models from high resolution DSMs and orthorectified images produced from Worldview-2 stereo satellite imagery. The focus is on the generation of 3D models of parametric building roofs, which is the basis for creating Level Of Detail 2 (LOD2) according to the CityGML standard. In particular the building blocks containing several connected buildings with tilted roofs are investigated and the potentials and limitations of the modeling approach are discussed. The edge information extracted from orthorectified image has been employed as additional source of information in 3D reconstruction algorithm. A model driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines for parametric building reconstruction. The 3D model is derived for each building part, and finally, a complete parametric model is formed by merging the 3D models of the individual building parts and adjusting the nodes after the merging step. For the remaining building parts that do not contain ridge lines, a prismatic model using polygon approximation of the corresponding boundary pixels is derived and merged to the parametric models to shape the final model of the building. A qualitative and quantitative assessment of the proposed method for the automatic reconstruction of buildings with parametric roofs is then provided by comparing the final model with the existing surface model as well as some field measurements. Remote Sens. 2013, 5 1682",
"title": ""
},
{
"docid": "c89ce1ded524ff65c1ebd3d20be155bc",
"text": "Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their predictive efficacies for violence. The effect sizes were extracted from 28 original reports published between 1999 and 2008, which assessed the predictive accuracy of more than one tool. We used a within-subject design to improve statistical power and multilevel regression models to disentangle random effects of variation between studies and tools and to adjust for study features. All 9 tools and their subscales predicted violence at about the same moderate level of predictive efficacy with the exception of Psychopathy Checklist--Revised (PCL-R) Factor 1, which predicted violence only at chance level among men. Approximately 25% of the total variance was due to differences between tools, whereas approximately 85% of heterogeneity between studies was explained by methodological features (age, length of follow-up, different types of violent outcome, sex, and sex-related interactions). Sex-differentiated efficacy was found for a small number of the tools. If the intention is only to predict future violence, then the 9 tools are essentially interchangeable; the selection of which tool to use in practice should depend on what other functions the tool can perform rather than on its efficacy in predicting violence. The moderate level of predictive accuracy of these tools suggests that they should not be used solely for some criminal justice decision making that requires a very high level of accuracy such as preventive detention.",
"title": ""
},
{
"docid": "16741aac03ea1a864ddab65c8c73eb7c",
"text": "This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid \"CMOL\" circuits. Such circuits will combine a semiconduc-tor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (program-mable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of \"tiles\". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance.",
"title": ""
},
{
"docid": "cffce89fbb97dc1d2eb31a060a335d3c",
"text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques -namely, Naı̈ve Bayes and Maximum Entropy (ME)when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method. ◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.",
"title": ""
},
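The thesis abstract above mentions fusing classifier outputs with cross-ratio uninorms. Assuming the commonly cited form with neutral element e = 0.5, U(x, y) = xy / (xy + (1 − x)(1 − y)), the sketch below implements that aggregation; the thesis itself may use a different parameterisation, so this is only an illustration of the compensatory behaviour.

```python
def cross_ratio_uninorm(x, y):
    """Cross-ratio uninorm with neutral element 0.5 on [0, 1].

    Values above 0.5 reinforce each other upward, values below 0.5 reinforce
    downward, and 0.5 acts as a neutral element. The corner cases (0, 1) and
    (1, 0) are undefined; here they are resolved to 0 by convention.
    """
    num = x * y
    den = x * y + (1.0 - x) * (1.0 - y)
    return 0.0 if den == 0.0 else num / den

def fuse(scores):
    """Fold a list of classifier outputs in [0, 1] with the uninorm."""
    result = 0.5  # neutral element
    for s in scores:
        result = cross_ratio_uninorm(result, s)
    return result

print(fuse([0.7, 0.8]))   # > 0.8: two positive opinions reinforce each other
print(fuse([0.7, 0.3]))   # conflicting opinions cancel toward the neutral 0.5
print(fuse([0.2, 0.3]))   # < 0.2: two negative opinions reinforce each other
```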
{
"docid": "8c853251e0fb408c829e6f99a581d4cf",
"text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.",
"title": ""
},
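The passage above defines Janossy pooling as the average of a permutation-sensitive function over all reorderings of the input. The sketch below does exactly that with a toy order-sensitive function (a position-weighted sum); for longer sequences one would rely on the approximations the paper discusses rather than full enumeration.

```python
import itertools
import numpy as np

def order_sensitive_f(seq):
    """A deliberately permutation-sensitive function: position-weighted sum."""
    weights = np.arange(1, len(seq) + 1)          # weights 1, 2, 3, ...
    return float(np.dot(weights, seq))

def janossy_pool(seq):
    """Exact Janossy pooling: average f over all permutations of the input."""
    perms = list(itertools.permutations(seq))
    return sum(order_sensitive_f(p) for p in perms) / len(perms)

a = [3.0, 1.0, 2.0]
b = [1.0, 2.0, 3.0]                               # same multiset, different order
print(order_sensitive_f(a), order_sensitive_f(b)) # differ: f is order-sensitive
print(janossy_pool(a), janossy_pool(b))           # equal: pooling is invariant
```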
{
"docid": "fb89a5aa87f1458177d6a32ef25fdf3b",
"text": "The increase in population, the rapid economic growth and the rise in community living standards accelerate municipal solid waste (MSW) generation in developing cities. This problem is especially serious in Pudong New Area, Shanghai, China. The daily amount of MSW generated in Pudong was about 1.11 kg per person in 2006. According to the current population growth trend, the solid waste quantity generated will continue to increase with the city's development. In this paper, we describe a waste generation and composition analysis and provide a comprehensive review of municipal solid waste management (MSWM) in Pudong. Some of the important aspects of waste management, such as the current status of waste collection, transport and disposal in Pudong, will be illustrated. Also, the current situation will be evaluated, and its problems will be identified.",
"title": ""
},
{
"docid": "bcd16100ca6814503e876f9f15b8c7fb",
"text": "OBJECTIVE\nBrain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG.\n\n\nMETHODS\nA total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters.\n\n\nRESULTS\nThe classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard.\n\n\nCONCLUSIONS\nThis is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.",
"title": ""
},
{
"docid": "8e324cf4900431593d9ebc73e7809b23",
"text": "Even though there is a plethora of studies investigating the challenges of adopting ebanking services, a search through the literature indicates that prior studies have investigated either user adoption challenges or the bank implementation challenges. This study integrated both perspectives to provide a broader conceptual framework for investigating challenges banks face in marketing e-banking services in developing country such as Ghana. The results from the mixed method study indicated that institutional–based challenges as well as userbased challenges affect the marketing of e-banking products in Ghana. The strategic implications of the findings for marketing ebanking services are discussed to guide managers to implement e-banking services in Ghana.",
"title": ""
},
{
"docid": "62166980f94bba5e75c9c6ad4a4348f1",
"text": "In this paper the design and the implementation of a linear, non-uniform antenna array for a 77-GHz MIMO FMCW system that allows for the estimation of both the distance and the angular position of a target are presented. The goal is to achieve a good trade-off between the main beam width and the side lobe level. The non-uniform spacing in addition with the MIMO principle offers a superior performance compared to a classical uniform half-wavelength antenna array with an equal number of elements. However the design becomes more complicated and can not be tackled using analytical methods. Starting with elementary array factor considerations the design is approached using brute force, stepwise brute force, and particle swarm optimization. The particle swarm optimized array was also implemented. Simulation results and measurements are presented and discussed.",
"title": ""
},
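The passage above concerns a non-uniform 77-GHz linear array designed to trade off main-beam width against side-lobe level. As background, the sketch below evaluates the standard array factor AF(θ) = Σ_n exp(j·k·x_n·sin θ) for an arbitrary set of element positions; the positions used are made up and are not the paper's optimized layout.

```python
import numpy as np

c, f = 3e8, 77e9
lam = c / f                                   # wavelength at 77 GHz, about 3.9 mm
k0 = 2 * np.pi / lam

# Hypothetical non-uniform element positions in wavelengths (not the paper's design).
x = np.array([0.0, 0.6, 1.35, 2.3, 3.5, 4.9]) * lam

def array_factor_db(theta_rad, positions):
    """Normalized array factor (dB) of a linear array with unit element weights."""
    af = np.exp(1j * k0 * np.outer(np.sin(theta_rad), positions)).sum(axis=1)
    mag = np.maximum(np.abs(af), 1e-12)       # floor to avoid log of zero at nulls
    return 20 * np.log10(mag / mag.max())

theta = np.radians(np.linspace(-90.0, 90.0, 1801))
pattern = array_factor_db(theta, x)
for deg in (0, 10, 30, 60):
    i = int(np.argmin(np.abs(theta - np.radians(deg))))
    print(f"AF({deg:>2d} deg) = {pattern[i]:6.1f} dB")
# A real design would sweep the element positions (e.g. with particle swarm
# optimization) and read beamwidth and side-lobe level off this pattern.
```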
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
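The passage above describes MAPLE as a multi-armed-bandit strategy that trades off exploring questions against exploiting those with high estimated learning gain. The exact algorithm is not given in the record, so the sketch below is only a generic epsilon-greedy bandit over hypothetical questions with simulated gains, meant to illustrate the exploration-exploitation loop rather than MAPLE itself.

```python
import random

random.seed(0)

# Hypothetical per-question expected learning gains (unknown to the algorithm).
true_gain = {"q1": 0.10, "q2": 0.35, "q3": 0.25, "q4": 0.05}

estimates = {q: 0.0 for q in true_gain}
counts = {q: 0 for q in true_gain}
epsilon = 0.1

for step in range(500):
    # Exploration-exploitation: usually pick the question with the best estimate.
    if random.random() < epsilon:
        q = random.choice(list(true_gain))
    else:
        q = max(estimates, key=estimates.get)

    # Simulated noisy observation of the learning gain for this question.
    gain = max(0.0, random.gauss(true_gain[q], 0.1))

    # Incremental mean update of the estimated gain.
    counts[q] += 1
    estimates[q] += (gain - estimates[q]) / counts[q]

print("estimated gains:", {q: round(v, 2) for q, v in estimates.items()})
print("most selected question:", max(counts, key=counts.get))
```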
{
"docid": "13974867d98411b6a999374afcc5b2cb",
"text": "Current best local descriptors are learned on a large dataset of matching and non-matching keypoint pairs. However, data of this kind is not always available since detailed keypoint correspondences can be hard to establish. On the other hand, we can often obtain labels for pairs of keypoint bags. For example, keypoint bags extracted from two images of the same object under different views form a matching pair, and keypoint bags extracted from images of different objects form a non-matching pair. On average, matching pairs should contain more corresponding keypoints than non-matching pairs. We describe an end-to-end differentiable architecture that enables the learning of local keypoint descriptors from such weakly-labeled data.",
"title": ""
},
{
"docid": "bc7f80192416aa7787657aed1bda3997",
"text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets form other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over- fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.",
"title": ""
}
] |
scidocsrr
|
b5191f41edd65e2b3d7ea309eb3b530c
|
HyViDE: a framework for virtual data center network embedding
|
[
{
"docid": "b93022efa40379ca7cc410d8b10ba48e",
"text": "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue.\n To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that the our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.",
"title": ""
},
{
"docid": "3427d27d6c5c444a90a184183f991208",
"text": "Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as \"Virtual Network Embedding (VNE)\" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed.",
"title": ""
}
] |
[
{
"docid": "b3ea5290cad741aa7c3da97ab1c24ccd",
"text": "Methods of alloplastic forehead augmentation using soft expanded polytetrafluoroethylene (ePTFE) and silicone implants are described. Soft ePTFE forehead implantation has the advantage of being technically simpler, with better fixation. The disadvantages are a limited degree of forehead augmentation and higher chance of infection. Properly fabricated soft silicone implants provide potential for larger degree of forehead silhouette augmentation with less risk of infection. The corrugated edge and central perforations of the implant minimize mobility and capsule contraction.",
"title": ""
},
{
"docid": "f6f957790ab0655fb28bed62b08b7be3",
"text": "According to the signal hypothesis, a signal sequence, once having initiated export of a growing protein chain across the rough endoplasmic reticulum, is cleaved from the mature protein at a specific site. It has long been known that some part of the cleavage specificity resides in the last residue of the signal sequence, which invariably is one with a small, uncharged side-chain, but no further specific patterns of amino acids near the point of cleavage have been discovered so far. In this paper, some such patterns, based on a sample of 78 eukaryotic signal sequences, are presented and discussed, and a first attempt at formulating rules for the prediction of cleavage sites is made.",
"title": ""
},
{
"docid": "7b4dd695182f7e15e58f44e309bf897c",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "37d2671c9d89ce5a1c1957bd1490f944",
"text": "In some of object recognition problems, labeled data may not be available for all categories. Zero-shot learning utilizes auxiliary information (also called signatures) d escribing each category in order to find a classifier that can recognize samples from categories with no labeled instance . In this paper, we propose a novel semi-supervised zero-shot learning method that works on an embedding space corresponding to abstract deep visual features. We seek a linear transformation on signatures to map them onto the visual features, such that the mapped signatures of the seen classe s are close to labeled samples of the corresponding classes and unlabeled data are also close to the mapped signatures of one of the unseen classes. We use the idea that the rich deep visual features provide a representation space in whic h samples of each class are usually condensed in a cluster. The effectiveness of the proposed method is demonstrated through extensive experiments on four public benchmarks improving the state-of-the-art prediction accuracy on thr ee of them.",
"title": ""
},
{
"docid": "5e6c24f5f3a2a3c3b0aff67e747757cb",
"text": "Traps have been used extensively to provide early warning of hidden pest infestations. To date, however, there is only one type of trap on the market in the U.K. for storage mites, namely the BT mite trap, or monitor. Laboratory studies have shown that under the test conditions (20 °C, 65% RH) the BT trap is effective at detecting mites for at least 10 days for all three species tested: Lepidoglyphus destructor, Tyrophagus longior and Acarus siro. Further tests showed that all three species reached a trap at a distance of approximately 80 cm in a 24 h period. In experiments using 100 mites of each species, and regardless of either temperature (15 or 20 °C) or relative humidity (65 or 80% RH), the most abundant species in the traps was T. longior, followed by A. siro then L. destructor. Trap catches were highest at 20 °C and 65% RH. Temperature had a greater effect on mite numbers than humidity. Tests using different densities of each mite species showed that the number of L. destructor found in/on the trap was significantly reduced when either of the other two species was dominant. It would appear that there is an interaction between L. destructor and the other two mite species which affects relative numbers found within the trap.",
"title": ""
},
{
"docid": "281152e3fad12edfccfac1122b9467ad",
"text": "With tags widely used in organizing and searching contents in massive data era, how to automatically generate appropriate tags of resource for users became a hot issue on social networks research. Tag recommendation for text resource can be modeled as a keyword extraction problem, hence topic modeling such as LDA which extracts latent semantic topics from text is suitable for tag recommendation. However, latent topics are too coarse-grained to describe resource. Meanwhile, LDA trains corpus globally without considering context information. Besides, topics generated are difficult to be quantifiably represented. These problems lead to the poor quality of tag recommendation in topic model based method. In this paper, we propose topic representation method, which introduces embedding semantic representation into topic model. Our results of evaluation on real social networks show that the proposed method improves the quality of tag recommendation for Chinese text resource, when comparing with traditional LDA-based method, which demonstrates the effectiveness of modifying topic modeling.",
"title": ""
},
{
"docid": "4630cb81feb8519de1e12d9061d557f3",
"text": "Estimation of fragility functions using dynamic structural analysis is an important step in a number of seismic assessment procedures. This paper discusses the applicability of statistical inference concepts for fragility function estimation, describes appropriate fitting approaches for use with various structural analysis strategies, and studies how to fit fragility functions while minimizing the required number of structural analyses. Illustrative results show that multiple stripe analysis produces more efficient fragility estimates than incremental dynamic analysis for a given number of structural analyses, provided that some knowledge of the building’s capacity is available prior to analysis so that relevant portions of the fragility curve can be approximately identified. This finding has other benefits, as the multiple stripe analysis approach allows for different ground motions to be used for analyses at varying intensity levels, to represent the differing characteristics of low intensity and high intensity shaking. The proposed assessment approach also provides a framework for evaluating alternate analysis procedures that may arise in the future.",
"title": ""
},
{
"docid": "2c8b6d6e6b6c64d25fd885207eaa0327",
"text": "Many versions of Unix provide facilities for user-level packet capture, making possible the use of general purpose workstations for network monitoring. Because network monitors run as user-level processes, packets must be copied across the kernel/user-space protection boundary. This copying can be minimized by deploying a kernel agent called a packet filter , which discards unwanted packets as early as possible. The original Unix packet filter was designed around a stack-based filter evaluator that performs sub-optimally on current RISC CPUs. The BSD Packet Filter (BPF) uses a new, registerbased filter evaluator that is up to 20 times faster than the original design. BPF also uses a straightforward buffering strategy that makes its overall performance up to 100 times faster than Sun’s NIT running on the same hardware.",
"title": ""
},
{
"docid": "ab0a00f08aab01a2783c04c76ab841b7",
"text": "While accelerators such as GPUs have limited memory, deep neural networks are becoming larger and will not fit with the memory limitation of accelerators for training. We propose an approach to tackle this problem by rewriting the computational graph of a neural network, in which swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. In particular, we first revise the concept of a computational graph by defining a concrete semantics for variables in a graph. We then formally show how to derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. To realize our approach, we developed a module in TensorFlow, named TFLMS. TFLMS is published as a pull request in the TensorFlow repository for contributing to the TensorFlow community. With TFLMS, we were able to train ResNet-50 and 3DUnet with 4.7x and 2x larger batch size, respectively. In particular, we were able to train 3DUNet using images of size of 1923 for image segmentation, which, without TFLMS, had been done only by dividing the images to smaller images, which affects the accuracy.",
"title": ""
},
{
"docid": "5c4f20fcde1cc7927d359fd2d79c2ba5",
"text": "There are different interpretations of user experience that lead to different scopes of measure. The ISO definition suggests measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. A distinction can be made between usability methods that have the objective of improving human performance, and user experience methods that have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term “user experience” is used to refer to both approaches. DEFINITIONS OF USABILITY AND USER EXPERIENCE There has been a lot of recent debate about the scope of user experience, and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is: A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service. This contrasts with the revised definition of usability in ISO FDIS 9241-210: Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Both these definitions suggest that usability or user experience can be measured during or after use of a product, system or service. A person's “perceptions and responses” in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time. Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7]. USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT Ketola and Roto [4] surveyed the needs for information on user experience in Nokia, asking senior staff: Which User Experience information (measurable data gained from our target users directly or indirectly), is useful for your organization? How? 21 needs were identified from 18 respondents who worked in Research, Development, Care, and Quality. Ketola and Roto categorised the responses in terms of the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorized by type of measure. It is clear that most of the measures are common to conventional approaches to user centred design, but three measures are specific to user experience: • The impact of expected UX to purchase decisions • Continuous excitement • Why and when the user experiences frustration? USER EXPERIENCE EVALUATION METHODS At the CHI 2009 SIG: “User Experience Evaluation – Do You Know Which Method to Use?” [6] [8], participants were asked to describe user experience evaluation methods that they used. 36 methods were collected (including the example methods presented by the organizers). 
These have been categorised in Table 2 by the type of evaluation context, and the type of data collected. There was very little mention of using measures specific to user experience, particularly from industry participants. It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information that helps design for user experience. In that sense user experience evaluation seems to be interpreted as user centred design methods for achieving user experience. The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience from expectation through actual interaction to reflection on the experience. DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development. In the context of user centred design, typical usability concerns include: Measurement category Measurement type Measure Area measured Anticipation Pre-purchase Anticipated use The impact of expected UX to purchase decisions UX lifecycle Overall usability First use Effectiveness Success of taking the product into use UX lifecycle Product upgrade Effectiveness Success in transferring content from old device to the new device UX lifecycle Expectations vs. reality Satisfaction Has the device met your expectations? Retention Long term experience Satisfaction Are you satisfied with the product quality (after 3 months of use) Retention Hedonic Engagement Pleasure Continuous excitement Retention UX Obstacles Frustration Why and when the user experiences frustration? Breakdowns Detailed usability Use of device functions How used What functions are used, how often, why, how, when, where? Use of functions Malfunction Technical problems Amount of “reboots” and severe technical problems experienced. Breakdowns Usability problems Usability problems Top 10 usability problems experienced by the customers. Breakdowns Effect of localization Satisfaction with localisation How do users perceive content in their local language? Localization Latencies Satisfaction with device performance Perceived latencies in key tasks. Device performance Performance Satisfaction with device performance Perceived UX on device performance Device performance Perceived complexity Satisfaction with task complexity Actual and perceived complexity of task accomplishments. Device performance User differences Previous devices Previous user experience Which device you had previously? Retention Differences in user groups User differences How different user groups access features? Use of functions Reliability of product planning User differences Comparison of target users vs. actual buyers? Use of functions Support Customer experience in “touchpoints” Satisfaction with support How does customer think & feel about the interaction in the touch points? Customer care Accuracy of support information Consequences of poor support Does inaccurate support information result in product returns? How? 
Customer care Innovation feedback User wish list New user ideas & innovations triggered by new experiences New technologies Impact of use Change in user behaviour How the device affects user behaviour How are usage patterns changing when new technologies are introduced New technologies Table 1. Categorisation of usability measures reported in [4] 1. Designing for and evaluating overall effectiveness and efficiency. 2. Designing for and evaluating user comfort and satisfaction. 3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems. 4. When relevant, the temporal aspect leads to a concern for learnability. In the context of user centred design, typical user experience concerns include: 1. Understanding and designing the user’s experience with a product: the way in which people interact with a product over time: what they do and why. 2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation and associated emotional responses. Sometimes the two sets of issues are contrasted as usability and user experience. But some organisations would include both under the common umbrella of user experience. Evaluation context Lab tests Lab study with mind maps Paper prototyping Field tests Product / Tool Comparison Competitive evaluation of prototypes in the wild Field observation Long term pilot study Longitudinal comparison Contextual Inquiry Observation/Post Interview Activity Experience Sampling Longitudinal Evaluation Ethnography Field observations Longitudinal Studies Evaluation of groups Evaluating collaborative user experiences, Instrumented product TRUE Tracking Realtime User Experience Domain specific Nintendo Wii Children OPOS Outdoor Play Observation Scheme This-or-that Approaches Evaluating UX jointly with usability Evaluation data User opinion/interview Lab study with mind maps Quick and dirty evaluation Audio narrative Retrospective interview Contextual Inquiry Focus groups evaluation Observation/Post Interview Activity Experience Sampling Sensual Evaluation Instrument Contextual Laddering Interview ESM User questionnaire Survey Questions Emocards Experience sampling triggered by events, SAM Magnitude Estimation TRUE Tracking Realtime User Experience Questionnaire (e.g. AttrakDiff) Human responses PURE preverbal user reaction evaluation Psycho-physiological measurements Expert evaluation Expert evaluation Heuristic matrix Perspective-Based Inspection Table 2. User experience evaluation methods (CHI 2009 SIG) CONCLUSIONS The scope of user experience The concept of user experience both broadens: • The range of human responses that would be measured to include pleasure. • The circumstances in which they would be measured to include anticipated use and reflection on use. Equally importantly the goal to achieve improved user experience over the whole lifecycle of user involvement with the product leads to increased emphasis on use of methods that help understand what can be done to improve this experience through the whole lifecycle of user involvement. However, notably absent from any of the current surveys or initiative",
"title": ""
},
{
"docid": "8f3323f43794789215e001b53fef149e",
"text": "Human pose estimation is one of the key problems in computer vision that has been studied for well over 15 years. The reason for its importance is the abundance of applications that can benefit from such a technology. For example, human pose estimation allows for higher level reasoning in the context of humancomputer interaction and activity recognition; it is also one of the basic building blocks for marker-less motion capture (MoCap) technology. MoCap technology is useful for applications ranging from character animation to clinical analysis of gait pathologies. Despite many years of research, however, pose estimation remains a very difficult and still largely unsolved problem. Among the most significant challenges are: (1) variability of human visual appearance in images, (2) variability in lighting conditions, (3) variability in human physique, (4) partial occlusions due to self articulation and layering of objects in the scene, (5) complexity of human skeletal structure, (6) high dimensionality of the pose, and (7) the loss of 3d information that results from observing the pose from 2d planar image projections. To date, there is no approach that can produce satisfactory results in general, unconstrained settings while dealing with all of the aforementioned challenges.",
"title": ""
},
{
"docid": "8e6be29997001367542283e94c7d8f05",
"text": "Character recognition has been widely used since its inception in applications involved processing of scanned or camera-captured documents. There exist multiple scripts in which the languages are written. The scripts could broadly be divided into cursive and non-cursive scripts. The recurrent neural networks have been proved to obtain state-of-the-art results for optical character recognition. We present a thorough investigation of the performance of recurrent neural network (RNN) for cursive and non-cursive scripts. We employ bidirectional long short-term memory (BLSTM) networks, which is a variant of the standard RNN. The output layer of the architecture used to carry out our investigation is a special layer called connectionist temporal classification (CTC) which does the sequence alignment. The CTC layer takes as an input the activations of LSTM and aligns the target labels with the inputs. The results were obtained at the character level for both cursive Urdu and non-cursive English scripts are significant and suggest that the BLSTM technique is potentially more useful than the existing OCR algorithms.",
"title": ""
},
{
"docid": "c29586780948b05929bed472bccb48e3",
"text": "Recognition and perception based mobile applications, such as image recognition, are on the rise. These applications recognize the user's surroundings and augment it with information and/or media. These applications are latency-sensitive. They have a soft-realtime nature - late results are potentially meaningless. On the one hand, given the compute-intensive nature of the tasks performed by such applications, execution is typically offloaded to the cloud. On the other hand, offloading such applications to the cloud incurs network latency, which can increase the user-perceived latency. Consequently, edge computing has been proposed to let devices offload intensive tasks to edge servers instead of the cloud, to reduce latency. In this paper, we propose a different model for using edge servers. We propose to use the edge as a specialized cache for recognition applications and formulate the expected latency for such a cache. We show that using an edge server like a typical web cache, for recognition applications, can lead to higher latencies. We propose Cachier, a system that uses the caching model along with novel optimizations to minimize latency by adaptively balancing load between the edge and the cloud, by leveraging spatiotemporal locality of requests, using offline analysis of applications, and online estimates of network conditions. We evaluate Cachier for image-recognition applications and show that our techniques yield 3x speedup in responsiveness, and perform accurately over a range of operating conditions. To the best of our knowledge, this is the first work that models edge servers as caches for compute-intensive recognition applications, and Cachier is the first system that uses this model to minimize latency for these applications.",
"title": ""
},
{
"docid": "d2430788229faccdeedd080b97d1741c",
"text": "Potentially, empowerment has much to offer health promotion. However, some caution needs to be exercised before the notion is wholeheartedly embraced as the major goal of health promotion. The lack of a clear theoretical underpinning, distortion of the concept by different users, measurement ambiguities, and structural barriers make 'empowerment' difficult to attain. To further discussion, th is paper proposes several assertions about the definition, components, process and outcome of 'empowerment', including the need for a distinction between psychological and community empowerment. These assertions and a model of community empowerment are offered in an attempt to clarify an important issue for health promotion.",
"title": ""
},
{
"docid": "a1787f832fe99a8c353805a41eeb9216",
"text": "This Proposed Work exposes, a advance computing technology that has been developed to help the farmer to take superior decision about many aspects of crop development process. Suitable evaluation and diagnosis of crop disease in the field is very critical for the increased production. Foliar is the major important fungal disease of cotton and occurs in all growing Indian regions. In this work we express new technological strategies using mobile captured symptoms of cotton leaf spot images and categorize the diseases using HPCCDD Proposed Algorithm. The classifier is being trained to achieve intelligent farming, including early Identification of diseases in the groves, selective fungicide application, etc. This proposed work is based on Image RGB feature ranging techniques used to identify the diseases (using Ranging values) in which, the captured images are processed for enhancement first. Then color image segmentation is carried out to get target regions (disease spots). Next Homogenize techniques like Sobel and Canny filter are used to Identify the edges, these extracted edge features are used in classification to identify the disease spots. Finally, pest recommendation is given to the farmers to ensure their crop and reduce the yeildloss.",
"title": ""
},
{
"docid": "84845323a1dcb318bb01fef5346c604d",
"text": "This paper introduced a centrifugal impeller-based wall-climbing robot with the μCOS-II System. Firstly, the climber's basic configurations of mechanical were described. Secondly, the mechanic analyses of walking mechanism was presented, which was essential to the suction device design. Thirdly, the control system including the PC remote control system and the STM32 master slave system was designed. Finally, an experiment was conducted to test the performance of negative pressure generating system and general abilities of wall-climbing robot.",
"title": ""
},
{
"docid": "fbc148e6c44e7315d55f2f5b9a2a2190",
"text": "India contributes about 70% of malaria in the South East Asian Region of WHO. Although annually India reports about two million cases and 1000 deaths attributable to malaria, there is an increasing trend in the proportion of Plasmodium falciparum as the agent. There exists heterogeneity and variability in the risk of malaria transmission between and within the states of the country as many ecotypes/paradigms of malaria have been recognized. The pattern of clinical presentation of severe malaria has also changed and while multi-organ failure is more frequently observed in falciparum malaria, there are reports of vivax malaria presenting with severe manifestations. The high burden populations are ethnic tribes living in the forested pockets of the states like Orissa, Jharkhand, Madhya Pradesh, Chhattisgarh and the North Eastern states which contribute bulk of morbidity and mortality due to malaria in the country. Drug resistance, insecticide resistance, lack of knowledge of actual disease burden along with new paradigms of malaria pose a challenge for malaria control in the country. Considering the existing gaps in reported and estimated morbidity and mortality, need for estimation of true burden of malaria has been stressed. Administrative, financial, technical and operational challenges faced by the national programme have been elucidated. Approaches and priorities that may be helpful in tackling serious issues confronting malaria programme have been outlined.",
"title": ""
},
{
"docid": "1ca92ec69901cda036fce2bb75512019",
"text": "Information Retrieval deals with searching and retrieving information within the documents and it also searches the online databases and internet. Web crawler is defined as a program or software which traverses the Web and downloads web documents in a methodical, automated manner. Based on the type of knowledge, web crawler is usually divided in three types of crawling techniques: General Purpose Crawling, Focused crawling and Distributed Crawling. In this paper, the applicability of Web Crawler in the field of web search and a review on Web Crawler to different problem domains in web search is discussed.",
"title": ""
},
{
"docid": "db94222542e570b4085e8694c7831c4f",
"text": "Sprengel deformity (ie, congenital elevation of the scapula) is a rare clinical entity. However, it is the most common congenital anomaly of the shoulder. Sprengel deformity is caused by abnormal descent of the scapula during embryonic development. Sprengel deformity is associated with cosmetic deformity and decreased shoulder function. Diagnostic confusion with limited scoliosis can be dangerous to the patient because it may delay proper treatment of other abnormalities that may be present with even mild cases. Sprengel deformity is commonly linked to a variety of conditions, including Klippel-Feil syndrome, scoliosis, and rib anomalies. Nonsurgical management can be considered for mild cases. Surgical management is typically warranted for more severe cases, with the goal of improving cosmesis and function. Surgical techniques are centered on resection of the protruding portion of the scapula and inferior translation of the scapula. Recent long-term studies indicate that patients treated surgically maintain improved shoulder function and appearance.",
"title": ""
},
{
"docid": "20acd69a0a61f2abb3d85d69cf721460",
"text": "Internet geolocation technology aims to determine the physical (geographic) location of Internet users and devices. It is currently proposed or in use for a wide variety of purposes, including targeted marketing, restricting digital content sales to authorized jurisdictions, and security applications such as reducing credit card fraud. This raises questions about the veracity of claims of accurate and reliable geolocation. We provide a survey of Internet geolocation technologies with an emphasis on adversarial contexts; that is, we consider how this technology performs against a knowledgeable adversary whose goal is to evade geolocation. We do so by examining first the limitations of existing techniques, and then, from this base, determining how best to evade existing geolocation techniques. We also consider two further geolocation techniques which may be of use even against adversarial targets: (1) the extraction of client IP addresses using functionality introduced in the 1.5 Java API, and (2) the collection of round-trip times using HTTP refreshes. These techniques illustrate that the seemingly straightforward technique of evading geolocation by relaying traffic through a proxy server (or network of proxy servers) is not as straightforward as many end-users might expect. We give a demonstration of this for users of the popular Tor anonymizing network.",
"title": ""
}
] |
scidocsrr
|
3c813c21dbb065c9da5562d21be5b73b
|
Toxic Behaviors in Esports Games: Player Perceptions and Coping Strategies
|
[
{
"docid": "ac46286c7d635ccdcd41358666026c12",
"text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.",
"title": ""
},
{
"docid": "3d7fabdd5f56c683de20640abccafc44",
"text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.",
"title": ""
}
] |
[
{
"docid": "244745da710e8c401173fe39359c7c49",
"text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.",
"title": ""
},
{
"docid": "9f5b61ad41dceff67ab328791ed64630",
"text": "In this paper we present a resource-adaptive framework for real-time vision-aided inertial navigation. Specifically, we focus on the problem of visual-inertial odometry (VIO), in which the objective is to track the motion of a mobile platform in an unknown environment. Our primary interest is navigation using miniature devices with limited computational resources, similar for example to a mobile phone. Our proposed estimation framework consists of two main components: (i) a hybrid EKF estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-based SLAM, and (ii) an adaptive image-processing module that adjusts the number of detected image features based oadaptive image-processing module that adjusts the number of detected image features based on the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework isn the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework is capable of real-time processing of image and inertial data on the processor of a mobile phone.",
"title": ""
},
{
"docid": "6779d20fd95ff4525404bdd4d3c7df4b",
"text": "A new method is presented for adaptive document image binarization, where the page is considered as a collection of subcomponents such as text, background and picture. The problems caused by noise, illumination and many source type-related degradations are addressed. Two new algorithms are applied to determine a local threshold for each pixel. The performance evaluation of the algorithm utilizes test images with ground-truth, evaluation metrics for binarization of textual and synthetic images, and a weight-based ranking procedure for the \"nal result presentation. The proposed algorithms were tested with images including di!erent types of document components and degradations. The results were compared with a number of known techniques in the literature. The benchmarking results show that the method adapts and performs well in each case qualitatively and quantitatively. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1dc7b9dc4f135625e2680dcde8c9e506",
"text": "This paper empirically analyzes di erent e ects of advertising in a nondurable, experience good market. A dynamic learning model of consumer behavior is presented in which we allow both \\informative\" e ects of advertising and \\prestige\" or \\image\" e ects of advertising. This learning model is estimated using consumer level panel data tracking grocery purchases and advertising exposures over time. Empirical results suggest that in this data, advertising's primary e ect was that of informing consumers. The estimates are used to quantify the value of this information to consumers and evaluate welfare implications of an alternative advertising regulatory regime. JEL Classi cations: D12, M37, D83 ' Economics Dept., Boston University, Boston, MA 02115 ([email protected]). This paper is a revised version of the second and third chapters of my doctoral dissertation at Yale University. Many thanks to my advisors: Steve Berry and Ariel Pakes, as well as Lanier Benkard, Russell Cooper, Gautam Gowrisankaran, Sam Kortum, Mike Riordan, John Rust, Roni Shachar, and many seminar participants, including most recently those at the NBER 1997Winter IO meetings, for advice and comments. I thank the Yale School of Management for gratefully providing the data used in this study. Financial support from the Cowles Foundation in the form of the Arvid Anderson Dissertation Fellowship is acknowledged and appreciated. All remaining errors in this paper are my own.",
"title": ""
},
{
"docid": "f26680bb9306ca413d0fd36efa406107",
"text": "Frequency-domain concepts and terminology are commonly used to describe antennas. These are very satisfactory for a CW or narrowband application. However, their validity is questionable for an instantaneous wideband excitation. Time-domain and/or wideband analyses can provide more insight and more effective terminology. Two approaches for this time-domain analysis have been described. The more complete one uses the transfer function, a function which describes the amplitude and phase of the response over the entire frequency spectrum. While this is useful for evaluating the overall response of a system, it may not be practical when trying to characterize an antenna's performance, and trying to compare it with that of other antennas. A more convenient and descriptive approach uses time-domain parameters, such as efficiency, energy pattern, receiving area, etc., with the constraint that the reference or excitation signal is known. The utility of both approaches, for describing the time-domain performance, was demonstrated for antennas which are both small and large, in comparison to the length of the reference signal. The approaches have also been used for other antennas, such as arrays, where they also could be applied to measure the effects of mutual impedance, for a wide-bandwidth signal. The time-domain ground-plane antenna range, on which these measurements were made, is suitable for symmetric antennas. However, the approach can be readily adapted to asymmetric antennas, without a ground plane, by using suitable reference antennas.<<ETX>>",
"title": ""
},
{
"docid": "c8b57dc6e3ef7c6b8712733ec6177275",
"text": "A student information system provides a simple interface for the easy collation and maintenance of all manner of student information. The creation and management of accurate, up-to-date information regarding students' academic careers is critical students and for the faculties and administration ofSebha University in Libya and for any other educational institution. A student information system deals with all kinds of data from enrollment to graduation, including program of study, attendance record, payment of fees and examination results to name but a few. All these dataneed to be made available through a secure, online interface embedded in auniversity's website. To lay the groundwork for such a system, first we need to build the student database to be integrated with the system. Therefore we proposed and implementedan online web-based system, which we named the student data system (SDS),to collect and correct all student data at Sebha University. The output of the system was evaluated by using a similarity (Euclidean distance) algorithm. The results showed that the new data collected by theSDS can fill the gaps and correct the errors in the old manual data records.",
"title": ""
},
{
"docid": "7b7e41ced300aeff7916509c04c4fd6a",
"text": "We present and evaluate various content-based recommendation models that make use of user and item profiles defined in terms of weighted lists of social tags. The studied approaches are adaptations of the Vector Space and Okapi BM25 information retrieval models. We empirically compare the recommenders using two datasets obtained from Delicious and Last.fm social systems, in order to analyse the performance of the approaches in scenarios with different domains and tagging behaviours.",
"title": ""
},
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
},
{
"docid": "aa73df5eadafff7533994c05a8d3c415",
"text": "In this paper, we report on the outcomes of the European project EduWear. The aim of the project was to develop a construction kit with smart textiles and to examine its impact on young people. The construction kit, including a suitable programming environment and a workshop concept, was adopted by children in a number of workshops.\n The evaluation of the workshops showed that designing, creating, and programming wearables with a smart textile construction kit allows for creating personal meaningful projects which relate strongly to aspects of young people's life worlds. Through their construction activities, participants became more self-confident in dealing with technology and were able to draw relations between their own creations and technologies present in their environment. We argue that incorporating such constructionist processes into an appropriate workshop concept is essential for triggering thought processes about the character of digital media beyond the construction process itself.",
"title": ""
},
{
"docid": "f119b0ee9a237ab1e9acdae19664df0f",
"text": "Recent editorials in this journal have defended the right of eminent biologist James Watson to raise the unpopular hypothesis that people of sub-Saharan African descent score lower, on average, than people of European or East Asian descent on tests of general intelligence. As those editorials imply, the scientific evidence is substantial in showing a genetic contribution to these differences. The unjustified ill treatment meted out to Watson therefore requires setting the record straight about the current state of the evidence on intelligence, race, and genetics. In this paper, we summarize our own previous reviews based on 10 categories of evidence: The worldwide distribution of test scores; the g factor of mental ability; heritability differences; brain size differences; trans-racial adoption studies; racial admixture studies; regression-to-the-mean effects; related life-history traits; human origins research; and the poverty of predictions from culture-only explanations. The preponderance of evidence demonstrates that in intelligence, brain size, and other life-history variables, East Asians average a higher IQ and larger brain than Europeans who average a higher IQ and larger brain than Africans. Further, these group differences are 50–80% heritable. These are facts, not opinions and science must be governed by data. There is no place for the ‘‘moralistic fallacy’’ that reality must conform to our social, political, or ethical desires. !c 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7bd3f6b7b2f79f08534b70c16be91c02",
"text": "This paper describes a dual-loop delay-locked loop (DLL) which overcomes the problem of a limited delay range by using multiple voltage-controlled delay lines (VCDLs). A reference loop generates quadrature clocks, which are then delayed with controllable amounts by four VCDLs and multiplexed to generate the output clock in a main loop. This architecture enables the DLL to emulate the infinite-length VCDL with multiple finite-length VCDLs. The DLL incorporates a replica biasing circuit for low-jitter characteristics and a duty cycle corrector immune to prevalent process mismatches. A test chip has been fabricated using a 0.25m CMOS process. At 400 MHz, the peak-to-peak jitter with a quiet 2.5-V supply is 54 ps, and the supply-noise sensitivity is 0.32 ps/mV.",
"title": ""
},
{
"docid": "b0727e320a1c532bd3ede4fd892d8d01",
"text": "Semantic technologies could facilitate realizing features like interoperability and reasoning for Internet of Things (IoT). However, the dynamic and heterogeneous nature of IoT data, constrained resources, and real-time requirements set challenges for applying these technologies. In this paper, we study approaches for delivering semantic data from IoT nodes to distributed reasoning engines and reasoning over such data. We perform experiments to evaluate the scalability of these approaches and also study how reasoning is affected by different data aggregation strategies.",
"title": ""
},
{
"docid": "5a61c356940eef5eb18c53a71befbe5b",
"text": "Recently, plant construction throughout the world, including nuclear power plant construction, has grown significantly. The scale of Korea’s nuclear power plant construction in particular, has increased gradually since it won a contract for a nuclear power plant construction project in the United Arab Emirates in 2009. However, time and monetary resources have been lost in some nuclear power plant construction sites due to lack of risk management ability. The need to prevent losses at nuclear power plant construction sites has become more urgent because it demands professional skills and large-scale resources. Therefore, in this study, the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) were applied in order to make comparisons between decision-making methods, to assess the potential risks at nuclear power plant construction sites. To suggest the appropriate choice between two decision-making methods, a survey was carried out. From the results, the importance and the priority of 24 risk factors, classified by process, cost, safety, and quality, were analyzed. The FAHP was identified as a suitable method for risk assessment of nuclear power plant construction, compared with risk assessment using the AHP. These risk factors will be able to serve as baseline data for risk management in nuclear power plant construction projects.",
"title": ""
},
{
"docid": "d5ddc141311afb6050a58be88303b577",
"text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.",
"title": ""
},
{
"docid": "609cc8dd7323e817ddfc5314070a68bf",
"text": "We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.",
"title": ""
},
{
"docid": "7eca894697ee372abe6f67a069dcd910",
"text": "Government agencies and consulting companies in charge of pavement management face the challenge of maintaining pavements in serviceable conditions throughout their life from the functional and structural standpoints. For this, the assessment and prediction of the pavement conditions are crucial. This study proposes a neuro-fuzzy model to predict the performance of flexible pavements using the parameters routinely collected by agencies to characterize the condition of an existing pavement. These parameters are generally obtained by performing falling weight deflectometer tests and monitoring the development of distresses on the pavement surface. The proposed hybrid model for predicting pavement performance was characterized by multilayer, feedforward neural networks that led the reasoning process of the IF-THEN fuzzy rules. The results of the neuro-fuzzy model were superior to those of the linear regression model in terms of accuracy in the approximation. The proposed neuro-fuzzy model showed good generalization capability, and the evaluation of the model performance produced satisfactory results, demonstrating the efficiency and potential of these new mathematical modeling techniques.",
"title": ""
},
{
"docid": "60bdd255a19784ed2d19550222e61b69",
"text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.",
"title": ""
},
{
"docid": "255ff39001f9bbcd7b1e6fe96f588371",
"text": "We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting, lattice alignment, and successive decoding.",
"title": ""
},
{
"docid": "85b77b88c2a06603267b770dbad8ec73",
"text": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.",
"title": ""
},
{
"docid": "a9b366b2b127b093b547f8a10ac05ca5",
"text": "Each user session in an e-commerce system can be modeled as a sequence of web pages, indicating how the user interacts with the system and makes his/her purchase. A typical recommendation approach, e.g., Collaborative Filtering, generates its results at the beginning of each session, listing the most likely purchased items. However, such approach fails to exploit current viewing history of the user and hence, is unable to provide a real-time customized recommendation service. In this paper, we build a deep recurrent neural network to address the problem. The network tracks how users browse the website using multiple hidden layers. Each hidden layer models how the combinations of webpages are accessed and in what order. To reduce the processing cost, the network only records a finite number of states, while the old states collapse into a single history state. Our model refreshes the recommendation result each time when user opens a new web page. As user's session continues, the recommendation result is gradually refined. Furthermore, we integrate the recurrent neural network with a Feedfoward network which represents the user-item correlations to increase the prediction accuracy. Our approach has been applied to Kaola (http://www.kaola.com), an e-commerce website powered by the NetEase technologies. It shows a significant improvement over previous recommendation service.",
"title": ""
}
] |
scidocsrr
|
805373dedabe870ad3c8e2df8b178041
|
Information sharing on social media sites
|
[
{
"docid": "b262ea4a0a8880d044c77acc84b0c859",
"text": "Online social networks may be important avenues for building and maintaining social capital as adult’s age. However, few studies have explicitly examined the role online communities play in the lives of seniors. In this exploratory study, U.S. seniors were interviewed to assess the impact of Facebook on social capital. Interpretive thematic analysis reveals Facebook facilitates connections to loved ones and may indirectly facilitate bonding social capital. Awareness generated via Facebook often lead to the sharing and receipt of emotional support via other channels. As such, Facebook acted as a catalyst for increasing social capital. The implication of “awareness” as a new dimension of social capital theory is discussed. Additionally, Facebook was found to have potential negative impacts on seniors’ current relationships due to open access to personal information. Finally, common concerns related to privacy, comfort with technology, and inappropriate content were revealed.",
"title": ""
},
{
"docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb",
"text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.",
"title": ""
},
{
"docid": "206263868f70a1ce6aa734019d215a03",
"text": "This paper examines microblogging information diffusion activity during the 2011 Egyptian political uprisings. Specifically, we examine the use of the retweet mechanism on Twitter, using empirical evidence of information propagation to reveal aspects of work that the crowd conducts. Analysis of the widespread contagion of a popular meme reveals interaction between those who were \"on the ground\" in Cairo and those who were not. However, differences between information that appeals to the larger crowd and those who were doing on-the-ground work reveal important interplay between the two realms. Through both qualitative and statistical description, we show how the crowd expresses solidarity and does the work of information processing through recommendation and filtering. We discuss how these aspects of work mutually sustain crowd interaction in a politically sensitive context. In addition, we show how features of this retweet-recommendation behavior could be used in combination with other indicators to identify information that is new and likely coming from the ground.",
"title": ""
}
] |
[
{
"docid": "dde5083017c2db3ffdd90668e28bab4b",
"text": "Current industry standards for describing Web Services focus on ensuring interoperability across diverse platforms, but do not provide a good foundation for automating the use of Web Services. Representational techniques being developed for the Semantic Web can be used to augment these standards. The resulting Web Service specifications enable the development of software programs that can interpret descriptions of unfamiliar Web Services and then employ those services to satisfy user goals. OWL-S (“OWL for Services”) is a set of notations for expressing such specifications, based on the Semantic Web ontology language OWL. It consists of three interrelated parts: a profile ontology, used to describe what the service does; a process ontology and corresponding presentation syntax, used to describe how the service is used; and a grounding ontology, used to describe how to interact with the service. OWL-S can be used to automate a variety of service-related activities involving service discovery, interoperation, and composition. A large body of research on OWL-S has led to the creation of many open-source tools for developing, reasoning about, and dynamically utilizing Web Services.",
"title": ""
},
{
"docid": "bb6749bbd38f1581a35595fdff0b8581",
"text": "Time series analysis, as an application for high dimensional data mining, is a common task in biochemistry, meteorology, climate research, bio-medicine or marketing. Similarity search in data with increasing dimensionality results in an exponential growth of the search space, referred to as Curse of Dimensionality. A common approach to postpone this effect is to apply approximation to reduce the dimensionality of the original data prior to indexing. However, approximation involves loss of information, which also leads to an exponential growth of the search space. Therefore, indexing an approximation with a high dimensionality, i. e. high quality, is desirable.\n We introduce Symbolic Fourier Approximation (SFA) and the SFA trie which allows for indexing of not only large datasets but also high dimensional approximations. This is done by exploiting the trade-off between the quality of the approximation and the degeneration of the index by using a variable number of dimensions to represent each approximation. Our experiments show that SFA combined with the SFA trie can scale up to a factor of 5--10 more indexed dimensions than previous approaches. Thus, it provides lower page accesses and CPU costs by a factor of 2--25 respectively 2--11 for exact similarity search using real world and synthetic data.",
"title": ""
},
{
"docid": "9a1f69647c56d377f4592247d7e1688d",
"text": "We propose a novel solution for computing the relative pose between two generalized cameras that includes reconciling the internal scale of the generalized cameras. This approach can be used to compute a similarity transformation between two coordinate systems, making it useful for loop closure in visual odometry and registering multiple structure from motion reconstructions together. In contrast to alternative similarity transformation methods, our approach uses 2D-2D image correspondences thus is not subject to the depth uncertainty that often arises with 3D points. We utilize a known vertical direction (which may be easily obtained from IMU data or vertical vanishing point detection) of the generalized cameras to solve the generalized relative pose and scale problem as an efficient Quadratic Eigenvalue Problem. To our knowledge, this is the first method for computing similarity transformations that does not require any 3D information. Our experiments on synthetic and real data demonstrate that this leads to improved performance compared to methods that use 3D-3D or 2D-3D correspondences, especially as the depth of the scene increases.",
"title": ""
},
{
"docid": "75a87310e8ed951729fb3e86ea9fde25",
"text": "Disclaimer: The views, processes, or methodologies published in this article are those of the author. They do not necessarily reflect EMC Corporation's views, processes, or methodologies.",
"title": ""
},
{
"docid": "b2332b118b846c9f417558a02975e20a",
"text": "This is the third in a series of four tutorial papers on biomedical signal processing and concerns the estimation of the power spectrum (PS) and coherence function (CF) od biomedical data. The PS is introduced and its estimation by means of the discrete Fourier transform is considered in terms of the problem of resolution in the frequency domain. The periodogram is introduced and its variance, bias and the effects of windowing and smoothing are considered. The use of the autocovariance function as a stage in power spectral estimation is described and the effects of windows in the autocorrelation domain are compared with the related effects of windows in the original time domain. The concept of coherence is introduced and the many ways in which coherence functions might be estimated are considered.",
"title": ""
},
{
"docid": "edba95a46dd44f3e320a8ce417e5ec6d",
"text": "In this paper, the state of the art in ultra-low power (ULP) VLSI design is presented within a unitary framework for the first time. A few general principles are first introduced to gain an insight into the design issues and the approaches that are specific to ULP systems, as well as to better understand the challenges that have to be faced in the foreseeable future. Intuitive understanding is accompanied by rigorous analysis for each key concept. The analysis ranges from the circuit to the micro-architectural level, and reference is given to process, physical and system levels when necessary. Among the main goals of this paper, it is shown that many paradigms and approaches borrowed from traditional above-threshold low-power VLSI design are actually incorrect. Accordingly, common misconceptions in the ULP domain are debunked and replaced with technically sound explanations.",
"title": ""
},
{
"docid": "aba5e022ed343b44e61d272900026b7f",
"text": "Next-generation sequencing allows for cost-effective probing of virus populations at an unprecedented level of detail. The massively parallel sequencing approach can detect low-frequency mutations and it provides a snapshot of the entire virus population. However, analyzing ultra-deep sequencing data obtained from diverse virus populations is challenging because of PCR and sequencing errors and short read lengths, such that the experiment provides only indirect evidence of the underlying viral population structure. Recent computational and statistical advances allow for accommodating some of the confounding factors, including methods for read error correction, haplotype reconstruction, and haplotype frequency estimation. With these methods ultra-deep sequencing can be more reliably used to analyze, in a quantitative manner, the genetic diversity of virus populations.",
"title": ""
},
{
"docid": "ac07f85a8d6114061569e043e19747f5",
"text": "In this paper, some novel and modified driving techniques for a single switch zero voltage switching (ZVS) topology are introduced. These medium/high frequency and digitally synthesized driving techniques can be applied to decrease the dangers of peak currents that may damage the switching circuit when switching in out of nominal conditions. The technique is fully described and evaluated experimentally in a 2500W prototype intended for a domestic induction cooking application.",
"title": ""
},
{
"docid": "b305e3504e3a99a5cd026e7845d98dab",
"text": "This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST and the backwards-smoothing extended Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A twostep approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, Associate Professor, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Associate Fellow AIAA. Aerospace Engineer, Guidance, Navigation and Control Systems Engineering Branch. Email: [email protected]. Fellow AIAA. Postdoctoral Research Fellow, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Member AIAA.",
"title": ""
},
{
"docid": "d9aa5e0d687add02a6b31759c482489c",
"text": "This paper presents an accurate and fast algorithm for road segmentation using convolutional neural network (CNN) and gated recurrent units (GRU). For autonomous vehicles, road segmentation is a fundamental task that can provide the drivable area for path planning. The existing deep neural network based segmentation algorithms usually take a very deep encoder-decoder structure to fuse pixels, which requires heavy computations, large memory and long processing time. Hereby, a CNN-GRU network model is proposed and trained to perform road segmentation using data captured by the front camera of a vehicle. GRU network obtains a long spatial sequence with lower computational complexity, comparing to traditional encoderdecoder architecture. The proposed road detector is evaluated on the KITTI road benchmark and achieves high accuracy for road segmentation at real-time processing speed.",
"title": ""
},
{
"docid": "7a883f32f86dd6c9dbde6f0443072157",
"text": "Gaussian process (GP) regression models make for powerful predictors in out of sample exercises, but cubic runtimes for dense matrix decompositions severely limit the size of data—training and testing—on which they can be deployed. That means that in computer experiment, spatial/geo-physical, and machine learning contexts, GPs no longer enjoy privileged status as data sets continue to balloon in size. We discuss an implementation of local approximate Gaussian process models, in the laGP package for R, that offers a particular sparse-matrix remedy uniquely positioned to leverage modern parallel computing architectures. The laGP approach can be seen as an update on the spatial statistical method of local kriging neighborhoods. We briefly review the method, and provide extensive illustrations of the features in the package through worked-code examples. The appendix covers custom building options for symmetric multi-processor and graphical processing units, and built-in wrapper routines that automate distribution over a simple network of workstations.",
"title": ""
},
{
"docid": "0bdb1d537011582c599a68f70881b274",
"text": "This article examines the acquisition of vocational skills through apprenticeship-type situated learning. Findings from a studies of skilled workers revealed that learning processes that were consonant with the apprenticeship model of learning were highly valued as a means of acquiring and maintaining vocational skills. Supported by current research and theorising, this article, describes some conditions by which situated learning through apprenticeship can be utilised to develop vocational skills. These conditions include the nature of the activities learners engage in, the agency of the learning environment and mentoring role of experts. Conditions which may inhibit the effectiveness of an apprenticeship approach to learning are also addressed. The article concludes by suggesting that situated approaches to learning, such as the apprenticeship model may address problems of access to effective vocational skill development within the workforce.",
"title": ""
},
{
"docid": "5ebf60a0f113ec60c4f9f3c2089e86cb",
"text": "A rapidly burgeoning literature documents copious sex influences on brain anatomy, chemistry and function. This article highlights some of the more intriguing recent discoveries and their implications. Consideration of the effects of sex can help to explain seemingly contradictory findings. Research into sex influences is mandatory to fully understand a host of brain disorders with sex differences in their incidence and/or nature. The striking quantity and diversity of sex-related influences on brain function indicate that the still widespread assumption that sex influences are negligible cannot be justified, and probably retards progress in our field.",
"title": ""
},
{
"docid": "1861cbfefd392f662b350e70c60f3b6b",
"text": "Text mining concerns looking for patterns in unstructured text. The related task of Information Extraction (IE) is about locating specific items in natural-language documents. This paper presents a framework for text mining, called DISCOTEX (Discovery from Text EXtraction), using a learned information extraction system to transform text into more structured data which is then mined for interesting relationships. The initial version of DISCOTEX integrates an IE module acquired by an IE learning system, and a standard rule induction module. In addition, rules mined from a database extracted from a corpus of texts are used to predict additional information to extract from future documents, thereby improving the recall of the underlying extraction system. Encouraging results are presented on applying these techniques to a corpus of computer job announcement postings from an Internet newsgroup.",
"title": ""
},
{
"docid": "5ce00014f84277aca0a4b7dfefc01cbb",
"text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.",
"title": ""
},
{
"docid": "15c3ddb9c01d114ab7d09f010195465b",
"text": "In this paper we have described a solution for supporting independent living of the elderly by means of equipping their home with a simple sensor network to monitor their behaviour. Standard home automation sensors including movement sensors and door entry point sensors are used. By monitoring the sensor data, important information regarding any anomalous behaviour will be identified. Different ways of visualizing large sensor data sets and representing them in a format suitable for clustering the abnormalities are also investigated. In the latter part of this paper, recurrent neural networks are used to predict the future values of the activities for each sensor. The predicted values are used to inform the caregiver in case anomalous behaviour is predicted in the near future. Data collection, classification and prediction are investigated in real home environments with elderly occupants suffering from dementia.",
"title": ""
},
{
"docid": "a03257a06a81fe0d0f8aaa0c2afa26ca",
"text": "Food image recognition is one of the promising applications of visual object recognition in computer vision. In this study, a small-scale dataset consisting of 5822 images of ten categories and a five-layer CNN was constructed to recognize these images. The bag-of-features (BoF) model coupled with support vector machine was first tested as comparison, resulting in an overall accuracy of 56%; while the CNN performed much better with an overall accuracy of 74%. Data expansion techniques were applied to increase the size of training images, which achieved a significantly improved accuracy of more than 90% and prevent the overfitting issue that occurred to the CNN without using data expansion. Further improvement is within reach by collecting more images and optimizing the network architecture and relevant hyper-parameters.",
"title": ""
},
{
"docid": "a32d6897d74397f5874cc116221af207",
"text": "A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.",
"title": ""
},
{
"docid": "f55356d766fd2e6f6b90f18d438edf53",
"text": "The London Interbank Offered Rate (Libor) and the Euro Interbank Offered Rate (Euribor) are two key market benchmark interest rates used in a plethora of financial contracts with notional amounts running into the hundreds of trillions of dollars. The integrity of the rate-setting process for these benchmarks has been under intense scrutiny ever since the first reports of attempts to manipulate these rates surfaced in 2007. In this paper, we analyze Libor and Euribor rate submissions by the individual panel banks and shed light on the underlying manipulation potential, by quantifying their effects on the final rate set (the “fixing”). We explicitly take into account the possibility of collusion between several market participants. Our setup allows us to quantify such effects for the actual rate-setting process that is in place at present, and compare it to several alternative rate-setting procedures. We find that such alternative rate fixings, particularly methodologies that eliminate outliers based on the median of submitted rates and the time-series of past submissions, could significantly reduce the effect of manipulation. Furthermore, we discuss the role of the sample size and the particular questions asked of the panel banks, which are different for Libor and Euribor, and examine the need for a transactions database to validate individual submissions.",
"title": ""
}
] |
scidocsrr
|
55b04e302617ae736e974e365ca8da70
|
COCA: Computation Offload to Clouds Using AOP
|
[
{
"docid": "74227709f4832c3978a21abb9449203b",
"text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.",
"title": ""
}
] |
[
{
"docid": "3c695b12b47f358012f10dc058bf6f6a",
"text": "This paper addresses the problem of classifying places in the environment of a mobile robot into semantic categories. We believe that semantic information about the type of place improves the capabilities of a mobile robot in various domains including localization, path-planning, or human-robot interaction. Our approach uses AdaBoost, a supervised learning algorithm, to train a set of classifiers for place recognition based on laser range data. In this paper we describe how this approach can be applied to distinguish between rooms, corridors, doorways, and hallways. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various environments.",
"title": ""
},
{
"docid": "3b8817b9838374ec58f75f43fbcf209c",
"text": "Background. Breastfeeding is the optimal method for achieving a normal growth and development of the baby. This study aimed to study mothers' perceptions and practices regarding breastfeeding in Mangalore, India. Methodology. A cross-sectional study of 188 mothers was conducted using a structured proforma. Results. Importance of breast feeding was known to most mothers. While initiation of breast feeding within one hour of birth was done by majority of mothers, few had discarded colostrum and adopted prelacteal feeding. Mothers opined that breast feeding is healthy for their babies (96.3%) and easier than infant feeding (79.8%), does not affect marital relationship (51%), and decreases family expenditure (61.1%). However, there were poor perceptions regarding the advantages of breast milk with respect to nutritive value, immune effect, and disease protection. Few respondents reported discontinuation of breastfeeding in previous child if the baby had fever/cold (6%) or diarrhea (18%) and vomiting (26%). There was a statistically significant association between mother's educational level and perceived importance of breastfeeding and also between the mode of delivery and initiation of breast feeding (p < 0.05). Conclusion. Importance of breast feeding was known to most mothers. Few perceptions related to breast milk and feeding along with myths and disbeliefs should be rectified by health education.",
"title": ""
},
{
"docid": "3efaaabf9a93460bace2e70abc71801d",
"text": "BACKGROUND\nNumerous studies report an association between social support and protection from depression, but no systematic review or meta-analysis exists on this topic.\n\n\nAIMS\nTo review systematically the characteristics of social support (types and source) associated with protection from depression across life periods (childhood and adolescence; adulthood; older age) and by study design (cross-sectional v cohort studies).\n\n\nMETHOD\nA systematic literature search conducted in February 2015 yielded 100 eligible studies. Study quality was assessed using a critical appraisal checklist, followed by meta-analyses.\n\n\nRESULTS\nSources of support varied across life periods, with parental support being most important among children and adolescents, whereas adults and older adults relied more on spouses, followed by family and then friends. Significant heterogeneity in social support measurement was noted. Effects were weaker in both magnitude and significance in cohort studies.\n\n\nCONCLUSIONS\nKnowledge gaps remain due to social support measurement heterogeneity and to evidence of reverse causality bias.",
"title": ""
},
{
"docid": "76d297fe81d50d9efa170fb033f3e0df",
"text": "In recent years, many companies have developed various distributed computation frameworks for processing machine learning (ML) jobs in clusters. Networking is a well-known bottleneck for ML systems and the cluster demands efficient scheduling for huge traffic (up to 1GB per flow) generated by ML jobs. Coflow has been proven an effective abstraction to schedule flows of such data-parallel applications. However, the implementation of coflow scheduling policy is constrained when coflow characteristics are unknown a prior, and when TCP congestion control misinterprets the congestion signal leading to low throughput. Fortunately, traffic patterns experienced by some ML jobs support to speculate the complete coflow characteristic with limited information. Hence this paper summarizes coflow from these ML jobs as self-similar coflow and proposes a decentralized self-similar coflow scheduler Cicada. Cicada assigns each coflow a probe flow to speculate its characteristics during the transportation and employs the Shortest Job First (SJF) to separate coflow into strict priority queues based on the speculation result. To achieve full bandwidth for throughput- sensitive ML jobs, and to guarantee the scheduling policy implementation, Cicada promotes the elastic transport-layer rate control that outperforms prior works. Large-scale simulations show that Cicada completes coflow 2.08x faster than the state-of-the-art schemes in the information-agnostic scenario.",
"title": ""
},
{
"docid": "b19630c809608601948a7f16910396f7",
"text": "This paper presents a novel, smart and portable active knee rehabilitation orthotic device (AKROD) designed to train stroke patients to correct knee hyperextension during stance and stiff-legged gait (defined as reduced knee flexion during swing). The knee brace provides variable damping controlled in ways that foster motor recovery in stroke patients. A resistive, variable damper, electro-rheological fluid (ERF) based component is used to facilitate knee flexion during stance by providing resistance to knee buckling. Furthermore, the knee brace is used to assist in knee control during swing, i.e. to allow patients to achieve adequate knee flexion for toe clearance and adequate knee extension in preparation to heel strike. The detailed design of AKROD, the first prototype built, closed loop control results and initial human testing are presented here",
"title": ""
},
{
"docid": "97a6a77cfa356636e11e02ffe6fc0121",
"text": "© 2019 Muhammad Burhan Hafez et al., published by De Gruyter. This work is licensed under the Creative CommonsAttribution-NonCommercial-NoDerivs4.0License. Paladyn, J. Behav. Robot. 2019; 10:14–29 Research Article Open Access Muhammad Burhan Hafez*, Cornelius Weber, Matthias Kerzel, and Stefan Wermter Deep intrinsically motivated continuous actor-critic for eflcient robotic visuomotor skill learning https://doi.org/10.1515/pjbr-2019-0005 Received June 6, 2018; accepted October 29, 2018 Abstract: In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input. Our neural architecture is composed of a critic and an actor network. Both networks receive the hidden representation of a deep convolutional autoencoder which is trained to reconstruct the visual input, while the centre-most hidden representation is also optimized to estimate the state value. Separately, an ensemble of predictive world models generates, based on its learning progress, an intrinsic reward signal which is combined with the extrinsic reward to guide the exploration of the actor-critic learner. Our approach is more data-efficient and inherently more stable than the existing actor-critic methods for continuous control from pixel data. We evaluate our algorithm for the task of learning robotic reaching and grasping skills on a realistic physics simulator and on a humanoid robot. The results show that the control policies learnedwith our approach can achieve better performance than the compared state-of-the-art and baseline algorithms in both dense-reward and challenging sparse-reward settings.",
"title": ""
},
{
"docid": "eb83222ce7180fe3039c00eeb8600d2f",
"text": "Cloud-assisted video streaming has emerged as a new paradigm to optimize multimedia content distribution over the Internet. This article investigates the problem of streaming cloud-assisted real-time video to multiple destinations (e.g., cloud video conferencing, multi-player cloud gaming, etc.) over lossy communication networks. The user diversity and network dynamics result in the delay differences among multiple destinations. This research proposes <underline>D</underline>ifferentiated cloud-<underline>A</underline>ssisted <underline>VI</underline>deo <underline>S</underline>treaming (DAVIS) framework, which proactively leverages such delay differences in video coding and transmission optimization. First, we analytically formulate the optimization problem of joint coding and transmission to maximize received video quality. Second, we develop a quality optimization framework that integrates the video representation selection and FEC (Forward Error Correction) packet interleaving. The proposed DAVIS is able to effectively perform differentiated quality optimization for multiple destinations by taking advantage of the delay differences in cloud-assisted video streaming system. We conduct the performance evaluation through extensive experiments with the Amazon EC2 instances and Exata emulation platform. Evaluation results show that DAVIS outperforms the reference cloud-assisted streaming solutions in video quality and delay performance.",
"title": ""
},
{
"docid": "1ed93d114804da5714b7b612f40e8486",
"text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.",
"title": ""
},
{
"docid": "7858fb4630f385d07e00cb5733e35c85",
"text": "Recommender system is used to recommend items and services to the users and provide recommendations based on prediction. The prediction performance plays vital role in the quality of recommendation. To improve the prediction performance, this paper proposed a new hybrid method based on naïve Bayesian classifier with Gaussian correction and feature engineering. The proposed method is experimented on the well known movie lens 100k data set. The results show better results when compared with existing methods.",
"title": ""
},
{
"docid": "2ed183563bd5cdaafa96b03836883730",
"text": "This is an introduction to the Classic Paper on MOSFET scaling by R. Dennardet al., “Design of Ion-Implanted MOSFET’s with Very Small Physical Dimensions,” published in the IEEE Journal of Solid-State Circuitsin October 1974. The history of scaling and its application to very large scale integration (VLSI) MOSFET technology is traced from 1970 to 1998. The role of scaling in the profound improvements in power delay product over the last three decades is analyzed in basic terms.",
"title": ""
},
{
"docid": "75642d6a79f6b9bb8b02f6d8ded6a370",
"text": "Spectral indices as a selection tool in plant breeding could improve genetic gains for different important traits. The objectives of this study were to assess the potential of using spectral reflectance indices (SRI) to estimate genetic variation for in-season biomass production, leaf chlorophyll, and canopy temperature (CT) in wheat (Triticum aestivum L.) under irrigated conditions. Three field experiments, GHIST (15 CIMMYT globally adapted historic genotypes), RILs1 (25 recombinant inbred lines [RILs]), and RILs2 (36 RILs) were conducted under irrigated conditions at the CIMMYT research station in northwest Mexico in three different years. Five SRI were evaluated to differentiate genotypes for biomass production. In general, genotypic variation for all the indices was significant. Near infrared radiation (NIR)–based indices gave the highest levels of associationwith biomass production and the higher associations were observed at heading and grainfilling, rather than at booting. Overall, NIR-based indices were more consistent and differentiated biomass more effectively compared to the other indices. Indices based on ratio of reflection spectra correlatedwith SPADchlorophyll values, and the associationwas stronger at the generative growth stages. These SRI also successfully differentiated the SPAD values at the genotypic level. The NIR-based indices showed a strong and significant association with CT at the heading and grainfilling stages. These results demonstrate the potential of using SRI as a breeding tool to select for increased genetic gains in biomass and chlorophyll content, plus for cooler canopies. SIGNIFICANT PROGRESS in grain yield of spring wheat under irrigated conditions has been made through the classical breeding approach (Slafer et al., 1994), even though the genetic basis of yield improvement in wheat is not well established (Reynolds et al., 1999). Several authors have reported that progress in grain yield is mainly attributed to better partitioning of photosynthetic products (Waddington et al., 1986; Calderini et al., 1995; Sayre et al., 1997). The systematic increase in the partitioning of assimilates (harvest index) has a theoretical upper limit of approximately 60% (Austin et al., 1980). Further yield increases in wheat through improvement in harvest index will be limited without a further increase in total crop biomass (Austin et al., 1980; Slafer and Andrade, 1991; Reynolds et al., 1999). Though until relatively recently biomass was not commonly associated with yield gains, increases in biomass of spring wheat have been reported (Waddington et al., 1986; Sayre et al., 1997) and more recently in association with yield increases (Singh et al., 1998; Reynolds et al., 2005; Shearman et al., 2005). Thus, a breeding approach is needed that will select genotypes with higher biomass capacity, while maintaining the high partitioning rate of photosynthetic products. Direct estimation of biomass is a timeand laborintensive undertaking. Moreover, destructive in-season sampling involves large sampling errors (Whan et al., 1991) and reduces the final area for estimation of grain yield and final biomass. Regan et al. (1992) demonstrated a method to select superior genotypes of spring wheat for early vigor under rainfed conditions using a destructive sampling technique, but such sampling is impossible for breeding programs where a large number of genotypes are being screened for various desirable traits. 
Spectral reflectance indices are a potentially rapid technique that could assess biomass at the genotypic level without destructive sampling (Elliott and Regan, 1993; Smith et al., 1993; Bellairs et al., 1996; Peñuelas et al., 1997). Canopy light reflectance properties based mainly on the absorption of light at a specific wavelength are associated with specific plant characteristics. The spectral reflectance in the visible (VIS) wavelengths (400–700 nm) depends on the absorption of light by leaf chlorophyll and associated pigments such as carotenoids and anthocyanins. The reflectance of the VIS wavelengths is relatively low because of the high absorption of light energy by these pigments. In contrast, the reflectance of the NIR wavelengths (700–1300 nm) is high, since it is not absorbed by plant pigments and is scattered by plant tissue at different levels in the canopy, such that much of it is reflected back rather than being absorbed by the soil (Knipling, 1970). Spectral reflectance indices were developed on the basis of simple mathematical formulas, such as ratios or differences between the reflectance at given wavelengths (Araus et al., 2001). Simple ratio (SR = NIR/VIS) and the normalized difference vegetation",
"title": ""
},
{
"docid": "b4910e355c44077eb27c62a0c8237204",
"text": "Our proof is built on Perron-Frobenius theorem, a seminal work in matrix theory (Meyer 2000). By Perron-Frobenius theorem, the power iteration algorithm for predicting top K persuaders converges to a unique C and this convergence is independent of the initialization of C if the persuasion probability matrix P is nonnegative, irreducible, and aperiodic (Heath 2002). We first show that P is nonnegative. Each component of the right hand side of Equation (10) is positive except nD $ 0; thus, persuasion probability pij estimated with Equation (10) is positive, for all i, j = 1, 2, ..., n and i ... j. Because all diagonal elements of P are equal to zero and all non-diagonal elements of P are positive persuasion probabilities, P is nonnegative.",
"title": ""
},
{
"docid": "bc9666dbfd3d7eea16ee5793c883eb4c",
"text": "This work introduces VRNN-BPR, a novel deep learning model, which is utilized in sessionbased Recommender systems tackling the data sparsity problem. The proposed model combines a Recurrent Neural Network with an amortized variational inference setup (AVI) and a Bayesian Personalized Ranking in order to produce predictions on sequence-based data and generate recommendations. The model is assessed using a large real-world dataset and the results demonstrate its superiority over current state-of-the-art techniques.",
"title": ""
},
{
"docid": "041b308fe83ac9d5a92e33fd9c84299a",
"text": "Spaceborne synthetic aperture radar systems are severely constrained to a narrow swath by ambiguity limitations. Here a vertically scanned-beam synthetic aperture system (SCANSAR) is proposed as a solution to this problem. The potential length of synthetic aperture must be shared between beam positions, so the along-track resolution is poorer; a direct tradeoff exists between resolution and swath width. The length of the real aperture is independently traded against the number of scanning positions. Design curves and equations are presented for spaceborne SCANSARs for altitudes between 400 and 1400 km and inner angles of incidence between 20° and 40°. When the real antenna is approximately square, it may also be used for a microwave radiometer. The combined radiometer and synthetic-aperture (RADISAR) should be useful for those applications where the poorer resolution of the radiometer is useful for some purposes, but the finer resolution of the radar is needed for others.",
"title": ""
},
{
"docid": "ac7b607cc261654939868a62822a58eb",
"text": "Interdigitated capacitors (IDC) are extensively used for a variety of chemical and biological sensing applications. Printing and functionalizing these IDC sensors on bendable substrates will lead to new innovations in healthcare and medicine, food safety inspection, environmental monitoring, and public security. The synthesis of an electrically conductive aqueous graphene ink stabilized in deionized water using the polymer Carboxymethyl Cellulose (CMC) is introduced in this paper. CMC is a nontoxic hydrophilic cellulose derivative used in food industry. The water-based graphene ink is then used to fabricate IDC sensors on mechanically flexible polyimide substrates. The capacitance and frequency response of the sensors are analyzed, and the effect of mechanical stress on the electrical properties is examined. Experimental results confirm low thin film resistivity (~6;.6×10-3 Ω-cm) and high capacitance (>100 pF). The printed sensors are then used to measure water content of ethanol solutions to demonstrate the proposed conductive ink and fabrication methodology for creating chemical sensors on thin membranes.",
"title": ""
},
{
"docid": "bd590555337d3ada2c641c5f1918cf2c",
"text": "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.",
"title": ""
},
{
"docid": "ce791426ecd9e110f56f1d3d221419c9",
"text": "Software bugs can cause significant financial loss and even the loss of human lives. To reduce such loss, developers devote substantial efforts to fixing bugs, which generally requires much expertise and experience. Various approaches have been proposed to aid debugging. An interesting recent research direction is automatic program repair, which achieves promising results, and attracts much academic and industrial attention. However, people also cast doubt on the effectiveness and promise of this direction. A key criticism is to what extent such approaches can fix real bugs. As only research prototypes for these approaches are available, it is infeasible to address the criticism by evaluating them directly on real bugs. Instead, in this paper, we design and develop BugStat, a tool that extracts and analyzes bug fixes. With BugStat's support, we conduct an empirical study on more than 9,000 real-world bug fixes from six popular Java projects. Comparing the nature of manual fixes with automatic program repair, we distill 15 findings, which are further summarized into four insights on the two key ingredients of automatic program repair: fault localization and faulty code fix. In addition, we provide indirect evidence on the size of the search space to fix real bugs and find that bugs may also reside in non-source files. Our results provide useful guidance and insights for improving the state-of-the-art of automatic program repair.",
"title": ""
},
{
"docid": "4c2223d141f6c9811f31c5da80d61a64",
"text": "Improvement of blast-induced fragmentation and crusher efficiency by means of optimized drilling and blasting in Aitik. ACKNOWLEDGMENTS The thesis project presented in this report was conducted in Boliden's Aitik mine; thereby I wish to gratefully thank Boliden Mines for their financial and technical support. I would like to express my very great appreciation to Ulf Nyberg, my supervisor at Luleå University of Technology, and Evgeny Novikov, my supervisor in Boliden for their patient guidance, technical support and valuable suggestions on this project. Useful advice given by Dr. Daniel Johansson is also greatly appreciated; I wish to acknowledge the constructive recommendations provided by Nikolaos Petropoulos as well. My special thanks are extended to the staff of Boliden Mines for all their help and technical support in Aitik. I am particularly grateful for the assistance given by Torbjörn Krigsman, Nils Johansson and Peter Palo. I would also like to acknowledge the help provided collection in Aitik mine. I would also like to thank Forcit company for their assistance with the collection of the data, my special thanks goes to Per-Arne Kortelainen for all his contribution. Finally my deep gratitude goes to my parents for their invaluable support, patience and encouragement throughout my academic studies. SUMMARY Rock blasting is one of the most dominating operations in open pit mining efficiency. As many downstream processes depend on the blast-induced fragmentation, an optimized blasting strategy can influence the total revenue of a mine to a large extent. Boliden Aitik mine in northern Sweden is one of the largest copper mines in Europe. The annual production of the mine is expected to reach 36 million tonnes of ore in 2014; so continuous efforts are being made to boost the production. Highly automated equipment and new processing plant, in addition to new crushers, have sufficient capacity to reach the production goals; the current obstacle in the process of production increase is a bottleneck in crushers caused by oversize boulders. Boulders require extra efforts for secondary blasting or hammer breakage and if entered the crushers, they cause downtimes. Therefore a more evenly distributed fragmentation with less oversize material can be advantageous. Furthermore, a better fragmentation can cause a reduction in energy costs by demanding less amounts of crushing energy. In order to achieve a more favorable fragmentation, two alternative blast designs in addition to a reference design were tested and the results were evaluated and compared to the …",
"title": ""
},
{
"docid": "607cd26b9c51b5b52d15087d0e6662cb",
"text": "Pseudo-NMOS level-shifters consume large static current making them unsuitable for portable devices implemented with HV CMOS. Dynamic level-shifters help reduce power consumption. To reduce on-current to a minimum (sub-nanoamp), modifications are proposed to existing pseudo-NMOS and dynamic level-shifter circuits. A low power three transistor static level-shifter design with a resistive load is also presented.",
"title": ""
},
{
"docid": "f46ae26ef53a692985c2e7dc39cef13b",
"text": "Assisting hip extension with a tethered exosuit and a simulation-optimized force profile reduces metabolic cost of running.",
"title": ""
}
] |
scidocsrr
|
27868cdcf9701d4e128362e20b2f1dd8
|
Student Performance Prediction via Online Learning Behavior Analytics
|
[
{
"docid": "d3b6ba3e4b8e80c3c371226d7ae6d610",
"text": "Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also the entire educational arena. However, theoretical concepts and empirical evidence need to be generated within the fast evolving field of learning analytics. The purpose of the two reported cases studies is to identify alternative approaches to data analysis and to determine the validity and accuracy of a learning analytics framework and its corresponding student and learning profiles. The findings indicate that educational data for learning analytics is context specific and variables carry different meanings and can have different implications across educational institutions and area of studies. Benefits, concerns, and challenges of learning analytics are critically reflected, indicating that learning analytics frameworks need to be sensitive to idiosyncrasies of the educational institution and its stakeholders.",
"title": ""
}
] |
[
{
"docid": "6226fddb004d4e8d41b1167f61d3fcd7",
"text": "We build a neural conversation system using a deep LST Seq2Seq model with an attention mechanism applied on the decoder. We further improve our system by introducing beam search and re-ranking with a Mutual Information objective function method to search for relevant and coherent responses. We find that both models achieve reasonable results after being trained on a domain-specific dataset and are able to pick up contextual information specific to the dataset. The second model, in particular, has promise with addressing the ”I don’t know” problem and de-prioritizing over-generic responses.",
"title": ""
},
{
"docid": "54537c242bc89fbf15d9191be80c5073",
"text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.",
"title": ""
},
{
"docid": "088df7d8d71c00f7129d5249844edbc5",
"text": "Intense multidisciplinary research has provided detailed knowledge of the molecular pathogenesis of Alzheimer disease (AD). This knowledge has been translated into new therapeutic strategies with putative disease-modifying effects. Several of the most promising approaches, such as amyloid-β immunotherapy and secretase inhibition, are now being tested in clinical trials. Disease-modifying treatments might be at their most effective when initiated very early in the course of AD, before amyloid plaques and neurodegeneration become too widespread. Thus, biomarkers are needed that can detect AD in the predementia phase or, ideally, in presymptomatic individuals. In this Review, we present the rationales behind and the diagnostic performances of the core cerebrospinal fluid (CSF) biomarkers for AD, namely total tau, phosphorylated tau and the 42 amino acid form of amyloid-β. These biomarkers reflect AD pathology, and are candidate markers for predicting future cognitive decline in healthy individuals and the progression to dementia in patients who are cognitively impaired. We also discuss emerging plasma and CSF biomarkers, and explore new proteomics-based strategies for identifying additional CSF markers. Furthermore, we outline the roles of CSF biomarkers in drug discovery and clinical trials, and provide perspectives on AD biomarker discovery and the validation of such markers for use in the clinic.",
"title": ""
},
{
"docid": "982af44d0c5fc3d0bddd2804cee77a04",
"text": "Coprime array offers a larger array aperture than uniform linear array with the same number of physical sensors, and has a better spatial resolution with increased degrees of freedom. However, when it comes to the problem of adaptive beamforming, the existing adaptive beamforming algorithms designed for the general array cannot take full advantage of coprime feature offered by the coprime array. In this paper, we propose a novel coprime array adaptive beamforming algorithm, where both robustness and efficiency are well balanced. Specifically, we first decompose the coprime array into a pair of sparse uniform linear subarrays and process their received signals separately. According to the property of coprime integers, the direction-of-arrival (DOA) can be uniquely estimated for each source by matching the super-resolution spatial spectra of the pair of sparse uniform linear subarrays. Further, a joint covariance matrix optimization problem is formulated to estimate the power of each source. The estimated DOAs and their corresponding power are utilized to reconstruct the interference-plus-noise covariance matrix and estimate the signal steering vector. Theoretical analyses are presented in terms of robustness and efficiency, and simulation results demonstrate the effectiveness of the proposed coprime array adaptive beamforming algorithm.",
"title": ""
},
{
"docid": "1ba6f0efdac239fa2cb32064bb743d29",
"text": "This paper presents a new method for determining efficient spatial distributions of police patrol areas. This method employs a traditional maximal covering formulation and an innovative backup covering formulation to provide alternative optimal solutions to police decision makers, and to address the lack of objective quantitative methods for police area design in the literature or in practice. This research demonstrates that operations research methods can be used in police decision making, presents a new backup coverage model that is appropriate for patrol area design, and encourages the integration of geographic information systems and optimal solution procedures. The models and methods are tested with the police geography of Dallas, TX. The optimal solutions are compared with the existing police geography, showing substantial improvement in number of incidents covered as well as total distance traveled.",
"title": ""
},
{
"docid": "26f957036ead7173f93ec16a57097a50",
"text": "The purpose of this paper is to present a direct digital manufacturing (DDM) process that is an order of magnitude faster than other DDM processes currently available. The developed process is based on a mask-image-projection-based Stereolithography process (MIP-SL), during which a Digital Micromirror Device (DMD) controlled projection light cures and cross-links liquid photopolymer resin. In order to achieve high-speed fabrication, we investigated the bottom-up projection system in the MIP-SL process. A set of techniques including film coating and the combination of two-way linear motions have been developed for the quick spreading of liquid resin into uniform thin layers. The process parameters and related settings to achieve the fabrication speed of a few seconds per layer are presented. Additionally, the hardware, software, and material setups developed for fabricating given three-dimensional (3D) digital models are presented. Experimental studies using the developed testbed have been performed to verify the effectiveness and efficiency of the presented fast MIP-SL process. The test results illustrate that the newly developed process can build a moderately sized part within minutes instead of hours that are typically required.",
"title": ""
},
{
"docid": "7c525afc11c41e0a8ca6e8c48bdec97c",
"text": "AT commands, originally designed in the early 80s for controlling modems, are still in use in most modern smartphones to support telephony functions. The role of AT commands in these devices has vastly expanded through vendor-specific customizations, yet the extent of their functionality is unclear and poorly documented. In this paper, we systematically retrieve and extract 3,500 AT commands from over 2,000 Android smartphone firmware images across 11 vendors. We methodically test our corpus of AT commands against eight Android devices from four different vendors through their USB interface and characterize the powerful functionality exposed, including the ability to rewrite device firmware, bypass Android security mechanisms, exfiltrate sensitive device information, perform screen unlocks, and inject touch events solely through the use of AT commands. We demonstrate that the AT command interface contains an alarming amount of unconstrained functionality and represents a broad attack surface on Android devices.",
"title": ""
},
{
"docid": "ac078f78fcf0f675c21a337f8e3b6f5f",
"text": "bstract. Plenoptic cameras, constructed with internal microlens rrays, capture both spatial and angular information, i.e., the full 4-D adiance, of a scene. The design of traditional plenoptic cameras ssumes that each microlens image is completely defocused with espect to the image created by the main camera lens. As a result, nly a single pixel in the final image is rendered from each microlens mage, resulting in disappointingly low resolution. A recently develped alternative approach based on the focused plenoptic camera ses the microlens array as an imaging system focused on the imge plane of the main camera lens. The flexible spatioangular tradeff that becomes available with this design enables rendering of final mages with significantly higher resolution than those from traditional lenoptic cameras. We analyze the focused plenoptic camera in ptical phase space and present basic, blended, and depth-based endering algorithms for producing high-quality, high-resolution imges. We also present our graphics-processing-unit-based impleentations of these algorithms, which are able to render full screen efocused images in real time. © 2010 SPIE and IS&T. DOI: 10.1117/1.3442712",
"title": ""
},
{
"docid": "3a1f8a6934e45b50cbd691b5d28036b1",
"text": "Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.",
"title": ""
},
{
"docid": "38c1f6741d99ffc8ab2ab17b5b91e477",
"text": "This paper reviews recent advances in radar sensor design for low-power healthcare, indoor real-time positioning and other applications of IoT. Various radar front-end architectures and digital processing methods are proposed to improve the detection performance including detection accuracy, detection range and power consumption. While many of the reported designs were prototypes for concept verification, several integrated radar systems have been demonstrated with reliable measured results with demo systems. A performance comparison of latest radar chip designs has been provided to show their features of different architectures. With great development of IoT, short-range low-power radar sensors for healthcare and indoor positioning applications will attract more and more research interests in the near future.",
"title": ""
},
{
"docid": "88ffb30f1506bedaf7c1a3f43aca439e",
"text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.",
"title": ""
},
{
"docid": "c7631e1df773574e3640062c5fd55a01",
"text": "A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.",
"title": ""
},
{
"docid": "397f6c39825a5d8d256e0cc2fbba5d15",
"text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"title": ""
},
{
"docid": "f291c66ebaa6b24d858103b59de792b7",
"text": "In this study, the authors investigated the hypothesis that women's sexual orientation and sexual responses in the laboratory correlate less highly than do men's because women respond primarily to the sexual activities performed by actors, whereas men respond primarily to the gender of the actors. The participants were 20 homosexual women, 27 heterosexual women, 17 homosexual men, and 27 heterosexual men. The videotaped stimuli included men and women engaging in same-sex intercourse, solitary masturbation, or nude exercise (no sexual activity); human male-female copulation; and animal (bonobo chimpanzee or Pan paniscus) copulation. Genital and subjective sexual arousal were continuously recorded. The genital responses of both sexes were weakest to nude exercise and strongest to intercourse. As predicted, however, actor gender was more important for men than for women, and the level of sexual activity was more important for women than for men. Consistent with this result, women responded genitally to bonobo copulation, whereas men did not. An unexpected result was that homosexual women responded more to nude female targets exercising and masturbating than to nude male targets, whereas heterosexual women responded about the same to both sexes at each activity level.",
"title": ""
},
{
"docid": "d04042c81f2c2f7f762025e6b2bd9ab8",
"text": "AIMS AND OBJECTIVES\nTo examine the association between trait emotional intelligence and learning strategies and their influence on academic performance among first-year accelerated nursing students.\n\n\nDESIGN\nThe study used a prospective survey design.\n\n\nMETHODS\nA sample size of 81 students (100% response rate) who undertook the accelerated nursing course at a large university in Sydney participated in the study. Emotional intelligence was measured using the adapted version of the 144-item Trait Emotional Intelligence Questionnaire. Four subscales of the Motivated Strategies for Learning Questionnaire were used to measure extrinsic goal motivation, peer learning, help seeking and critical thinking among the students. The grade point average score obtained at the end of six months was used to measure academic achievement.\n\n\nRESULTS\nThe results demonstrated a statistically significant correlation between emotional intelligence scores and critical thinking (r = 0.41; p < 0.001), help seeking (r = 0.33; p < 0.003) and peer learning (r = 0.32; p < 0.004) but not with extrinsic goal orientation (r = -0.05; p < 0.677). Emotional intelligence emerged as a significant predictor of academic achievement (β = 0.25; p = 0.023).\n\n\nCONCLUSION\nIn addition to their learning styles, higher levels of awareness and understanding of their own emotions have a positive impact on students' academic achievement. Higher emotional intelligence may lead students to pursue their interests more vigorously and think more expansively about subjects of interest, which could be an explanatory factor for higher academic performance in this group of nursing students.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe concepts of emotional intelligence are central to clinical practice as nurses need to know how to deal with their own emotions as well as provide emotional support to patients and their families. It is therefore essential that these skills are developed among student nurses to enhance the quality of their clinical practice.",
"title": ""
},
{
"docid": "d15ce9f62f88a07db6fa427fae61f26c",
"text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.",
"title": ""
},
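For readers unfamiliar with the base scheme the passage above modifies, a minimal sketch of the textbook ElGamal signature (sign and verify only, no message recovery) is given below. This is not the paper's implicit message-recovery variant; the parameters p = 467, g = 2, the private key, and the hash value are toy values chosen purely for illustration and are far too small for real use.

```python
import random
from math import gcd

# Toy public parameters (assumed for illustration): small prime p and a generator g.
p, g = 467, 2
x = 127                 # private key (made up)
y = pow(g, x, p)        # public key

def sign(h):
    """Textbook ElGamal signature on an integer hash h, 0 <= h < p - 1."""
    while True:
        k = random.randrange(2, p - 1)
        if gcd(k, p - 1) == 1:      # k must be invertible mod p - 1
            break
    r = pow(g, k, p)
    s = ((h - x * r) * pow(k, -1, p - 1)) % (p - 1)   # pow(k, -1, m) needs Python 3.8+
    return r, s

def verify(h, r, s):
    """Accept iff g^h == y^r * r^s (mod p), the standard ElGamal check."""
    return 0 < r < p and pow(g, h, p) == (pow(y, r, p) * pow(r, s, p)) % p

h = 100                 # hash of the message, already reduced (made up)
r, s = sign(h)
assert verify(h, r, s)
```

Verification works because s is chosen so that x*r + k*s ≡ h (mod p - 1), hence y^r * r^s = g^(x*r + k*s) = g^h (mod p); the attacks mentioned in the passage exploit the exposed parts of (r, s), which motivates the paper's choice to hide part of the signature.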
{
"docid": "9d2583618e9e00333d044ac53da65ceb",
"text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.",
"title": ""
},
{
"docid": "de7b16961bb4aa2001a3d0859f68e4c6",
"text": "A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on selfcalibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure since all images are taken from the same point in space. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. The calibration method is evaluated on several sets of synthetic and real image data.",
"title": ""
},
{
"docid": "c70e2174bc25577ccac51912be9d7233",
"text": "In this paper, the bridge shape of interior permanent magnet synchronous motor (IPMSM) is designed for integrated starter and generator (ISG) which is applied in hybrid electric vehicle (HEV). Mechanical stress of rotor core which is caused by centrifugal force is the main issue when IPMSM is operated at high speed. The bridge is thin area in rotor core where is mechanically weak point and the shape of bridge significantly affects leakage flux and electromagnetic performance. Therefore, bridge should be designed considering both mechanic and electromagnetic characteristics. In the design process, we firstly find a shape of bridge has low leakage flux and mechanical stress. Next, the calculation of mechanical stress and the electromagnetic characteristics are performed by finite element analysis (FEA). The mechanical stress in rotor core is not maximized in steady high speed but dynamical high momentum. Therefore, transient FEA is necessary to consider the dynamic speed changing in real speed profile for durability experiment. Before the verification test, fatigue characteristic is investigated by using S-N curve of rotor core material. Lastly, the burst test of rotor is performed and the deformation of rotor core is compared between prototype and designed model to verify the design method.",
"title": ""
},
{
"docid": "22c749b089f0bdd1a3296f59fa9cdfc5",
"text": "Inspection of printed circuit board (PCB) has been a crucial process in the electronic manufacturing industry to guarantee product quality & reliability, cut manufacturing cost and to increase production. The PCB inspection involves detection of defects in the PCB and classification of those defects in order to identify the roots of defects. In this paper, all 14 types of defects are detected and are classified in all possible classes using referential inspection approach. The proposed algorithm is mainly divided into five stages: Image registration, Pre-processing, Image segmentation, Defect detection and Defect classification. The algorithm is able to perform inspection even when captured test image is rotated, scaled and translated with respect to template image which makes the algorithm rotation, scale and translation in-variant. The novelty of the algorithm lies in its robustness to analyze a defect in its different possible appearance and severity. In addition to this, algorithm takes only 2.528 s to inspect a PCB image. The efficacy of the proposed algorithm is verified by conducting experiments on the different PCB images and it shows that the proposed afgorithm is suitable for automatic visual inspection of PCBs.",
"title": ""
}
] |
scidocsrr
|
77486c517e7e625cbc9c644f139d57f3
|
Realization of Kalman filter in GNU radio
|
[
{
"docid": "79263437dad5927ce3615edd36ca1eab",
"text": "This paper gives an insight on how to develop plug-ins (signal processing blocks) for GNU Radio Companion. GRC is on the monitoring computer and does bulk of the signal processing before transmission and after reception. The coding done in order to develop any block is discussed. A block that performs Huffman coding has been built. Huffman coding is a coding technique that gives a prefix code. A block that performs convolution coding at any desired rate using any generator polynomial has also been built. Both Huffman and Convolution coding are done on data stored in file sources by these blocks. This paper thus describes the ease of signal processing that can be attained by developing blocks in demand by changing the C++ and PYTHON codes of the HOWTO package. Being an open source it is available to all, is highly cost effective and is a field with great potential.",
"title": ""
},
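To make the block-development workflow described in the passage above a little more concrete, here is a minimal sketch of an embedded Python block using GNU Radio's gr.sync_block API, wired between a file source and a file sink. It is not the Huffman or convolutional coder built in the paper; the block name, XOR mask, and file names are hypothetical, and a real out-of-tree module would normally be scaffolded with gr_modtool and may also be written in C++ as the passage describes.

```python
import numpy as np
from gnuradio import gr, blocks

class byte_xor_const(gr.sync_block):
    """Toy 1:1 byte-stream block: XORs every input byte with a constant mask."""

    def __init__(self, mask=0x55):
        gr.sync_block.__init__(
            self,
            name="byte_xor_const",
            in_sig=[np.uint8],
            out_sig=[np.uint8],
        )
        self.mask = mask

    def work(self, input_items, output_items):
        # sync_block guarantees equal input/output buffer lengths.
        output_items[0][:] = np.bitwise_xor(input_items[0], self.mask)
        return len(output_items[0])

if __name__ == "__main__":
    tb = gr.top_block()
    src = blocks.file_source(gr.sizeof_char, "input.bin", False)   # file names are placeholders
    snk = blocks.file_sink(gr.sizeof_char, "output.bin")
    tb.connect(src, byte_xor_const(), snk)
    tb.run()
```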
{
"docid": "0c17b82128d0356cf0607576c28ed95b",
"text": "This report analyzes the feasibility of using a Software Defined Radio solution for research purposes. We focused on the open source GNUradio project and studied its suitability for reproducing and analyzing some widespread wireless protocols, such as IEEE 802.11, Bluetooth, IEEE 802.15.4, and GSM. We found that the use of GNUradio with the Universal Software Radio Peripheral can help researchers in avoiding the closed source firmwares/drivers of commercial chipsets by providing a full customizability at physical and datalink layers. On the other hand, software radios are not always capable of correctly reproducing operations previously done in the hardware domain. This leads to several limitations with widespread standards. In this report we try to provide a picture of such limitations and the current status of the GNUradio framework. This work has been supported by the Telecommunications Research Center Vienna (ftw.) project N0. Ftw is supported by the Austrian Government and by the City of Vienna within the competence center program COMET. This work would not have been possible without the precious information contained in the official GNUradio mailing list archive. A special thanks goes to all the active participants.",
"title": ""
}
] |
[
{
"docid": "a30350bb79ef12284ad61ba85bc334a6",
"text": "In statistical machine translation (SMT), syntax-based pre-ordering of the source language is an effective method for dealing with language pairs where there are great differences in their respective word orders. This paper introduces a novel pre-ordering approach based on dependency parsing for Chinese-English SMT. We present a set of dependency-based preordering rules which improved the BLEU score by 1.61 on the NIST 2006 evaluation data. We also investigate the accuracy of the rule set by conducting human evaluations.",
"title": ""
},
{
"docid": "61b6b42e1ce7ac170a481cfc9b147fbb",
"text": "We propose a comprehensive formal framework to classify all market models of cyber-insurance we are aware of. The framework features a common terminology and deals with the specific properties of cyber-risk in a unified way: interdependent security, correlated risk, and information asymmetries. A survey of existing models, tabulated according to our framework, reveals a discrepancy between informal arguments in favor of cyber-insurance as a tool to align incentives for better network security, and analytical results questioning the viability of a market for cyber-insurance. Using our framework, we show which parameters should be considered and endogenized in future models to close this gap.",
"title": ""
},
{
"docid": "06f99b18bae3f15e77db8ff2d8c159cc",
"text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.",
"title": ""
},
{
"docid": "a7c6c8cb92f8cb35c3826b5dc5a86f03",
"text": "Software Defined Satellite Network (SDSN) is a novel framework which brings Software Defined Network (SDN) technologies in the satellite networks. It has great potential to achieve effective and flexible management in the satellite networks. However, the frequent handovers will lead to an increase in the flow table size in SDSN. Due to the limited flow table space, a lot of flows will be dropped if the flow table is full during the handover. This is a burning issue to be solved for mobility management in SDSN. In this paper, we propose a heuristic Timeout Strategy-based Mobility Management algorithm for SDSN, named TSMM. TSMM aims to reduce the drop-flows during handover by considering two key points, the limited flow table space and satellite link handover. We implement TSMM mechanism and conduct contrast experiments. The experimental results verify the good performance in terms of transmission quality, an 8.2%-9.9% decrease in drop-flow rate, and a 6.9%–11.18% decrease in flow table size during the handover.",
"title": ""
},
{
"docid": "8bdd071cf5ff246fb02b986be05012df",
"text": "RNA-seq, has recently become an attractive method of choice in the studies of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the Illumina platform, and to perform a cross-platform comparison based on the results obtained through Affymetrix microarray. As a case study for our work we, used the Saccharomyces cerevisiae strain CEN.PK 113-7D, grown under two different conditions (batch and chemostat). Here, we asses the influence of genetic variation on the estimation of gene expression level using three different aligners for read-mapping (Gsnap, Stampy and TopHat) on S288c genome, the capabilities of five different statistical methods to detect differential gene expression (baySeq, Cuffdiff, DESeq, edgeR and NOISeq) and we explored the consistency between RNA-seq analysis using reference genome and de novo assembly approach. High reproducibility among biological replicates (correlation≥0.99) and high consistency between the two platforms for analysis of gene expression levels (correlation≥0.91) are reported. The results from differential gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microrrays) for gene expression analysis and addresses the contribution of the different steps involved in the analysis of RNA-seq data.",
"title": ""
},
{
"docid": "de38fa4dc01bd1ef779f377cfcbc52f7",
"text": "Like all software, mobile applications (\"apps\") must be adequately tested to gain confidence that they behave correctly. Therefore, in recent years, researchers and practitioners alike have begun to investigate ways to automate apps testing. In particular, because of Android's open source nature and its large share of the market, a great deal of research has been performed on input generation techniques for apps that run on the Android operating systems. At this point in time, there are in fact a number of such techniques in the literature, which differ in the way they generate inputs, the strategy they use to explore the behavior of the app under test, and the specific heuristics they use. To better understand the strengths and weaknesses of these existing approaches, and get general insight on ways they could be made more effective, in this paper we perform a thorough comparison of the main existing test input generation tools for Android. In our comparison, we evaluate the effectiveness of these tools, and their corresponding techniques, according to four metrics: ease of use, ability to work on multiple platforms, code coverage, and ability to detect faults. Our results provide a clear picture of the state of the art in input generation for Android apps and identify future research directions that, if suitably investigated, could lead to more effective and efficient testing tools for Android.",
"title": ""
},
{
"docid": "12a6a40af43d0543771e584b0735a826",
"text": "Purpose Early intervention and support for workers with mental health problems may be influenced by the mental health literacy of the worker, their colleagues and their supervisor. There are gaps, however, in our understanding of how to develop and evaluate mental health literacy within the context of the workplace. The purpose of this study was to evaluate the psychometric properties of a new Mental Health Literacy tool for the Workplace (MHL-W). Methods The MHL-W is a 16-question, vignette-based tool specifically tailored for the workplace context. It includes four vignettes featuring different manifestations of mental ill-health in the workplace, with parallel questions that explore each of the four dimensions of mental health literacy. In order to establish reliability and construct validity, data were collected from 192 healthcare workers who were participating in a mental health training project. Baseline data was used to examine the scale’s internal consistency, factor structure and correlations with general knowledge ratings, confidence ratings, attitudes towards people with mental illness, and attitudes towards seeking help. Paired t-tests were used to examine pre and post intervention scores in order to establish responsiveness of the scale. Results There was strong support for internal consistency of the tool and a one-factor solution. As predicted, the scores correlated highly with an overall rating of knowledge and confidence in addressing mental health issues, and moderately with attitudes towards seeking professional help and (decreased) stigmatized beliefs. It also appears to be responsive to change. Conclusions The MHL-W scale is promising tool to track the need for and impact of mental health education in the workplace.",
"title": ""
},
{
"docid": "c3b6d3b81153637d104efa5382a7a0c8",
"text": "The convex relaxation approaches for power system state estimation (PSSE) offer robust alternatives to the conventional PSSE algorithms, by avoiding local optima and providing guaranteed convergence, critical especially when the states deviate significantly from the nominal conditions. On the other hand, the associated semidefinite programming problem may be computationally demanding. In this work, a variable splitting technique called alternating direction method of multipliers is employed to reduce the complexity, and also efficiently accommodate a regularizer promoting desired low-rank matrix solutions. Both static and online formulations are developed. Numerical tests verify the efficacy of the proposed techniques.",
"title": ""
},
{
"docid": "403944ae7055f5de38e3c540dbe41346",
"text": "In this paper, we study the problem of joint routing, link scheduling and power control to support high data rates for broadband wireless multi-hop networks. We first address the problem of finding an optimal link scheduling and power control policy that minimizes the total average transmission power in the wireless multi-hop network, subject to given constraints regarding the minimum average data rate per link, as well as peak transmission power constraints per node. Multi-access signal interference is explicitly modeled. We use a duality approach whereby, as a byproduct of finding the optimal policy, we find the sensitivity of the minimal total average power with respect to the average data rate for each link. Since the minimal total average power is a convex function of the required minimum average data rates, shortest path algorithms with the link weights set to the link sensitivities can be used to guide the search for a globally optimum routing. We present a few simple examples that show our algorithm can find policies that support data rates that are not possible with conventional approaches. Moreover, we find that optimum allocations do not necessarily route traffic over minimum energy paths.",
"title": ""
},
{
"docid": "30df6113b8994575a6156a7a20eb89f3",
"text": "RESEARCH EXPERIENCE Database Research Group, Columbia University Research Assistant, Advised by Prof. Eugene Wu Sep 2015 – Dec 2016 ■ Data Visualization Management System ● Built a predictive model for user mouse interations on web-browsers and implemented a JavaScript library for the model ● Developed experiments using a Chrome extension to collect user interaction data ● Proposed a data streaming framework to improve the response time of client-server-based data visualizations, which supports prediction for user query intent and progressive data transmission",
"title": ""
},
{
"docid": "da9b9a32db674e5f6366f6b9e2c4ee10",
"text": "We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.",
"title": ""
},
{
"docid": "e700dfb9a3bc0c7c3c750230d37defbf",
"text": "DeviceNetTM and ControlNetTM are two well-known industrial networks based on the CIP protocol (CIP = Control an Information Protocol). Both networks have been developed by Rockwell Automation, but are now owned and maintained by the two manufacturer's organizations ODVA (Open DeviceNet Vendors Association) and ControlNet International. ODVA and ControlNet International have recently introduced the newest member of this family – EtherNet/IP (\"IP\" stands for \"Industrial Protocol\"). This paper describes the techniques and mechanisms that are used to implement a fully consistent set of services and data objects on a TCP/UDP/IP based Ethernet® network.",
"title": ""
},
{
"docid": "a669bebcbb6406549b78f365cf352008",
"text": "Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events on the most popular of the digital currencies--BitCoin--have risen crucial questions about behavior of its exchange rates and they offer a field to study dynamics of the market which consists practically only of speculative traders with no fundamentalists as there is no fundamental value to the currency. In the paper, we connect two phenomena of the latest years--digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia--and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value.",
"title": ""
},
{
"docid": "c2a3344c607cf06c24ed8d2664243284",
"text": "It is common for cloud users to require clusters of inter-connected virtual machines (VMs) in a geo-distributed IaaS cloud, to run their services. Compared to isolated VMs, key challenges on dynamic virtual cluster (VC) provisioning (computation + communication resources) lie in two folds: (1) optimal placement of VCs and inter-VM traffic routing involve NP-hard problems, which are non-trivial to solve offline, not to mention if an online efficient algorithm is sought; (2) an efficient pricing mechanism is missing, which charges a market-driven price for each VC as a whole upon request, while maximizing system efficiency or provider revenue over the entire span. This paper proposes efficient online auction mechanisms to address the above challenges. We first design SWMOA, a novel online algorithm for dynamic VC provisioning and pricing, achieving truthfulness, individual rationality, computation efficiency, and <inline-formula><tex-math notation=\"LaTeX\">$(1+2\\log \\mu)$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq1-2601905.gif\"/></alternatives></inline-formula>-competitiveness in social welfare, where <inline-formula><tex-math notation=\"LaTeX\">$\\mu$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2601905.gif\"/></alternatives></inline-formula> is related to the problem size. Next, applying a randomized reduction technique, we convert the social welfare maximizing auction into a revenue maximizing online auction, PRMOA, achieving <inline-formula><tex-math notation=\"LaTeX\">$O(\\log \\mu)$ </tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq3-2601905.gif\"/></alternatives></inline-formula> -competitiveness in provider revenue, as well as truthfulness, individual rationality and computation efficiency. We investigate auction design in different cases of resource cost functions in the system. We validate the efficacy of the mechanisms through solid theoretical analysis and trace-driven simulations.",
"title": ""
},
{
"docid": "597d49edde282e49703ba0d9e02e3f1e",
"text": "BACKGROUND\nThe vitamin D receptor (VDR) pathway is important in the prevention and potentially in the treatment of many cancers. One important mechanism of VDR action is related to its interaction with the Wnt/beta-catenin pathway. Agonist-bound VDR inhibits the oncogenic Wnt/beta-catenin/TCF pathway by interacting directly with beta-catenin and in some cells by increasing cadherin expression which, in turn, recruits beta-catenin to the membrane. Here we identify TCF-4, a transcriptional regulator and beta-catenin binding partner as an indirect target of the VDR pathway.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn this work, we show that TCF-4 (gene name TCF7L2) is decreased in the mammary gland of the VDR knockout mouse as compared to the wild-type mouse. Furthermore, we show 1,25(OH)2D3 increases TCF-4 at the RNA and protein levels in several human colorectal cancer cell lines, the effect of which is completely dependent on the VDR. In silico analysis of the human and mouse TCF7L2 promoters identified several putative VDR binding elements. Although TCF7L2 promoter reporters responded to exogenous VDR, and 1,25(OH)2D3, mutation analysis and chromatin immunoprecipitation assays, showed that the increase in TCF7L2 did not require recruitment of the VDR to the identified elements and indicates that the regulation by VDR is indirect. This is further confirmed by the requirement of de novo protein synthesis for this up-regulation.\n\n\nCONCLUSIONS/SIGNIFICANCE\nAlthough it is generally assumed that binding of beta-catenin to members of the TCF/LEF family is cancer-promoting, recent studies have indicated that TCF-4 functions instead as a transcriptional repressor that restricts breast and colorectal cancer cell growth. Consequently, we conclude that the 1,25(OH)2D3/VDR-mediated increase in TCF-4 may have a protective role in colon cancer as well as diabetes and Crohn's disease.",
"title": ""
},
{
"docid": "39168bcf3cd49c13c86b13e89197ce7d",
"text": "An unprecedented booming has been witnessed in the research area of artistic style transfer ever since Gatys et al. introduced the neural method. One of the remaining challenges is to balance a trade-off among three critical aspects—speed, flexibility, and quality: (i) the vanilla optimization-based algorithm produces impressive results for arbitrary styles, but is unsatisfyingly slow due to its iterative nature, (ii) the fast approximation methods based on feed-forward neural networks generate satisfactory artistic effects but bound to only a limited number of styles, and (iii) feature-matching methods like AdaIN achieve arbitrary style transfer in a real-time manner but at a cost of the compromised quality. We find it considerably difficult to balance the trade-off well merely using a single feed-forward step and ask, instead, whether there exists an algorithm that could adapt quickly to any style, while the adapted model maintains high efficiency and good image quality. Motivated by this idea, we propose a novel method, coined MetaStyle, which formulates the neural style transfer as a bilevel optimization problem and combines learning with only a few post-processing update steps to adapt to a fast approximation model with satisfying artistic effects, comparable to the optimization-based methods for an arbitrary style. The qualitative and quantitative analysis in the experiments demonstrates that the proposed approach achieves high-quality arbitrary artistic style transfer effectively, with a good trade-off among speed, flexibility, and quality.",
"title": ""
},
{
"docid": "fba60a0dafd02886bd05c307a14da93b",
"text": "Deep neural network (DNN), being able to effectively learn from a training set and provide highly accurate classification results, has become the de-facto technique used in many mission-critical systems. The security of DNN itself is therefore of great concern. In this paper, we investigate the impact of fault injection attacks on DNN, wherein attackers try to misclassify a specified input pattern into an adversarial class by modifying the parameters used in DNN via fault injection. We propose two kinds of fault injection attacks to achieve this objective. Without considering stealthiness of the attack, single bias attack (SBA) only requires to modify one parameter in DNN for misclassification, based on the observation that the outputs of DNN may linearly depend on some parameters. Gradient descent attack (GDA) takes stealthiness into consideration. By controlling the amount of modification to DNN parameters, GDA is able to minimize the fault injection impact on input patterns other than the specified one. Experimental results demonstrate the effectiveness and efficiency of the proposed attacks.",
"title": ""
},
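As a toy illustration of the linearity argument behind the single bias attack described above: when logits depend additively on a bias vector, raising one bias by slightly more than the logit gap flips the predicted class for a targeted input. The sketch below uses a made-up 3-class linear model rather than a trained DNN, so it only demonstrates the idea, not the paper's actual SBA or GDA procedures or their stealthiness considerations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up linear "network": logits = W @ x + b.
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)                 # the specific input the attacker targets

logits = W @ x + b
original = int(np.argmax(logits))
target = (original + 1) % 3            # arbitrary adversarial class

# Single-parameter "fault": bump the target class's bias just past the logit gap.
delta = (logits[original] - logits[target]) + 1e-3
b_faulty = b.copy()
b_faulty[target] += delta

assert int(np.argmax(W @ x + b_faulty)) == target   # targeted misclassification
```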
{
"docid": "ccc3cf21c4c97f9c56915b4d1e804966",
"text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.",
"title": ""
},
{
"docid": "715e5655651ed879f2439ed86e860bc9",
"text": "This paper presents a new permanent-magnet gear based on the cycloid gearing principle, which normally is characterized by an extreme torque density and a very high gearing ratio. An initial design of the proposed magnetic gear was designed, analyzed, and optimized with an analytical model regarding torque density. The results were promising as compared to other high-performance magnetic-gear designs. A test model was constructed to verify the analytical model.",
"title": ""
},
{
"docid": "b8505166c395750ee47127439a4afa1a",
"text": "Modern replicated data stores aim to provide high availability, by immediately responding to client requests, often by implementing objects that expose concurrency. Such objects, for example, multi-valued registers (MVRs), do not have sequential specifications. This paper explores a recent model for replicated data stores that can be used to precisely specify causal consistency for such objects, and liveness properties like eventual consistency, without revealing details of the underlying implementation. The model is used to prove the following results: An eventually consistent data store implementing MVRs cannot satisfy a consistency model strictly stronger than observable causal consistency (OCC). OCC is a model somewhat stronger than causal consistency, which captures executions in which client observations can use causality to infer concurrency of operations. This result holds under certain assumptions about the data store. Under the same assumptions, an eventually consistent and causally consistent replicated data store must send messages of unbounded size: If s objects are supported by n replicas, then, for every k > 1, there is an execution in which an Ω({n,s} k)-bit message is sent.",
"title": ""
}
] |
scidocsrr
|
e26691763ff4bc685f34d288d09a8332
|
Light it up: using paper circuitry to enhance low-fidelity paper prototypes for children
|
[
{
"docid": "f641e0da7b9aaffe0fabd1a6b60a6c52",
"text": "This paper introduces a low cost, fast and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of 'instant inkjet circuits' is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large area sensors and high frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Having presented this exciting new technology, we explain the tools and techniques we have found useful for the first time. Our main research contribution is to characterize the performance of instant inkjet circuits and illustrate a range of possibilities that are enabled by way of several example applications which we have built. We believe that this technology will be of immediate appeal to researchers in the ubiquitous computing domain, since it supports the fabrication of a variety of functional electronic device prototypes.",
"title": ""
},
{
"docid": "7efc1612114cde04a70733ce9e851ba9",
"text": "Low-fidelity paper prototyping has proven to be a useful technique for designing graphical user interfaces [1]. Wizard of Oz prototyping for other input modalities, such as speech, also has a long history [2]. Yet to surface are guidelines for low-fidelity prototyping of multimodal applications, those that use multiple and sometimes simultaneous combination of different input types. This paper describes our recent research in low fidelity, multimodal, paper prototyping and suggest guidelines to be used by future designers of multimodal applications.",
"title": ""
}
] |
[
{
"docid": "2a77d3750d35fd9fec52514739303812",
"text": "We present a framework for analyzing and computing motion plans for a robot that operates in an environment that both varies over time and is not completely predictable. We rst classify sources of uncertainty in motion planning into four categories, and argue that the problems addressed in this paper belong to a fundamental category that has received little attention. We treat the changing environment in a exible manner by combining traditional connguration space concepts with a Markov process that models the environment. For this context, we then propose the use of a motion strategy, which provides a motion command for the robot for each contingency that it could be confronted with. We allow the speciication of a desired performance criterion, such as time or distance, and determine a motion strategy that is optimal with respect to that criterion. We demonstrate the breadth of our framework by applying it to a variety of motion planning problems. Examples are computed for problems that involve a changing conng-uration space, hazardous regions and shelters, and processing of random service requests. To achieve this, we have exploited the powerful principle of optimality, which leads to a dynamic programming-based algorithm for determining optimal strategies. In addition, we present several extensions to the basic framework that incorporate additional concerns, such as sensing issues or changes in the geometry of the robot.",
"title": ""
},
{
"docid": "b0e81e112b9aa7ebf653243f00b21f23",
"text": "Recent research indicates that toddlers and infants succeed at various non-verbal spontaneous-response false-belief tasks; here we asked whether toddlers would also succeed at verbal spontaneous-response false-belief tasks that imposed significant linguistic demands. We tested 2.5-year-olds using two novel tasks: a preferential-looking task in which children listened to a false-belief story while looking at a picture book (with matching and non-matching pictures), and a violation-of-expectation task in which children watched an adult 'Subject' answer (correctly or incorrectly) a standard false-belief question. Positive results were obtained with both tasks, despite their linguistic demands. These results (1) support the distinction between spontaneous- and elicited-response tasks by showing that toddlers succeed at verbal false-belief tasks that do not require them to answer direct questions about agents' false beliefs, (2) reinforce claims of robust continuity in early false-belief understanding as assessed by spontaneous-response tasks, and (3) provide researchers with new experimental tasks for exploring early false-belief understanding in neurotypical and autistic populations.",
"title": ""
},
{
"docid": "cc5f1304bb7564ec990cf61ada5c1c0f",
"text": "In the present study, the herbal preparation of Ophthacare brand eye drops was investigated for its anti-inflammatory, antioxidant and antimicrobial activity, using in vivo and in vitro experimental models. Ophthacare brand eye drops exhibited significant anti-inflammatory activity in turpentine liniment-induced ocular inflammation in rabbits. The preparation dose-dependently inhibited ferric chloride-induced lipid peroxidation in vitro and also showed significant antibacterial activity against Escherichia coli and Staphylococcus aureus and antifungal activity against Candida albicans. All these findings suggest that Ophthacare brand eye drops can be used in the treatment of various ophthalmic disorders.",
"title": ""
},
{
"docid": "da17a995148ffcb4e219bb3f56f5ce4a",
"text": "As education communities grow more interested in STEM (science, technology, engineering, and mathematics), schools have integrated more technology and engineering opportunities into their curricula. Makerspaces for all ages have emerged as a way to support STEM learning through creativity, community building, and hands-on learning. However, little research has evaluated the learning that happens in these spaces, especially in young children. One framework that has been used successfully as an evaluative tool in informal and technology-rich learning spaces is Positive Technological Development (PTD). PTD is an educational framework that describes positive behaviors children exhibit while engaging in digital learning experiences. In this exploratory case study, researchers observed children in a makerspace to determine whether the environment (the space and teachers) contributed to children’s Positive Technological Development. N = 20 children and teachers from a Kindergarten classroom were observed over 6 hours as they engaged in makerspace activities. The children’s activity, teacher’s facilitation, and the physical space were evaluated for alignment with the PTD framework. Results reveal that children showed high overall PTD engagement, and that teachers and the space supported children’s learning in complementary aspects of PTD. Recommendations for practitioners hoping to design and implement a young children’s makerspace are discussed.",
"title": ""
},
{
"docid": "82708e65107a0877a052ce81294f535c",
"text": "Abstract—Cyber exercises used to assess the preparedness of a community against cyber crises, technology failures and Critical Information Infrastructure (CII) incidents. The cyber exercises also called cyber crisis exercise or cyber drill, involved partnerships or collaboration of public and private agencies from several sectors. This study investigates Organisation Cyber Resilience (OCR) of participation sectors in cyber exercise called X Maya in Malaysia. This study used a principal based cyber resilience survey called CSuite Executive checklist developed by World Economic Forum in 2012. To ensure suitability of the survey to investigate the OCR, the reliability test was conducted on C-Suite Executive checklist items. The research further investigates the differences of OCR in ten Critical National Infrastructure Information (CNII) sectors participated in the cyber exercise. The One Way ANOVA test result showed a statistically significant difference of OCR among ten CNII sectors participated in the cyber exercise.",
"title": ""
},
{
"docid": "641a51f9a5af9fc9dba4be3d12829fd5",
"text": "In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. The overall architecture of our STAR-Net is illustrated in fig. 1. Our STARNet emphasises the importance of representative image-based feature extraction from text regions by the spatial attention mechanism and the residue learning strategy. It is by far the deepest neural network proposed for scene text recognition.",
"title": ""
},
{
"docid": "625f1f11e627c570e26da9f41f89a28b",
"text": "In this paper, we propose an approach to realize substrate integrated waveguide (SIW)-based leaky-wave antennas (LWAs) supporting continuous beam scanning from backward to forward above the cutoff frequency. First, through phase delay analysis, it was found that SIWs with straight transverse slots support backward and forward radiation of the -1-order mode with an open-stopband (OSB) in between. Subsequently, by introducing additional longitudinal slots as parallel components, the OSB can be suppressed, leading to continuous beam scanning at least from -40° through broadside to 35°. The proposed method only requires a planar structure and obtains less dispersive beam scanning compared with a composite right/left-handed (CRLH) LWA. Both simulations and measurements verify the intended beam scanning operation while verifying the underlying theory.",
"title": ""
},
{
"docid": "837d1ef60937df15afc320b2408ad7b0",
"text": "Zero-shot learning has tremendous application value in complex computer vision tasks, e.g. image classification, localization, image captioning, etc., for its capability of transferring knowledge from seen data to unseen data. Many recent proposed methods have shown that the formulation of a compatibility function and its generalization are crucial for the success of a zero-shot learning model. In this paper, we formulate a softmax-based compatibility function, and more importantly, propose a regularized empirical risk minimization objective to optimize the function parameter which leads to a better model generalization. In comparison to eight baseline models on four benchmark datasets, our model achieved the highest average ranking. Our model was effective even when the training set size was small and significantly outperforming an alternative state-of-the-art model in generalized zero-shot recognition tasks.",
"title": ""
},
{
"docid": "714863ecaa627df1fee3301dde140995",
"text": "Eye movement-based interaction offers the potential of easy, natural, and fast ways of interacting in virtual environments. However, there is little empirical evidence about the advantages or disadvantages of this approach. We developed a new interaction technique for eye movement interaction in a virtual environment and compared it to more conventional 3-D pointing. We conducted an experiment to compare performance of the two interaction types and to assess their impacts on spatial memory of subjects and to explore subjects' satisfaction with the two types of interactions. We found that the eye movement-based interaction was faster than pointing, especially for distant objects. However, subjects' ability to recall spatial information was weaker in the eye condition than the pointing one. Subjects reported equal satisfaction with both types of interactions, despite the technology limitations of current eye tracking equipment.",
"title": ""
},
{
"docid": "7a54331811a4a93df69365b6756e1d5f",
"text": "With object storage services becoming increasingly accepted as replacements for traditional file or block systems, it is important to effectively measure the performance of these services. Thus people can compare different solutions or tune their systems for better performance. However, little has been reported on this specific topic as yet. To address this problem, we present COSBench (Cloud Object Storage Benchmark), a benchmark tool that we are currently working on in Intel for cloud object storage services. In addition, in this paper, we also share the results of the experiments we have performed so far.",
"title": ""
},
{
"docid": "2efb71ffb35bd05c7a124ffe8ad8e684",
"text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.",
"title": ""
},
{
"docid": "45c8f409a5783067b6dce332500d5a88",
"text": "An online learning community enables learners to access up-to-date information via the Internet anytime–anywhere because of the ubiquity of the World Wide Web (WWW). Students can also interact with one another during the learning process. Hence, researchers want to determine whether such interaction produces learning synergy in an online learning community. In this paper, we take the Technology Acceptance Model as a foundation and extend the external variables as well as the Perceived Variables as our model and propose a number of hypotheses. A total of 436 Taiwanese senior high school students participated in this research, and the online learning community focused on learning English. The research results show that all the hypotheses are supported, which indicates that the extended variables can effectively predict whether users will adopt an online learning community. Finally, we discuss the implications of our findings for the future development of online English learning communities. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d798bc49068356495074f92b3bfe7a4b",
"text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The e!ects of three main factors * input nodes, hidden nodes and sample size, are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the over\"tting problem.",
"title": ""
},
{
"docid": "5dcc5026f959b202240befbe56857ac4",
"text": "When a meta-analysis on results from experimental studies is conducted, differences in the study design must be taken into consideration. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates.",
"title": ""
},
{
"docid": "bcb615f8bfe9b2b13a4bfe72b698e4c7",
"text": "is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms. Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the one-and two-sample t-tests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small. The dictum \" more is better \" certainly applies to statistical inference. According to the law of large numbers, a larger sample size implies that confidence intervals are narrower and that more reliable conclusions can be reached. The reality is that researchers are usually far from the ideal \" mega-trial \" performed with 10,000 subjects (cf. Ioannidis, 2013) and will have to work with much smaller samples instead. For a variety of reasons, such as budget, time, or ethical constraints, it may not be possible to gather a large sample. In some fields of science, such as research on rare animal species, persons having a rare illness, or prodigies scoring at the extreme of an ability distribution (e.g., Ruthsatz & Urbach, 2012), …",
"title": ""
},
{
"docid": "7f3bccab6d6043d3dedc464b195df084",
"text": "This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.",
"title": ""
},
{
"docid": "5b2bc42cf2a801dbed78b808fdba894b",
"text": "In this paper, we report the development of a contactless position sensor with thin and planar structures for both sensor and target. The target is designed to be a compact resonator with resonance near the operating frequency, which improves the signal strength and increases the sensing range. The sensor is composed of a source coil and a pair of symmetrically arranged detecting coils. With differential measurement technique, highly accurate edge detection can be realized. Experiment results show that the sensor operates at varying gap size between the target and the sensor, even when the target is at 30 mm away, and the achieved accuracy is within 2% of the size of the sensing coil.",
"title": ""
},
{
"docid": "9871a5673f042b0565c50295be188088",
"text": "Formal security analysis has proven to be a useful tool for tracking modifications in communication protocols in an automated manner, where full security analysis of revisions requires minimum efforts. In this paper, we formally analysed prominent IoT protocols and uncovered many critical challenges in practical IoT settings. We address these challenges by using formal symbolic modelling of such protocols under various adversaries and security goals. Furthermore, this paper extends formal analysis to cryptographic Denial-of-Service (DoS) attacks and demonstrates that a vast majority of IoT protocols are vulnerable to such resource exhaustion attacks. We present a cryptographic DoS attack countermeasure that can be generally used in many IoT protocols. Our study of prominent IoT protocols such as CoAP and MQTT shows the benefits of our approach.",
"title": ""
},
{
"docid": "36be150e997a1fb6b245e8c88688b1b8",
"text": "Restricted Boltzmann Machines (RBMs) are generative models which can learn useful representations from samples of a dataset in an unsupervised fashion. They have been widely employed as an unsupervised pre-training method in machine learning. RBMs have been modified to model time series in two main ways: The Temporal RBM stacks a number of RBMs laterally and introduces temporal dependencies between the hidden layer units; The Conditional RBM, on the other hand, considers past samples of the dataset as a conditional bias and learns a representation which takes these into account. Here we propose a new training method for both the TRBM and the CRBM, which enforces the dynamic structure of temporal datasets. We do so by treating the temporal models as denoising autoencoders, considering past frames of the dataset as corrupted versions of the present frame and minimizing the reconstruction error of the present data by the model. We call this approach Temporal Autoencoding. This leads to a significant improvement in the performance of both models in a filling-in-frames task across a number of datasets. The error reduction for motion capture data is 56% for the CRBM and 80% for the TRBM. Taking the posterior mean prediction instead of single samples further improves the model’s estimates, decreasing the error by as much as 91% for the CRBM on motion capture data. We also trained the model to perform forecasting on a large number of datasets and have found TA pretraining to consistently improve the performance of the forecasts. Furthermore, by looking at the prediction error across time, we can see that this improvement reflects a better representation of the dynamics of the data as opposed to a bias towards reconstructing the observed data on a short time scale. We believe this novel approach of mixing contrastive divergence and autoencoder training yields better models of temporal data, bridging the way towards more robust generative models of time series.",
"title": ""
},
{
"docid": "e4cfcd8bd577fc04480c62bbc6e94a41",
"text": "Background and Objective: Binaural interaction component has been seen to be effective in assessing the binaural interaction process in normal hearing individuals. However, there is a lack of literature regarding the effects of SNHL on the Binaural Interaction Component of ABR. Hence, it is necessary to study binaural interaction occurs at the brainstem when there is an associated hearing impairment. Methods: Three groups of participants in the age range of 30 to 55 years were taken for study i.e. one control group and two experimental groups (symmetrical and asymmetrical hearing loss). The binaural interaction component was determined by subtracting the binaurally evoked auditory potentials from the sum of the monaural auditory evoked potentials: BIC= [{left monaural + right monaural)-binaural}. The latency and amplitude of V peak was estimated for click evoked ABR for monaural and binaural recordings. Results: One way ANOVA revealed a significant difference for binaural interaction component in terms of latency between different groups. One-way ANOVA also showed no significant difference seen between the three different groups in terms of amplitude. Conclusion: The binaural interaction component of auditory brainstem response can be used to evaluate the binaural interaction in symmetrical and asymmetrical hearing loss. This will be helpful to circumvent the effect of peripheral hearing loss in binaural processing of the auditory system. Additionally the test does not require any behavioral cooperation from the client, hence can be administered easily.",
"title": ""
}
] |
scidocsrr
|
7f94a0e839dbdd0cb698f1f04f9f83c1
|
Design for 5G Mobile Network Architecture
|
[
{
"docid": "4412bca4e9165545e4179d261828c85c",
"text": "Today 3G mobile systems are on the ground providing IP connectivity for real-time and non-real-time services. On the other side, there are many wireless technologies that have proven to be important, with the most important ones being 802.11 Wireless Local Area Networks (WLAN) and 802.16 Wireless Metropolitan Area Networks (WMAN), as well as ad-hoc Wireless Personal Area Network (WPAN) and wireless networks for digital TV and radio broadcast. Then, the concepts of 4G is already much discussed and it is almost certain that 4G will include several standards under a common umbrella, similarly to 3G, but with IEEE 802.xx wireless mobile networks included from the beginning. The main contribution of this paper is definition of 5G (Fifth Generation) mobile network concept, which is seen as user-centric concept instead of operator-centric as in 3G or service-centric concept as seen for 4G. In the proposed concept the mobile user is on the top of all. The 5G terminals will have software defined radios and modulation scheme as well as new error-control schemes can be downloaded from the Internet on the run. The development is seen towards the user terminals as a focus of the 5G mobile networks. The terminals will have access to different wireless technologies at the same time and the terminal should be able to combine different flows from different technologies. Each network will be responsible for handling user-mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. The paper also proposes intelligent Internet phone concept where the mobile phone can choose the best connections by selected constraints and dynamically change them during a single end-to-end connection. The proposal in this paper is fundamental shift in the mobile networking philosophy compared to existing 3G and near-soon 4G mobile technologies, and this concept is called here the 5G.",
"title": ""
}
] |
[
{
"docid": "bda4bdc27e9ea401abb214c3fb7c9813",
"text": "Lipedema is a common, but often underdiagnosed masquerading disease of obesity, which almost exclusively affects females. There are many debates regarding the diagnosis as well as the treatment strategies of the disease. The clinical diagnosis is relatively simple, however, knowledge regarding the pathomechanism is less than limited and curative therapy does not exist at all demanding an urgent need for extensive research. According to our hypothesis, lipedema is an estrogen-regulated polygenetic disease, which manifests in parallel with feminine hormonal changes and leads to vasculo- and lymphangiopathy. Inflammation of the peripheral nerves and sympathetic innervation abnormalities of the subcutaneous adipose tissue also involving estrogen may be responsible for neuropathy. Adipocyte hyperproliferation is likely to be a secondary phenomenon maintaining a vicious cycle. Herein, the relevant articles are reviewed from 1913 until now and discussed in context of the most likely mechanisms leading to the disease, which could serve as a starting point for further research.",
"title": ""
},
{
"docid": "a727d28ed4153d9d9744b3e2b5e47251",
"text": "Darts is enjoyed both as a pub game and as a professional competitive activity.Yet most players aim for the highest scoring region of the board, regardless of their level of skill. By modelling a dart throw as a two-dimensional Gaussian random variable, we show that this is not always the optimal strategy.We develop a method, using the EM algorithm, for a player to obtain a personalized heat map, where the bright regions correspond to the aiming locations with high (expected) pay-offs. This method does not depend in any way on our Gaussian assumption, and we discuss alternative models as well.",
"title": ""
},
{
"docid": "9a4fc12448d166f3a292bfdf6977745d",
"text": "Enabled by the rapid development of virtual reality hardware and software, 360-degree video content has proliferated. From the network perspective, 360-degree video transmission imposes significant challenges because it consumes 4 6χ the bandwidth of a regular video with the same resolution. To address these challenges, in this paper, we propose a motion-prediction-based transmission mechanism that matches network video transmission to viewer needs. Ideally, if viewer motion is perfectly known in advance, we could reduce bandwidth consumption by 80%. Practically, however, to guarantee the quality of viewing experience, we have to address the random nature of viewer motion. Based on our experimental study of viewer motion (comprising 16 video clips and over 150 subjects), we found the viewer motion can be well predicted in 100∼500ms. We propose a machine learning mechanism that predicts not only viewer motion but also prediction deviation itself. The latter is important because it provides valuable input on the amount of redundancy to be transmitted. Based on such predictions, we propose a targeted transmission mechanism that minimizes overall bandwidth consumption while providing probabilistic performance guarantees. Real-data-based evaluations show that the proposed scheme significantly reduces bandwidth consumption while minimizing performance degradation, typically a 45% bandwidth reduction with less than 0.1% failure ratio.",
"title": ""
},
{
"docid": "850e9c1beae0635e629fbb44bda14dc7",
"text": "Power law distribution seems to be an important characteristic of web graphs. Several existing web graph models generate power law graphs by adding new vertices and non-uniform edge connectivities to existing graphs. Researchers have conjectured that preferential connectivity and incremental growth are both required for the power law distribution. In this paper, we propose a different web graph model with power law distribution that does not require incremental growth. We also provide a comparison of our model with several others in their ability to predict web graph clustering behavior.",
"title": ""
},
{
"docid": "e7664a3c413f86792b98912a0241a6ac",
"text": "Seq2seq learning has produced promising results on summarization. However, in many cases, system summaries still struggle to keep the meaning of the original intact. They may miss out important words or relations that play critical roles in the syntactic structure of source sentences. In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to summary sentence. The approach naturally combines source dependency structure with the copy mechanism of an abstractive sentence summarizer. Experimental results demonstrate the effectiveness of incorporating source-side syntactic information in the system, and our proposed approach compares favorably to state-of-the-art methods.",
"title": ""
},
{
"docid": "55658c75bcc3a12c1b3f276050f28355",
"text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC <; 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.",
"title": ""
},
{
"docid": "7437f0c8549cb8f73f352f8043a80d19",
"text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.",
"title": ""
},
{
"docid": "a871176628b28af28f630c447236a2d9",
"text": "More than 70 years ago, the filamentous ascomycete Trichoderma reesei was isolated on the Solomon Islands due to its ability to degrade and thrive on cellulose containing fabrics. This trait that relies on its secreted cellulases is nowadays exploited by several industries. Most prominently in biorefineries which use T. reesei enzymes to saccharify lignocellulose from renewable plant biomass in order to produce biobased fuels and chemicals. In this review we summarize important milestones of the development of T. reesei as the leading production host for biorefinery enzymes, and discuss emerging trends in strain engineering. Trichoderma reesei has very recently also been proposed as a consolidated bioprocessing organism capable of direct conversion of biopolymeric substrates to desired products. We therefore cover this topic by reviewing novel approaches in metabolic engineering of T. reesei.",
"title": ""
},
{
"docid": "101ecfb3d6a20393d147cd2061414369",
"text": "In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real-time. Our approach generates a textured triangle mesh from a signed distance function that it continuously updates as new RGB-D images arrive. We propose to use an octree as the primary data structure which allows us to represent the scene at multiple scales. Furthermore, it allows us to grow the reconstruction volume dynamically. As most space is either free or unknown, we allocate and update only those voxels that are located in a narrow band around the observed surface. In contrast to a regular grid, this approach saves enormous amounts of memory and computation time. The major challenge is to generate and maintain a consistent triangle mesh, as neighboring cells in the octree are more difficult to find and may have different resolutions. To remedy this, we present in this paper a novel algorithm that keeps track of these dependencies, and efficiently updates corresponding parts of the triangle mesh. In our experiments, we demonstrate the real-time capability on a large set of RGB-D sequences. As our approach does not require a GPU, it is well suited for applications on mobile or flying robots with limited computational resources.",
"title": ""
},
{
"docid": "988c161ceae388f5dbcdcc575a9fa465",
"text": "This work presents an architecture for single source, single point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. Results show that the hybrid system provides 5 to 30 dB active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low frequency (> 100 Hz) performance by up to 15 dB over either control component acting individually.",
"title": ""
},
{
"docid": "0c420c064519e15e071660c750c0b7e3",
"text": "In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.",
"title": ""
},
{
"docid": "22b1974fa802c9ea224e6b0b6f98cedb",
"text": "This paper presents a human-inspired control approach to bipedal robotic walking: utilizing human data and output functions that appear to be intrinsic to human walking in order to formally design controllers that provably result in stable robotic walking. Beginning with human walking data, outputs-or functions of the kinematics-are determined that result in a low-dimensional representation of human locomotion. These same outputs can be considered on a robot, and human-inspired control is used to drive the outputs of the robot to the outputs of the human. The main results of this paper are that, in the case of both under and full actuation, the parameters of this controller can be determined through a human-inspired optimization problem that provides the best fit of the human data while simultaneously provably guaranteeing stable robotic walking for which the initial condition can be computed in closed form. These formal results are demonstrated in simulation by considering two bipedal robots-an underactuated 2-D bipedal robot, AMBER, and fully actuated 3-D bipedal robot, NAO-for which stable robotic walking is automatically obtained using only human data. Moreover, in both cases, these simulated walking gaits are realized experimentally to obtain human-inspired bipedal walking on the actual robots.",
"title": ""
},
{
"docid": "f409eace05cd617355440509da50d685",
"text": "Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the english language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.",
"title": ""
},
{
"docid": "16ce10ae21b7ef66746937ba6c9bf321",
"text": "Recent years, deep learning is increasingly prevalent in the field of Software Engineering (SE). However, many open issues still remain to be investigated. How do researchers integrate deep learning into SE problems? Which SE phases are facilitated by deep learning? Do practitioners benefit from deep learning? The answers help practitioners and researchers develop practical deep learning models for SE tasks. To answer these questions, we conduct a bibliography analysis on 98 research papers in SE that use deep learning techniques. We find that 41 SE tasks in all SE phases have been facilitated by deep learning integrated solutions. In which, 84.7% papers only use standard deep learning models and their variants to solve SE problems. The practicability becomes a concern in utilizing deep learning techniques. How to improve the effectiveness, efficiency, understandability, and testability of deep learning based solutions may attract more SE researchers in the future. Introduction Driven by the success of deep learning in data mining and pattern recognition, recent years have witnessed an increasing trend for industrial practitioners and academic researchers to integrate deep learning into SE tasks [1]-[3]. For typical SE tasks, deep learning helps SE participators extract requirements from natural language text [1], generate source code [2], predict defects in software [3], etc. As an initial statistics of research papers in SE in this study, deep learning has achieved competitive performance against previous algorithms on about 40 SE tasks. There are at least 98 research papers published or accepted in 66 venues, integrating deep learning into SE tasks. Despite the encouraging amount of papers and venues, there exists little overview analysis on deep learning in SE, e.g., the common way to integrate deep learning into SE, the SE phases facilitated by deep learning, the interests of SE practitioners on deep learning, etc. Understanding these questions is important. On the one hand, it helps practitioners and researchers get an overview understanding of deep learning in SE. On the other hand, practitioners and researchers can develop more practical deep learning models according to the analysis. For this purpose, this study conducts a bibliography analysis on research papers in the field of SE that use deep learning techniques. In contrast to literature reviews,",
"title": ""
},
{
"docid": "986279f6f47189a6d069c0336fa4ba94",
"text": "Compared to the traditional single-phase-shift control, dual-phase-shift (DPS) control can greatly improve the performance of the isolated bidirectional dual-active-bridge dc-dc converter (IBDC). This letter points out some wrong knowledge about transmission power of IBDC under DPS control in the earlier studies. On this basis, this letter gives the detailed theoretical and experimental analyses of the transmission power of IBDC under DPS control. And the experimental results showed agreement with theoretical analysis.",
"title": ""
},
{
"docid": "19792ab5db07cd1e6cdde79854ba8cb7",
"text": "Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute of a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other peoples' mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.",
"title": ""
},
{
"docid": "220a0be60be41705a95908df8180cf95",
"text": "Since the introduction of the first power module by Semikron in 1975, many innovations have been made to improve the thermal, electrical, and mechanical performance of power modules. These innovations in packaging technology focus on the enhancement of the heat dissipation and thermal cycling capability of the modules. Thermal cycles, caused by varying load and environmental operating conditions, induce high mechanical stress in the interconnection layers of the power module due to the different coefficients of thermal expansion (CTE), leading to fatigue and growth of microcracks in the bonding materials. As a result, the lifetime of power modules can be severely limited in practical applications. Furthermore, to reduce the size and weight of converters, the semiconductors are being operated at higher junction temperatures. Higher temperatures are especially of great interest for use of wide-?bandgap materials, such as SiC and GaN, because these materials leverage their material characteristics, particularly at higher temperatures. To satisfy these tightened requirements, on the one hand, conventional power modules, i.e., direct bonded Cu (DBC)-based systems with bond wire contacts, have been further improved. On the other hand, alternative packaging techniques, e.g., chip embedding into printed circuit boards (PCBs) and power module packaging based on the selective laser melting (SLM) technique, have been developed, which might constitute an alternative to conventional power modules in certain applications.",
"title": ""
},
{
"docid": "06f1c7daafcf59a8eb2ddf430d0d7f18",
"text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05. Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.",
"title": ""
},
{
"docid": "deb3ac73ec2e8587371c6078dc4b2205",
"text": "Natural antimicrobials as well as essential oils (EOs) have gained interest to inhibit pathogenic microorganisms and to control food borne diseases. Campylobacter spp. are one of the most common causative agents of gastroenteritis. In this study, cardamom, cumin, and dill weed EOs were evaluated for their antibacterial activities against Campylobacter jejuni and Campylobacter coli by using agar-well diffusion and broth microdilution methods, along with the mechanisms of antimicrobial action. Chemical compositions of EOs were also tested by gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The results showed that cardamom and dill weed EOs possess greater antimicrobial activity than cumin with larger inhibition zones and lower minimum inhibitory concentrations. The permeability of cell membrane and cell membrane integrity were evaluated by determining relative electric conductivity and release of cell constituents into supernatant at 260 nm, respectively. Moreover, effect of EOs on the cell membrane of Campylobacter spp. was also investigated by measuring extracellular ATP concentration. Increase of relative electric conductivity, extracellular ATP concentration, and cell constituents' release after treatment with EOs demonstrated that tested EOs affected the membrane integrity of Campylobacter spp. The results supported high efficiency of cardamom, cumin, and dill weed EOs to inhibit Campylobacter spp. by impairing the bacterial cell membrane.",
"title": ""
}
] |
scidocsrr
|
0fea86753a4674344a7e8aee282da4c1
|
Drivers of consumer-brand identification
|
[
{
"docid": "368f904533e17beec78d347ee8ceabb1",
"text": "A brand community from a customer-experiential perspective is a fabric of relationships in which the customer is situated. Crucial relationships include those between the customer and the brand, between the customer and the firm, between the customer and the product in use, and among fellow customers. The authors delve ethnographically into a brand community and test key findings through quantitative methods. Conceptually, the study reveals insights that differ from prior research in four important ways: First, it expands the definition of a brand community to entities and relationships neglected by previous research. Second, it treats vital characteristics of brand communities, such as geotemporal concentrations and the richness of social context, as dynamic rather than static phenomena. Third, it demonstrates that marketers can strengthen brand communities by facilitating shared customer experiences in ways that alter those dynamic characteristics. Fourth, it yields a new and richer conceptualization of customer loyalty as integration in a brand community.",
"title": ""
}
] |
[
{
"docid": "97b9627380d9a9fc00dfa63661d199f9",
"text": "We study sequences of consumption in which the same item may be consumed multiple times. We identify two macroscopic behavior patterns of repeated consumptions. First, in a given user’s lifetime, very few items live for a long time. Second, the last consumptions of an item exhibit growing inter-arrival gaps consistent with the notion of increasing boredom leading up to eventual abandonment. We then present what is to our knowledge the first holistic model of sequential repeated consumption, covering all observed aspects of this behavior. Our simple and purely combinatorial model includes no planted notion of lifetime distributions or user boredom; nonetheless, the model correctly predicts both of these phenomena. Further, we provide theoretical analysis of the behavior of the model confirming these phenomena. Additionally, the model quantitatively matches a number of microscopic phenomena across a broad range of datasets. Intriguingly, these findings suggest that the observation in a variety of domains of increasing user boredom leading to abandonment may be explained simply by probabilistic conditioning on an extinction event in a simple model, without resort to explanations based on complex human dynamics.",
"title": ""
},
{
"docid": "c3bed90c68e26eaf3b41bd3dc28d1501",
"text": "This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor’s policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.",
"title": ""
},
{
"docid": "5551c139bf9bdb144fabce6a20fda331",
"text": "A common prerequisite for a number of debugging and performanceanalysis techniques is the injection of auxiliary program code into the application under investigation, a process called instrumentation. To accomplish this task, source-code preprocessors are often used. Unfortunately, existing preprocessing tools either focus only on a very specific aspect or use hard-coded commands for instrumentation. In this paper, we examine which basic constructs are required to specify a user-defined routine entry/exit instrumentation. This analysis serves as a basis for a generic instrumentation component working on the source-code level where the instructions to be inserted can be flexibly configured. We evaluate the identified constructs with our prototypical implementation and show that these are sufficient to fulfill the needs of a number of todays’ performance-analysis tools.",
"title": ""
},
{
"docid": "7d38b4b2d07c24fdfb2306116017cd5e",
"text": "Science Technology Engineering, Art, Mathematics (STEAM) is an integration of art into Science Technology Engineering, Mathematics (STEM). Connecting art to science makes learning more effective and innovative. This study aims to determine the increase in mastery of the concept of high school students after the application of STEAM education in learning with the theme of Water and Us. The research method used is one group Pretestposttest design with students of class VII (n = 37) junior high school. The instrument used in the form of question of mastery of concepts in the form of multiple choices amounted to 20 questions and observation sheet of learning implementation. The results of the study show that there is an increase in conceptualization on the theme of Water and Us which is categorized as medium (<g>=0, 46) after the application of the STEAM approach. The conclusion obtained that by applying STEAM approach in learning can improve the mastery of concept",
"title": ""
},
{
"docid": "7e68fe5b6a164359d2389f30686ec049",
"text": "Tracking the articulated 3D motion of the hand has important applications, for example, in human-computer interaction and teleoperation. We present a novel method that can capture a broad range of articulated hand motions at interactive rates. Our hybrid approach combines, in a voting scheme, a discriminative, part-based pose retrieval method with a generative pose estimation method based on local optimization. Color information from a multi-view RGB camera setup along with a person-specific hand model are used by the generative method to find the pose that best explains the observed images. In parallel, our discriminative pose estimation method uses fingertips detected on depth data to estimate a complete or partial pose of the hand by adopting a part-based pose retrieval strategy. This part-based strategy helps reduce the search space drastically in comparison to a global pose retrieval strategy. Quantitative results show that our method achieves state-of-the-art accuracy on challenging sequences and a near-real time performance of 10 fps on a desktop computer.",
"title": ""
},
{
"docid": "c42edb326ec95c257b821cc617e174e6",
"text": "recommendation systems support users and developers of various computer and software systems to overcome information overload, perform information discovery tasks and approximate computation, among others. They have recently become popular and have attracted a wide variety of application scenarios from business process modelling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions. Iman Avazpour Faculty of ICT, Centre for Computing and Engineering Software and Systems (SUCCESS), Swinburne University of Technology, Hawthorn, Victoria 3122, Australia e-mail: iavazpour@swin.",
"title": ""
},
{
"docid": "1700ee1ba5fef2c9efa9a2b8bfa7d6bd",
"text": "This work studies resource allocation in a cloud market through the auction of Virtual Machine (VM) instances. It generalizes the existing literature by introducing combinatorial auctions of heterogeneous VMs, and models dynamic VM provisioning. Social welfare maximization under dynamic resource provisioning is proven NP-hard, and modeled with a linear integer program. An efficient α-approximation algorithm is designed, with α ~ 2.72 in typical scenarios. We then employ this algorithm as a building block for designing a randomized combinatorial auction that is computationally efficient, truthful in expectation, and guarantees the same social welfare approximation factor α. A key technique in the design is to utilize a pair of tailored primal and dual LPs for exploiting the underlying packing structure of the social welfare maximization problem, to decompose its fractional solution into a convex combination of integral solutions. Empirical studies driven by Google Cluster traces verify the efficacy of the randomized auction.",
"title": ""
},
{
"docid": "db10e034790a8bd5af57bce3e4c59547",
"text": "When describing images, humans tend not to talk about the obvious, but rather mention what they find interesting. We argue that abnormalities and deviations from typicalities are among the most important components that form what is worth mentioning. In this paper we introduce the abnormality detection as a recognition problem and show how to model typicalities and, consequently, meaningful deviations from prototypical properties of categories. Our model can recognize abnormalities and report the main reasons of any recognized abnormality. We also show that abnormality predictions can help image categorization. We introduce the abnormality detection dataset and show interesting results on how to reason about abnormalities.",
"title": ""
},
{
"docid": "26787002ed12cc73a3920f2851449c5e",
"text": "This article brings together three current themes in organizational behavior: (1) a renewed interest in assessing person-situation interactional constructs, (2) the quantitative assessment of organizational culture, and (3) the application of \"Q-sort,\" or template-matching, approaches to assessing person-situation interactions. Using longitudinal data from accountants and M.B.A. students and cross-sectional data from employees of government agencies and public accounting firms, we developed and validated an instrument for assessing personorganization fit, the Organizational Culture Profile (OCP). Results suggest that the dimensionality of individual preferences for organizational cultures and the existence of these cultures are interpretable. Further, person-organization fit predicts job satisfaction and organizational commitment a year after fit was measured and actual turnover after two years. This evidence attests to the importance of understanding the fit between individuals' preferences and organizational cultures.",
"title": ""
},
{
"docid": "5ca75490c015685a1fc670b2ee5103ff",
"text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.",
"title": ""
},
{
"docid": "29907ff921e98b28284b703fec4f0170",
"text": "This paper develops a general service sector model of repurchase intention from the consumer theory literature. A key contribution of the structural equation model is the incorporation of customer perceptions of equity and value and customer brand preference into an integrated repurchase intention analysis. The model describes the extent to which customer repurchase intention is influenced by seven important factors – service quality, equity and value, customer satisfaction, past loyalty, expected switching cost and brand preference. The general model is applied to customers of comprehensive car insurance and personal superannuation services. The analysis finds that although perceived quality does not directly affect customer satisfaction, it does so indirectly via customer equity and value perceptions. The study also finds that past purchase loyalty is not directly related to customer satisfaction or current brand preference and that brand preference is an intervening factor between customer satisfaction and repurchase intention. The main factor influencing brand preference was perceived value with customer satisfaction and expected switching cost having less influence. Introduction The objective of this paper is to test a general model which aims to describe the extent to which customer intention to repurchase a service is influenced by customer perceptions of quality, equity and value, customer satisfaction, past loyalty, expected switching cost and brand preference. The objective is important because customer repurchase intention research is largely fragmented and is in need of an empirically verified general theory. Some studies have concentrated on determining the basic antecedent variables to repurchase intention (Hocutt, 1998; Storbacka et al., 1994; Zahorik and Rust, 1992). Other studies, such as Bitner et al. (1990), Bolton and Drew (1991a, b), Boulding et al. (1993), Grayson and Ambler (1999), Liljander and Strandvik (1995), and Price et al. (1995) have considered the single incident, critical encounters and longitudinal interactions or relationships between these variables. Still others have considered the predictive validity of repurchase intention for subsequent repurchase behaviour (Bemmaor, 1995; Mittal and Kamakura, 2001; Morwitz et al., 1993). Despite the fact that research in this area largely relies on stochastic and deterministic approaches to customer retention analysis (Ehrenberg, 1988; Howard, 1977; Lilien et al., 1992), few comprehensive, empirically tested, structural models of the customer retention process are evident in marketing literature. Even the understanding of the interrelationships between customer service perceptions per se, or how these relate to overall service satisfaction appears unclear (Bolton and Drew, 1994; Fornell et al., 1996; Roest and Pieters, 1997; Taylor and Baker, 1994; Zahorik and Rust, 1992). Furthermore, a customer behaviour model, which holistically defines the processes by which customers make a choice between several competing service brands or providers, is still to be developed. Some progress in this direction has been made by the evaluation of known alternatives being factored into customer assessments, via the disconfirmation of expectations (Bearden and Teel, 1983; Bolton and Drew, 1991b; Boulding et al., 1993; Cadotte et al., 1987; Oliver, 1980; Oliver and Bearden, 1985). 
While this approach measures the difference between pre- and post-consumption assessments, it provides only a partial explanation of how customer retention mechanisms might operate (Bagozzi et al., 1999; Mano and Oliver, 1993; Oliver, 1993; Oliver and DeSarbo, 1988; Oliver and Swan, 1989; Price et al., 1995; Westbrook, 1987). This paper examines the following customer repurchase intention issues within the specific service environments of comprehensive car insurance and personal superannuation: What is the impact of customer satisfaction and brand preference on repurchase intention? What is the effect of customer loyalty and switching costs on brand preference? How important is the contribution of perceived value to customer satisfaction and brand preference? What is the impact of perceived equity on customer perceived value and satisfaction? How does perceived quality contribute to customer satisfaction? “The research model” section of this paper outlines the theoretical foundation of the general model, and the propositions arising from the various relationships. The “survey method” section explains the research approach and sample design, establishes the measurement scales and provides confirmatory factor analysis and parameter estimates for the model. The “structural equation analysis” section tests the fit of the general model to the empirical data and a modified model is developed. The modified model is then tested against data from selected customer groups. This paper concludes with sections covering the study findings, the management implications of these findings and suggested avenues for future research. The research model Several researchers have found satisfaction and attitude to be major antecedents of customer repurchase intention (Bearden and Teel, 1983; Innis, 1991; Oliver, 1980, 1981; Roest and Pieters, 1997). When attitude is treated as a post-purchase construct, the general sequence is given in Equation 1. In this context, satisfaction is the overall level of customer pleasure and contentment resulting from experience with the service. Attitude is the customer's positive, neutral or negative learned disposition (often as a result of past evaluative experiences), with respect to the good, service, company, or brand under consideration (Roest and Pieters, 1997). However, the precise relationship between customer learned disposition and customer preference for perceived alternatives remains unclear. In the literature, different terms have been used for similar or closely related preference constructs. Examples of terms used are customer commitment (Storbacka et al., 1994), brand choice (Manrai, 1995), product attitude (Roest and Pieters, 1997) and consumer preference (Mantel and Kardes, 1999). In this paper, the approach taken is that a separate and distinct evaluation of alternatives (brand preference) precedes customer repurchase intention (Manrai, 1995; Storbacka et al., 1994). In the conceptual model developed here the major antecedents to repurchase intention are thus given in Equation 2. The research model, shown in Figure 1, delineates the key factors preceding customer satisfaction and brand preference. Each of the model components is defined as follows: Repurchase intention. The individual's judgement about buying again a designated service from the same company, taking into account his or her current situation and likely circumstances. Brand preference.
The extent to which the customer favours the designated service provided by his or her present company, in comparison to the designated service provided by other companies in his or her consideration set. Expected switching cost. The customer's estimate of the personal loss or sacrifice in time, effort and money associated with the customer changing to another service provider. Customer loyalty. The degree to which the customer has exhibited, over recent years, repeat purchase behaviour of a particular company service; and the significance of that expenditure in terms of the customer's total outlay on that particular type of service. Customer satisfaction. The degree of overall pleasure or contentment felt by the customer, resulting from the ability of the service to fulfil the customer's desires, expectations and needs in relation to the service. Perceived value. The customer's overall appraisal of the net worth of the service, based on the customer's assessment of what is received (benefits provided by the service), and what is given (costs or sacrifice in acquiring and utilising the service). Perceived equity. The customer's overall assessment of the standard of fairness and justice of the company's service transaction and its customer problem and complaint handling process. Perceived quality. The customer's overall assessment of the standard of the service delivery process. The theoretical basis of the research model is derived from several sources. The model is developed from the satisfaction, attitude and intention relationships examined by Oliver (1980, 1981) and from the analyses of customer perceptions of service performance by Cronin and Taylor (1992, 1994), Dodds et al. (1991), Oliver and Swan (1989) and Zeithaml (1988). The model also incorporates the defensive factors to switching identified by Fornell (1992). Analysis of the inter-relationships between customer retention factors can be undertaken at the single transaction (micro) level or at a global (macro) level. The model adopts a macro framework. This is because the customer repurchase decision often depends on a general assessment of the service and supplier, based on multiple service transaction experiences with that supplier (Danaher and Mattsson, 1994; Liljander and Strandvik, 1995). The service attribute of perceived quality is delineated as an important antecedent factor to customer satisfaction (Cronin and Taylor, 1992, 1994; Fornell et al., 1996; Parasuraman et al., 1994a). The other service attributes regarded as important determinants of satisfaction are perceived value (Crosby and Stephens, 1987; Fornell et al., 1996) and perceived equity (Oliver, 1993; Oliver and DeSarbo, 1988; Oliver and Swan, 1989). The model also proposes perceived quality and perceived equity to be antecedents to perceived value (Chang and Wildt, 1994; Dodds et al., 1991; Fornell et al., 1996; Oliver and DeSarbo, 1988; Smith Gooding, 1995; Zeithaml, 1988). There have been many approaches to the measurement of the factors influencing customer satisfaction (Erevelles and Leavitt, 1992). The performance compared to expectations approach (expectations dis",
"title": ""
},
{
"docid": "4c0c6373c40bd42417fa2890fc80986b",
"text": "Regularized inversion methods for image reconstruction are used widely due to their tractability and their ability to combine complex physical sensor models with useful regularity criteria. Such methods were used in the recently developed Plug-and-Play prior method, which provides a framework to use advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem severely limits both the expressiveness of possible regularity conditions and the variety of provably convergent Plug-and-Play denoising operators. In this paper, we introduce the concept of consensus equilibrium (CE), which generalizes regularized inversion to include a much wider variety of regularity operators without the need for an optimization formulation. Consensus equilibrium is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations, which can be approached in multiple ways, including as a fixed point problem that generalizes the ADMM approach used in the Plug-and-Play method. We present the Douglas-Rachford (DR) algorithm for computing the CE solution as a fixed point and prove the convergence of this algorithm under conditions that include denoising operators that do not arise from optimization problems and that may not be nonexpansive. We give several examples to illustrate the idea of consensus equilibrium and the convergence properties of the DR algorithm and demonstrate this method on a sparse interpolation problem using electron microscopy data.",
"title": ""
},
{
"docid": "73efa57fe1d799a1c174d5ede1bcfe8a",
"text": "A growing number of online services, such as Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups. Today, a customer must entirely trust such external services to maintain the integrity of hosted data and return it intact. Unfortunately, no service is infallible. To make storage services accountable for data loss, we present protocols that allow a thirdparty auditor to periodically verify the data stored by a service and assist in returning the data intact to the customer. Most importantly, our protocols are privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the burden of verification from the customer, alleviates both the customer’s and storage service’s fear of data leakage, and provides a method for independent arbitration of data retention contracts.",
"title": ""
},
{
"docid": "e1cdf2b32e2f56664813f44a5f7b713f",
"text": "Multi-object tracking is a crucial problem for autonomous vehicle. Most state-of-the-art approaches adopt the tracking-by-detection strategy, which is a two-step procedure consisting of the detection module and the tracking module. In this paper, we improve both steps. We improve the detection module by incorporating the temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel compressed deep Convolutional Neural Network (CNN) feature based Correlation Filter tracker. By carefully integrating these two modules, the proposed multi-object tracking approach has the ability of re-identification (ReID) once the tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.",
"title": ""
},
{
"docid": "4088b1148b5631f91f012ddc700cc136",
"text": "BACKGROUND\nAny standard skin flap of the body including a detectable or identified perforator at its axis can be safely designed and harvested in a free-style fashion.\n\n\nMETHODS\nFifty-six local free-style perforator flaps in the head and neck region, 33 primary and 23 recycle flaps, were performed in 53 patients. The authors introduced the term \"recycle\" to describe a perforator flap harvested within the borders of a previously transferred flap. A Doppler device was routinely used preoperatively for locating perforators in the area adjacent to a given defect. The final flap design and degree of mobilization were decided intraoperatively, depending on the location of the most suitable perforator and the ability to achieve primary closure of the donor site. Based on clinical experience, the authors suggest a useful classification of local free-style perforator flaps.\n\n\nRESULTS\nAll primary and 20 of 23 recycle free-style perforator flaps survived completely, providing tension-free coverage and a pleasing final contour for patients. In the remaining three recycle cases, the skeletonization of the pedicle resulted in pedicle damage, because of surrounding postradiotherapy scarring and flap failure. All donor sites except one were closed primarily, and all of them healed without any complications.\n\n\nCONCLUSIONS\nThe free-style concept has significantly increased the potential and versatility of the standard local and recycled head and neck flap alternatives for moderate to large defects, providing a more robust, custom-made, tissue-sparing, and cosmetically superior outcome in a one-stage procedure, with minimal donor-site morbidity.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "0abbf8df158969484bcb95579af7be6a",
"text": "Off-policy reinforcement learning is aimed at efficiently using data samples gathered from a policy that is different from the currently optimized policy. A common approach is to use importance sampling techniques for compensating for the bias of value function estimators caused by the difference between the data-sampling policy and the target policy. However, existing off-policy methods often do not take the variance of the value function estimators explicitly into account and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.",
"title": ""
},
{
"docid": "56587879aeb4ecce05513e94bc019956",
"text": "In opportunistic networks, the nodes usually exploit a contact opportunity to perform hop-by-hop routing, since an end-to-end path between the source node and destination node may not exist. Most social-based routing protocols use social information extracted from real-world encounter networks to select an appropriate message relay. A protocol based on encounter history, however, takes time to build up a knowledge database from which to take routing decisions. An opportunistic routing protocol which extracts social information from multiple social networks, can be an alternative approach to avoid suboptimal paths due to partial information on encounters. While contact information changes constantly and it takes time to identify strong social ties, online social network ties remain rather stable and can be used to augment available partial contact information. In this paper, we propose a novel opportunistic routing approach, called ML-SOR (Multi-layer Social Network based Routing), which extracts social network information from multiple social contexts. To select an effective forwarding node, ML-SOR measures the forwarding capability of a node when compared to an encountered node in terms of node centrality, tie strength and link prediction. These metrics are computed by ML-SOR on different social network layers. Trace driven simulations show that ML-SOR, when compared to other schemes, is able to deliver messages with high probability while keeping overhead ratio very small.",
"title": ""
},
{
"docid": "ec49f419b86fc4276ceba06fd0208749",
"text": "In order to organize the large number of products listed in e-commerce sites, each product is usually assigned to one of the multi-level categories in the taxonomy tree. It is a time-consuming and difficult task for merchants to select proper categories within thousan ds of options for the products they sell. In this work, we propose an automatic classification tool to predict the matching category for a given product title and description. We used a combinatio n of two different neural models, i.e., deep belief nets and deep autoencoders, for both titles and descriptions. We implemented a selective reconstruction approach for the input layer during the training of the deep neural networks, in order to scale-out for large-sized sparse feature vectors. GPUs are utilized in order to train neural networks in a reasonable time. We have trained o ur m dels for around 150 million products with a taxonomy tree with at most 5 levels that contains 28,338 leaf categories. Tests with millions of products show that our first prediction s matches 81% of merchants’ assignments, when “others” categories are excluded.",
"title": ""
},
{
"docid": "639afc633f05f54c790077f80c3628b8",
"text": "It has been demonstrated that the sparse representation based framework is one of the most popular and promising ways to handle the single image super-resolution (SISR) issue. However, due to the complexity of image degradation and inevitable existence of noise, the coding coefficients produced by imposing sparse prior only are not precise enough for faithful reconstructions. In order to overcome it, we present an improved SISR reconstruction method based on the proposed bidirectionally aligned sparse representation (BASR) model. In our model, the bidirectional similarities are first modeled and constructed to form a complementary pair of regularization terms. The raw sparse coefficients are additionally aligned to this pair of standards to restrain sparse coding noise and therefore result in better recoveries. On the basis of fast iterative shrinkage-thresholding algorithm, a well-designed mathematic implementation is introduced for solving the proposed BASR model efficiently. Thorough experimental results indicate that the proposed method performs effectively and efficiently, and outperforms many recently published baselines in terms of both objective evaluation and visual fidelity.",
"title": ""
},
{
"docid": "2c6239f99889ff4a44b95af6b745041f",
"text": "Sparseness of user-to-item rating data is one of the major factors that deteriorate the quality of recommender system. To handle the sparsity problem, several recommendation techniques have been proposed that additionally consider auxiliary information to improve rating prediction accuracy. In particular, when rating data is sparse, document modeling-based approaches have improved the accuracy by additionally utilizing textual data such as reviews, abstracts, or synopses. However, due to the inherent limitation of the bag-of-words model, they have difficulties in effectively utilizing contextual information of the documents, which leads to shallow understanding of the documents. This paper proposes a novel context-aware recommendation model, convolutional matrix factorization (ConvMF) that integrates convolutional neural network (CNN) into probabilistic matrix factorization (PMF). Consequently, ConvMF captures contextual information of documents and further enhances the rating prediction accuracy. Our extensive evaluations on three real-world datasets show that ConvMF significantly outperforms the state-of-the-art recommendation models even when the rating data is extremely sparse. We also demonstrate that ConvMF successfully captures subtle contextual difference of a word in a document. Our implementation and datasets are available at http://dm.postech.ac.kr/ConvMF.",
"title": ""
}
] |
scidocsrr
|
76f0513df0e14762b4da085193cc7d1f
|
Enterprise Architecture as Enabler of Organizational Agility - A Municipality Case Study
|
[
{
"docid": "bb5cca7f3d3a7ddcfb6455f3e2cc94a6",
"text": "Many organizations have adopted an Enterprise Architecture (EA) approach because of the potential benefits resulting from a more standardized and coordinated approach to systems development and management, and because of the tighter alignment of business and information technology in support of business strategy execution. At the same time, experience shows that having an effective EA practice is easier said than done and the coordination and implementation efforts can be daunting. While nobody disputes the potential benefits of well architected systems, there is no empirical evidence showing whether the organizational benefits of EA outweigh the coordination and management costs associated with the architecting process. Furthermore, most practitioners we have interviewed can provide technical metrics for internal EA efficiency and effectiveness, but none of our participants were able to provide concrete metrics or evidence about the bottom line impact that EA has on the organization as a whole. In this article we raise key issues associated with the evaluation of the organizational impact of EA and propose a framework for empirical research in this area.",
"title": ""
},
{
"docid": "fad4ff82e9b11f28a70749d04dfbf8ca",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact [email protected]. Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.",
"title": ""
}
] |
[
{
"docid": "70d4545496bfd3b68e092d0ce11be299",
"text": "This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.",
"title": ""
},
{
"docid": "3cc9f615445f3692aa258300d73f57ff",
"text": "In good old-fashioned artificial intelligence (GOFAI), humans specified systems that solved problems. Much of the recent progress in AI has come from replacing human insights by learning. However, learning itself is still usually built by humans – specifically the choice that parameter updates should follow the gradient of a cost function. Yet, in analogy with GOFAI, there is no reason to believe that humans are particularly good at defining such learning systems: we may expect learning itself to be better if we learn it. Recent research in machine learning has started to realize the benefits of that strategy. We should thus expect this to be relevant for neuroscience: how could the correct learning rules be acquired? Indeed, behavioral science has long shown that humans learn-to-learn, which is potentially responsible for their impressive learning abilities. Here we discuss ideas across machine learning, neuroscience, and behavioral science that matter for the principle of learning-to-learn.",
"title": ""
},
{
"docid": "4d52c27f623fdf083d2a5bddb4dfaade",
"text": "The Iron Man media franchise glorifies futuristic interfaces and devices like holographic screens, powerful mobile devices, and heads-up displays. Consequently, a mainstream audience has come to know about and discursively relate to Augmented Reality (AR) technology through fan participation. This paper identifies how Iron Man fans reveal the belief that technology sensationalized in the films and comics may actually become real. Using humanities theories and methods, it argues for a new way to explore potential users' expectations for augmented reality. HCI as a field needs to broaden its focus and attend to fans in terms of their future as consumers and users.",
"title": ""
},
{
"docid": "46ad960f5fe60635c6d556105b5e3607",
"text": "The authors explored the utility of the Difficulties in Emotion Regulation Scale (DERS) in assessing adolescents' emotion regulation. Adolescents (11-17 years; N = 870) completed the DERS and measures of externalizing and internalizing problems. Confirmatory factor analysis suggested a similar factor structure in the adolescent sample of the authors as demonstrated previously among adults. Furthermore, results indicated no gender bias in ratings of DERS factors on three scales (as evidenced by strong factorial gender invariance) and limited gender bias on the other three scales (as evidenced by metric invariance). Female adolescents scored higher on four of six DERS factors than male adolescents. DERS factors were meaningfully related to adolescents' externalizing and internalizing problems. Results suggest that scores on the DERS show promising internal consistency and validity in a community sample of adolescents.",
"title": ""
},
{
"docid": "ca985aa9f64536c339a365b5218ce61f",
"text": "Dependency network measures capture various facets of the dependencies among software modules. For example, betweenness centrality measures how much information flows through a module compared to the rest of the network. Prior studies have shown that these measures are good predictors of post-release failures. However, these studies did not explore the causes for such good performance and did not provide guidance for practitioners to avoid future bugs. In this paper, we closely examine the causes for such performance by replicating prior studies using data from the Eclipse project. Our study shows that a small subset of dependency network measures have a large impact on post-release failure, while other network measures have a very limited impact. We also analyze the benefit of bug prediction in reducing testing cost. Finally, we explore the practical implications of the important network measures.",
"title": ""
},
{
"docid": "8a7bd0858a51380ed002b43b08a1c9f1",
"text": "Unbiased language is a requirement for reference sources like encyclopedias and scientific texts. Bias is, nonetheless, ubiquitous, making it crucial to understand its nature and linguistic realization and hence detect bias automatically. To this end we analyze real instances of human edits designed to remove bias from Wikipedia articles. The analysis uncovers two classes of bias: framing bias, such as praising or perspective-specific words, which we link to the literature on subjectivity; and epistemological bias, related to whether propositions that are presupposed or entailed in the text are uncontroversially accepted as true. We identify common linguistic cues for these classes, including factive verbs, implicatives, hedges, and subjective intensifiers. These insights help us develop features for a model to solve a new prediction task of practical importance: given a biased sentence, identify the bias-inducing word. Our linguistically-informed model performs almost as well as humans tested on the same task.",
"title": ""
},
{
"docid": "45d57f01218522609d6ef93de61ea491",
"text": "We consider the problem of finding a ranking of a set of elements that is “closest to” a given set of input rankings of the elements; more precisely, we want to find a permutation that minimizes the Kendall-tau distance to the input rankings, where the Kendall-tau distance is defined as the sum over all input rankings of the number of pairs of elements that are in a different order in the input ranking than in the output ranking. If the input rankings are permutations, this problem is known as the Kemeny rank aggregation problem. This problem arises for example in building meta-search engines for Web search, aggregating viewers’ rankings of movies, or giving recommendations to a user based on several different criteria, where we can think of having one ranking of the alternatives for each criterion. Many of the approximation algorithms and heuristics that have been proposed in the literature are either positional, comparison sort or local search algorithms. The rank aggregation problem is a special case of the (weighted) feedback arc set problem, but in the feedback arc set problem we use only information about the preferred relative ordering of pairs of elements to find a ranking of the elements, whereas in the case of the rank aggregation problem, we have additional information in the form of the complete input rankings. The positional methods are the only algorithms that use this additional information. Since the rank aggregation problem is NP-hard, none of these algorithms is guaranteed to find the optimal solution, and different algorithms will provide different solutions. We give theoretical and practical evidence that a combination of these different approaches gives algorithms that are superior to the individual algorithms. Theoretically, we give lower bounds on the performance for many of the “pure” methods. Practically, we perform an extensive evaluation of the “pure” algorithms and ∗Institute for Theoretical Computer Science, Tsinghua University, Beijing, China. [email protected]. Research performed in part while the author was at Nature Source Genetics, Ithaca, NY. †Institute for Theoretical Computer Science, Tsinghua University, Beijing, China. [email protected]. Research partly supported by NSF grant CCF-0514628 and performed in part while the author was at the School of Operations Research and Information Engineering at Cornell University, Ithaca, NY. combinations of different approaches. We give three recommendations for which (combination of) methods to use based on whether a user wants to have a very fast, fast or reasonably fast algorithm.",
"title": ""
},
{
"docid": "fd62cb306e6e39e7ead79696591746b2",
"text": "Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopted term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase)-based approaches should perform better than the term-based ones, but many experiments do not support this hypothesis. This paper presents an innovative and effective pattern discovery technique which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.",
"title": ""
},
{
"docid": "b845aaa999c1ed9d99cb9e75dff11429",
"text": "We present a new space-efficient approach, (SparseDTW ), to compute the Dynamic Time Warping (DTW ) distance between two time series that always yields the optimal result. This is in contrast to other known approaches which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series. The more the similarity between the time series the less space required to compute the DTW between them. To the best of our knowledge, all other techniques to speedup DTW, impose apriori constraints and do not exploit similarity characteristics that may be present in the data. We conduct experiments and demonstrate that SparseDTW outperforms previous approaches.",
"title": ""
},
{
"docid": "3181171d92ce0a8d3a44dba980c0cc5f",
"text": "Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.",
"title": ""
},
{
"docid": "cd176e795fe52784e27a1c001979709b",
"text": "[Purpose] The purpose of this study was to identify the influence of relaxation exercises for the masticator muscles on the limited ROM and pain of temporomandibular joint dysfunction (TMD). [Subjects and Methods] The subjects were 10 men and 31 women in their 20s and 30s. They were randomly divided into no treatment, active exercises and relaxation exercise for the masticator muscle groups. The exercise groups performed exercises three times or more a day over a period of four weeks, performing exercise for 10 minutes each time. Before and after the four weeks, all the subjects were measured for ROM, deviation, occlusion, and pain in the temporomandibular joint. [Results] ROM, deviation and pain showed statistically significant in improvements after the intervention in the active exercise and relaxation exercise for the masticator muscle groups. Deviation also showed a statistically significant difference between the active exercise and relaxation exercise groups. [Conclusion] The results verify that as with active exercises, relaxation exercises for the masticatory muscles are an effective treatment for ROM and pain in TMD. Particularly, masticatory muscle relaxation exercises were found to be a treatment that is also effective for deviation.",
"title": ""
},
{
"docid": "08765f109452855227eb85395e4c49b1",
"text": "and on their differing feelings toward the politicians (in this case, across liking, trusting, and feeling affiliated with the candidates). After 16 test runs, the voters did indeed change their attitudes and feelings toward the candidates in different and yet generally realistic ways, and even changed their attitudes about other issues based on what a candidate extolled.",
"title": ""
},
{
"docid": "1331dc5705d4b416054341519126f32f",
"text": "There is a large tradition of work in moral psychology that explores the capacity for moral judgment by focusing on the basic capacity to distinguish moral violations (e.g. hitting another person) from conventional violations (e.g. playing with your food). However, only recently have there been attempts to characterize the cognitive mechanisms underlying moral judgment (e.g. Cognition 57 (1995) 1; Ethics 103 (1993) 337). Recent evidence indicates that affect plays a crucial role in mediating the capacity to draw the moral/conventional distinction. However, the prevailing account of the role of affect in moral judgment is problematic. This paper argues that the capacity to draw the moral/conventional distinction depends on both a body of information about which actions are prohibited (a Normative Theory) and an affective mechanism. This account leads to the prediction that other normative prohibitions that are connected to an affective mechanism might be treated as non-conventional. An experiment is presented that indicates that \"disgust\" violations (e.g. spitting at the table), are distinguished from conventional violations along the same dimensions as moral violations.",
"title": ""
},
{
"docid": "fb84f9d8a88c3afd5e3eb2f290989b72",
"text": "With higher reliability requirements in clusters and data centers, RAID-6 has gained popularity due to its capability to tolerate concurrent failures of any two disks, which has been shown to be of increasing importance in large scale storage systems. Among various implementations of erasure codes in RAID-6, a typical set of codes known as Maximum Distance Separable (MDS) codes aim to offer data protection against disk failures with optimal storage efficiency. However, because of the limitation of horizontal parity or diagonal/anti-diagonal parities used in MDS codes, storage systems based on RAID-6 suffers from unbalanced I/O and thus low performance and reliability. To address this issue, in this paper, we propose a new parity called Horizontal-Diagonal Parity (HDP), which takes advantages of both horizontal and diagonal/anti-diagonal parities. The corresponding MDS code, called HDP code, distributes parity elements uniformly in each disk to balance the I/O workloads. HDP also achieves high reliability via speeding up the recovery under single or double disk failure. Our analysis shows that HDP provides better balanced I/O and higher reliability compared to other popular MDS codes.",
"title": ""
},
{
"docid": "a11f1155f3a9805f7c17284c99eed109",
"text": "This paper presents the architecture and design of a high-performance asynchronous Huffman decoder for compressed-code embedded processors. In such processors, embedded programs are stored in compressed form in instruction ROM, then are decompressed on demand during instruction cache refill. The Huffman decoder is used as a code decompression engine. The circuit is non-pipelined, and is implemented as an iterative self-timed ring. It achieves a high-speed decode rate with very low area overhead. Simulations using Lsim show an average throughput of 32 bits/25 ns on the output side (or 163 MBytes/sec, or 1303 Mbit/sec), corresponding to about 889 Mbit/sec on the input side. The area of the design is extremely small: under 1 mm in a 0.8 micron fullcustom layout. The decoder is estimated to have higher throughput than any comparable synchronous Huffman decoder (after normalizing for feature size and voltage), yet is much smaller than synchronous designs. Its performance is also 83% faster than a recently published asynchronous Huffman decoder using the same technology.",
"title": ""
},
{
"docid": "90c2121fc04c0c8d9c4e3d8ee7b8ecc0",
"text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. The traditional clustering algorithms have been mainly stressed on numerical data, the implicit property of which can be exploited to define distance function between the data points to define similarity measure. The problem of similarity becomes more complex when the data is categorical which do not have a natural ordering of values or can be called as non geometrical attributes. Clustering on relational data sets when majority of its attributes are of categorical types makes interesting facts. No earlier work has been done on clustering categorical attributes of relational data set types making use of the property of functional dependency as parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context based and introduces a new notion of similarity based on dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures of categorical attributes. This novel similarity measure can be used to apply on tuples and their respective values. The important property of categorical domain is that they have smaller number of attribute values. The similarity measure of relational data sets then can be applied to the smaller data sets for efficient results.",
"title": ""
},
{
"docid": "00357ea4ef85efe5cd2080e064ddcd06",
"text": "The cumulative match curve (CMC) is used as a measure of 1: m identification system performance. It judges the ranking capabilities of an identification system. The receiver operating characteristic curve (ROC curve) of a verification system, on the other hand, expresses the quality of a 1:1 matcher. The ROC plots the false accept rate (FAR) of a 1:1 matcher versus the false reject rate (FRR) of the matcher. We show that the CMC is also related to the FAR and FRR of a 1:1 matcher, i.e., the matcher that is used to rank the candidates by sorting the scores. This has as a consequence that when a 1:1 matcher is used for identification, that is, for sorting match scores from high to low, the CMC does not offer any additional information beyond the FAR and FRR curves. The CMC is just another way of displaying the data and can be computed from the FAR and FRR.",
"title": ""
},
{
"docid": "212f128450a141b5b4c83c8c57d14677",
"text": "Local Authority road networks commonly include roads with different functional characteristics and a variety of construction types, which require maintenance solutions tailored to their needs. Given this background, on local road network, pavement management is founded on the experience of the agency engineers and is often constrained by low budgets and a variety of environmental and external requirements. This paper forms part of a research work that investigates the use of digital techniques for obtaining field data in order to increase safety and reduce labour cost requirements using a semi-automated distress collection and measurement system. More specifically, a definition of a distress detection procedure is presented which aims at producing a result complying more closely to the distress identification manuals and protocols. The process comprises the following two steps: Automated pavement image collection. Images are collected using the high speed digital acquisition system of the Mobile Laboratory designed and implemented by the Department of Civil and Environmental Engineering of the University of Catania; Distress Detection. By way of the Pavement Distress Analyser (PDA), a specialised software, images are adjusted to eliminate their optical distortion. Cracks, potholes and patching are automatically detected and subsequently classified by means of an operator assisted approach. An intense, experimental field survey has made it possible to establish that the procedure obtains more consistent distress measurements than a manual survey thus increasing its repeatability, reducing costs and increasing safety during the survey. Moreover, the pilot study made it possible to validate results coming from a survey carried out under normal traffic conditions, concluding that it is feasible to integrate the procedure into a roadway pavement management system.",
"title": ""
},
{
"docid": "9d30cfbc7d254882e92cad01f5bd17c7",
"text": "Data from culture studies have revealed that Enterococcus faecalis is occasionally isolated from primary endodontic infections but frequently recovered from treatment failures. This molecular study was undertaken to investigate the prevalence of E. faecalis in endodontic infections and to determine whether this species is associated with particular forms of periradicular diseases. Samples were taken from cases of untreated teeth with asymptomatic chronic periradicular lesions, acute apical periodontitis, or acute periradicular abscesses, and from root-filled teeth associated with asymptomatic chronic periradicular lesions. DNA was extracted from the samples, and a 16S rDNA-based nested polymerase chain reaction assay was used to identify E. faecalis. This species occurred in seven of 21 root canals associated with asymptomatic chronic periradicular lesions, in one of 10 root canals associated with acute apical periodontitis, and in one of 19 pus samples aspirated from acute periradicular abscesses. Statistical analysis showed that E. faecalis was significantly more associated with asymptomatic cases than with symptomatic ones. E. faecalis was detected in 20 of 30 cases of persistent endodontic infections associated with root-filled teeth. When comparing the frequencies of this species in 30 cases of persistent infections with 50 cases of primary infections, statistical analysis demonstrated that E. faecalis was strongly associated with persistent infections. The average odds of detecting E. faecalis in cases of persistent infections associated with treatment failure were 9.1. The results of this study indicated that E. faecalis is significantly more associated with asymptomatic cases of primary endodontic infections than with symptomatic ones. Furthermore, E. faecalis was much more likely to be found in cases of failed endodontic therapy than in primary infections.",
"title": ""
},
{
"docid": "b81f831c1152bb6a8812ad800324a6cd",
"text": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted to the biomedical domain. These measures were originally based on WordNet, an English lexical database of concepts and relations. In this research, we adapt these measures to the SNOMED-CT ontology of medical concepts. The measures include two path-based measures, and three measures that augment path-based measures with information content statistics from corpora. We also derive a context vector measure based on medical corpora that can be used as a measure of semantic relatedness. These six measures are evaluated against a newly created test bed of 30 medical concept pairs scored by three physicians and nine medical coders. We find that the medical coders and physicians differ in their ratings, and that the context vector measure correlates most closely with the physicians, while the path-based measures and one of the information content measures correlates most closely with the medical coders. We conclude that there is a role both for more flexible measures of relatedness based on information derived from corpora, as well as for measures that rely on existing ontological structures.",
"title": ""
}
] |
scidocsrr
|
46ebfa26fb7981c876cf3c7a2cfae58d
|
Understanding Information
|
[
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: [email protected] 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
}
] |
[
{
"docid": "e59136e0d0a710643a078b58075bd8cd",
"text": "PURPOSE\nEpidemiological evidence suggests that chronic consumption of fruit-based flavonoids is associated with cognitive benefits; however, the acute effects of flavonoid-rich (FR) drinks on cognitive function in the immediate postprandial period require examination. The objective was to investigate whether consumption of FR orange juice is associated with acute cognitive benefits over 6 h in healthy middle-aged adults.\n\n\nMETHODS\nMales aged 30-65 consumed a 240-ml FR orange juice (272 mg) and a calorie-matched placebo in a randomized, double-blind, counterbalanced order on 2 days separated by a 2-week washout. Cognitive function and subjective mood were assessed at baseline (prior to drink consumption) and 2 and 6 h post consumption. The cognitive battery included eight individual cognitive tests. A standardized breakfast was consumed prior to the baseline measures, and a standardized lunch was consumed 3 h post-drink consumption.\n\n\nRESULTS\nChange from baseline analysis revealed that performance on tests of executive function and psychomotor speed was significantly better following the FR drink compared to the placebo. The effects of objective cognitive function were supported by significant benefits for subjective alertness following the FR drink relative to the placebo.\n\n\nCONCLUSIONS\nThese data demonstrate that consumption of FR orange juice can acutely enhance objective and subjective cognition over the course of 6 h in healthy middle-aged adults.",
"title": ""
},
{
"docid": "2690f802022b273d41b3131aa982b91b",
"text": "Deep neural networks are demonstrating excellent performance on several classical vision problems. However, these networks are vulnerable to adversarial examples, minutely modified images that induce arbitrary attacker-chosen output from the network. We propose a mechanism to protect against these adversarial inputs based on a generative model of the data. We introduce a pre-processing step that projects on the range of a generative model using gradient descent before feeding an input into a classifier. We show that this step provides the classifier with robustness against first-order, substitute model, and combined adversarial attacks. Using a min-max formulation, we show that there may exist adversarial examples even in the range of the generator, natural-looking images extremely close to the decision boundary for which the classifier has unjustifiedly high confidence. We show that adversarial training on the generative manifold can be used to make a classifier that is robust to these attacks. Finally, we show how our method can be applied even without a pre-trained generative model using a recent method called the deep image prior. We evaluate our method on MNIST, CelebA and Imagenet and show robustness against the current state of the art attacks.",
"title": ""
},
{
"docid": "1c5e17c7acff27e3b10aecf15c5809e7",
"text": "Recent years witness a growing interest in nonstandard epistemic logics of “knowing whether”, “knowing what”, “knowing how” and so on. These logics are usually not normal, i.e., the standard axioms and reasoning rules for modal logic may be invalid. In this paper, we show that the conditional “knowing value” logic proposed by Wang and Fan [10] can be viewed as a disguised normal modal logic by treating the negation of Kv operator as a special diamond. Under this perspective, it turns out that the original first-order Kripke semantics can be greatly simplified by introducing a ternary relation R i in standard Kripke models which associates one world with two i-accessible worlds that do not agree on the value of constant c. Under intuitive constraints, the modal logic based on such Kripke models is exactly the one studied by Wang and Fan [10,11]. Moreover, there is a very natural binary generalization of the “knowing value” diamond, which, surprisingly, does not increase the expressive power of the logic. The resulting logic with the binary diamond has a transparent normal modal system which sharpens our understanding of the “knowing value” logic and simplifies some previous hard problems.",
"title": ""
},
{
"docid": "0ee27f9045935db4241e9427bed2af59",
"text": "As a new generation of deep-sea Autonomous Underwater Vehicle (AUV), Qianlong I is a 6000m rated glass deep-sea manganese nodules detection AUV which based on the CR01 and the CR02 deep-sea AUVs and developed by Shenyang Institute of Automation, the Chinese Academy of Sciences from 2010. The Qianlong I was tested in the thousand-isles lake in Zhejiang Province of China during November 2012 to March 2013 and the sea trials were conducted in the South China Sea during April 20-May 2, 2013 after the lake tests and the ocean application completed in October 2013. This paper describes two key problems encountered in the process of developing Qianlong I, including the launch and recovery systems development and variable buoyancy system development. Results from the recent lake and sea trails are presented, and future missions and development plans are discussed.",
"title": ""
},
{
"docid": "98d1c35aeca5de703cec468b2625dc72",
"text": "Congenital adrenal hyperplasia was described in London by Phillips (1887) who reported four cases of spurious hermaphroditism in one family. Fibiger (1905) noticed that there was enlargement of the adrenal glands in some infants who had died after prolonged vomiting and dehydration. Butler, Ross, and Talbot (1939) reported a case which showed serum electrolyte changes similar to those of Addison's disease. Further developments had to await the synthesis of cortisone. The work ofWilkins, Lewis, Klein, and Rosemberg (1950) showed that cortisone could alleviate the disorder and suppress androgen secretion. Bartter, Albright, Forbes, Leaf, Dempsey, and Carroll (1951) suggested that, in congenital adrenal hyperplasia, there might be a primary impairment of synthesis of cortisol (hydrocortisone, compound F) and a secondary rise of pituitary adrenocorticotrophin (ACTH) production. This was confirmed by Jailer, Louchart, and Cahill (1952) who showed that ACTH caused little increase in the output of cortisol in such cases. In the same year, Snydor, Kelley, Raile, Ely, and Sayers (1953) found an increased level ofACTH in the blood of affected patients. Studies of enzyme systems were carried out. Jailer, Gold, Vande Wiele, and Lieberman (1955) and Frantz, Holub, and Jailer (1960) produced evidence that the most common site for the biosynthetic block was in the C-21 hydroxylating system. Eberlein and Bongiovanni (1955) showed that there was a C-l 1 hydroxylation defect in patients with the hypertensive form of congenital adrenal hyperplasia, and Bongiovanni (1961) and Bongiovanni and Kellenbenz (1962), showed that in some patients there was a further type of enzyme defect, a 3-(-hydroxysteroid dehydrogenase deficiency, an enzyme which is required early in the metabolic pathway. Prader and Siebenmann (1957) described a female infant who had adrenal insufficiency and congenital lipoid hyperplasia of the",
"title": ""
},
{
"docid": "2466ac1ce3d54436f74b5bb024f89662",
"text": "In this paper we discuss our work on applying media theory to the creation of narrative augmented reality (AR) experiences. We summarize the concepts of remediation and media forms as they relate to our work, argue for their importance to the development of a new medium such as AR, and present two example AR experiences we have designed using these conceptual tools. In particular, we focus on leveraging the interaction between the physical and virtual world, remediating existing media (film, stage and interactive CD-ROM), and building on the cultural expectations of our users.",
"title": ""
},
{
"docid": "bf03f941bcf921a44d0a34ec2161ee34",
"text": "Epidermolytic ichthyosis (EI) is a rare autosomal dominant genodermatosis that presents at birth as a bullous disease, followed by a lifelong ichthyotic skin disorder. Essentially, it is a defective keratinization caused by mutations of keratin 1 (KRT1) or keratin 10 (KRT10) genes, which lead to skin fragility, blistering, and eventually hyperkeratosis. Successful management of EI in the newborn period can be achieved through a thoughtful, directed, and interdisciplinary or multidisciplinary approach that encompasses family support. This condition requires meticulous care to avoid associated morbidities such as infection and dehydration. A better understanding of the disrupted barrier protection of the skin in these patients provides a basis for management with daily bathing, liberal emollients, pain control, and proper nutrition as the mainstays of treatment. In addition, this case presentation will include discussions on the pathophysiology, complications, differential diagnosis, and psychosocial and ethical issues.",
"title": ""
},
{
"docid": "b8b96789191e5afa48bea1d9e92443d5",
"text": "Methionine, cysteine, homocysteine, and taurine are the 4 common sulfur-containing amino acids, but only the first 2 are incorporated into proteins. Sulfur belongs to the same group in the periodic table as oxygen but is much less electronegative. This difference accounts for some of the distinctive properties of the sulfur-containing amino acids. Methionine is the initiating amino acid in the synthesis of virtually all eukaryotic proteins; N-formylmethionine serves the same function in prokaryotes. Within proteins, many of the methionine residues are buried in the hydrophobic core, but some, which are exposed, are susceptible to oxidative damage. Cysteine, by virtue of its ability to form disulfide bonds, plays a crucial role in protein structure and in protein-folding pathways. Methionine metabolism begins with its activation to S-adenosylmethionine. This is a cofactor of extraordinary versatility, playing roles in methyl group transfer, 5'-deoxyadenosyl group transfer, polyamine synthesis, ethylene synthesis in plants, and many others. In animals, the great bulk of S-adenosylmethionine is used in methylation reactions. S-Adenosylhomocysteine, which is a product of these methyltransferases, gives rise to homocysteine. Homocysteine may be remethylated to methionine or converted to cysteine by the transsulfuration pathway. Methionine may also be metabolized by a transamination pathway. This pathway, which is significant only at high methionine concentrations, produces a number of toxic endproducts. Cysteine may be converted to such important products as glutathione and taurine. Taurine is present in many tissues at higher concentrations than any of the other amino acids. It is an essential nutrient for cats.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "372182b4ac2681ceedb9d78e9f38343d",
"text": "A 12-bit 10-GS/s interleaved (IL) pipeline analog-to-digital converter (ADC) is described in this paper. The ADC achieves a signal to noise and distortion ratio (SNDR) of 55 dB and a spurious free dynamic range (SFDR) of 66 dB with a 4-GHz input signal, is fabricated in the 28-nm CMOS technology, and dissipates 2.9 W. Eight pipeline sub-ADCs are interleaved to achieve 10-GS/s sample rate, and mismatches between sub-ADCs are calibrated in the background. The pipeline sub-ADCs employ a variety of techniques to lower power, like avoiding a dedicated sample-and-hold amplifier (SHA-less), residue scaling, flash background calibration, dithering and inter-stage gain error background calibration. A push–pull input buffer optimized for high-frequency linearity drives the interleaved sub-ADCs to enable >7-GHz bandwidth. A fast turn-ON bootstrapped switch enables 100-ps sampling. The ADC also includes the ability to randomize the sub-ADC selection pattern to further reduce residual interleaving spurs.",
"title": ""
},
{
"docid": "eb956188486caa595b7f38d262781af7",
"text": "Due to the competitiveness of the computing industry, software developers are pressured to quickly deliver new code releases. At the same time, operators are expected to update and keep production systems stable at all times. To overcome the development–operations barrier, organizations have started to adopt Infrastructure as Code (IaC) tools to efficiently deploy middleware and applications using automation scripts. These automations comprise a series of steps that should be idempotent to guarantee repeatability and convergence. Rigorous testing is required to ensure that the system idempotently converges to a desired state, starting from arbitrary states. We propose and evaluate a model-based testing framework for IaC. An abstracted system model is utilized to derive state transition graphs, based on which we systematically generate test cases for the automation. The test cases are executed in light-weight virtual machine environments. Our prototype targets one popular IaC tool (Chef), but the approach is general. We apply our framework to a large base of public IaC scripts written by operators, showing that it correctly detects non-idempotent automations.",
"title": ""
},
{
"docid": "b3790611437e1660b7c222adcb26b510",
"text": "There have been increasing interests in the robotics community in building smaller and more agile autonomous micro aerial vehicles (MAVs). In particular, the monocular visual-inertial system (VINS) that consists of only a camera and an inertial measurement unit (IMU) forms a great minimum sensor suite due to its superior size, weight, and power (SWaP) characteristics. In this paper, we present a tightly-coupled nonlinear optimization-based monocular VINS estimator for autonomous rotorcraft MAVs. Our estimator allows the MAV to execute trajectories at 2 m/s with roll and pitch angles up to 30 degrees. We present extensive statistical analysis to verify the performance of our approach in different environments with varying flight speeds.",
"title": ""
},
{
"docid": "7f61235bb8b77376936256dcf251ee0b",
"text": "These practical guidelines for the biological treatment of personality disorders in primary care settings were developed by an international Task Force of the World Federation of Societies of Biological Psychiatry (WFSBP). They embody the results of a systematic review of all available clinical and scientific evidence pertaining to the biological treatment of three specific personality disorders, namely borderline, schizotypal and anxious/avoidant personality disorder in addition to some general recommendations for the whole field. The guidelines cover disease definition, classification, epidemiology, course and current knowledge on biological underpinnings, and provide a detailed overview on the state of the art of clinical management. They deal primarily with biological treatment (including antidepressants, neuroleptics, mood stabilizers and some further pharmacological agents) and discuss the relative significance of medication within the spectrum of treatment strategies that have been tested for patients with personality disorders, up to now. The recommendations should help the clinician to evaluate the efficacy spectrum of psychotropic drugs and therefore to select the drug best suited to the specific psychopathology of an individual patient diagnosed for a personality disorder.",
"title": ""
},
{
"docid": "0122057f9fd813efd9f9e0db308fe8d9",
"text": "Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data.",
"title": ""
},
{
"docid": "5416e2a3f5a1855f19814eecec85092a",
"text": "Code clones are exactly or nearly similar code fragments in the code-base of a software system. Existing studies show that clones are directly related to bugs and inconsistencies in the code-base. Code cloning (making code clones) is suspected to be responsible for replicating bugs in the code fragments. However, there is no study on the possibilities of bug-replication through cloning process. Such a study can help us discover ways of minimizing bug-replication. Focusing on this we conduct an empirical study on the intensities of bug-replication in the code clones of the major clone-types: Type 1, Type 2, and Type 3. According to our investigation on thousands of revisions of six diverse subject systems written in two different programming languages, C and Java, a considerable proportion (i.e., up to 10%) of the code clones can contain replicated bugs. Both Type 2 and Type 3 clones have higher tendencies of having replicated bugs compared to Type 1 clones. Thus, Type 2 and Type 3 clones are more important from clone management perspectives. The extent of bug-replication in the buggy clone classes is generally very high (i.e., 100% in most of the cases). We also find that overall 55% of all the bugs experienced by the code clones can be replicated bugs. Our study shows that replication of bugs through cloning is a common phenomenon. Clone fragments having method-calls and if-conditions should be considered for refactoring with high priorities, because such clone fragments have high possibilities of containing replicated bugs. We believe that our findings are important for better maintenance of software systems, in particular, systems with code clones.",
"title": ""
},
{
"docid": "ea95f4475bb65f7ea0f270387919df47",
"text": "The field of supramolecular chemistry focuses on the non-covalent interactions between molecules that give rise to molecular recognition and self-assembly processes. Since most non-covalent interactions are relatively weak and form and break without significant activation barriers, many supramolecular systems are under thermodynamic control. Hence, traditionally, supramolecular chemistry has focused predominantly on systems at equilibrium. However, more recently, self-assembly processes that are governed by kinetics, where the outcome of the assembly process is dictated by the assembly pathway rather than the free energy of the final assembled state, are becoming topical. Within the kinetic regime it is possible to distinguish between systems that reside in a kinetic trap and systems that are far from equilibrium and require a continuous supply of energy to maintain a stationary state. In particular, the latter systems have vast functional potential, as they allow, in principle, for more elaborate structural and functional diversity of self-assembled systems - indeed, life is a prime example of a far-from-equilibrium system. In this Review, we compare the different thermodynamic regimes using some selected examples and discuss some of the challenges that need to be addressed when developing new functional supramolecular systems.",
"title": ""
},
{
"docid": "4d87a5793186fc1dcaa51abcc06135a7",
"text": "PURPOSE OF REVIEW\nArboviruses have been associated with central and peripheral nervous system injuries, in special the flaviviruses. Guillain-Barré syndrome (GBS), transverse myelitis, meningoencephalitis, ophthalmological manifestations, and other neurological complications have been recently associated to Zika virus (ZIKV) infection. In this review, we aim to analyze the epidemiological aspects, possible pathophysiology, and what we have learned about the clinical and laboratory findings, as well as treatment of patients with ZIKV-associated neurological complications.\n\n\nRECENT FINDINGS\nIn the last decades, case series have suggested a possible link between flaviviruses and development of GBS. Recently, large outbreaks of ZIKV infection in Asia and the Americas have led to an increased incidence of GBS in these territories. Rapidly, several case reports and case series have reported an increase of all clinical forms and electrophysiological patterns of GBS, also including cases with associated central nervous system involvement. Finally, cases suggestive of acute transient polyneuritis, as well as acute and progressive postinfectious neuropathies associated to ZIKV infection have been reported, questioning the usually implicated mechanisms of neuronal injury.\n\n\nSUMMARY\nThe recent ZIKV outbreaks have triggered the occurrence of a myriad of neurological manifestations likely associated to this arbovirosis, in special GBS and its variants.",
"title": ""
},
{
"docid": "f312bfe7f80fdf406af29bfde635fa36",
"text": "In two studies, a newly devised test (framed-line test) was used to examine the hypothesis that individuals engaging in Asian cultures are more capable of incorporating contextual information and those engaging in North American cultures are more capable of ignoring contextual information. On each trial, participants were presented with a square frame, within which was printed a vertical line. Participants were then shown another square frame of the same or different size and asked to draw a line that was identical to the first line in either absolute length (absolute task) or proportion to the height of the surrounding frame (relative task). The results supported the hypothesis: Whereas Japanese were more accurate in the relative task, Americans were more accurate in the absolute task. Moreover, when engaging in another culture, individuals tended to show the cognitive characteristic common in the host culture.",
"title": ""
},
{
"docid": "b213afb537bbc4c476c760bb8e8f2997",
"text": "Recommender system has been demonstrated as one of the most useful tools to assist users' decision makings. Several recommendation algorithms have been developed and implemented by both commercial and open-source recommendation libraries. Context-aware recommender system (CARS) emerged as a novel research direction during the past decade and many contextual recommendation algorithms have been proposed. Unfortunately, no recommendation engines start to embed those algorithms in their kits, due to the special characteristics of the data format and processing methods in the domain of CARS. This paper introduces an open-source Java-based context-aware recommendation engine named as CARSKit which is recognized as the 1st open source recommendation library specifically designed for CARS. It implements the state-of-the-art context-aware recommendation algorithms, and we will showcase the ease with which CARSKit allows recommenders to be configured and evaluated in this demo.",
"title": ""
},
{
"docid": "101c03b85e3cc8518a158d89cc9b3b39",
"text": "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.",
"title": ""
}
] |
scidocsrr
|
401613996c8be16d0e24fdc5932e2923
|
Across Disciplinary and Organizational Boundaries
|
[
{
"docid": "ce3d81c74ef3918222ad7d2e2408bdb0",
"text": "This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.\nA key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies.\nSection 3 summarizes ways of applying a coordination perspective in three different domains:(1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.",
"title": ""
}
] |
[
{
"docid": "cd1fd8340276cc5aab392a7e5136056e",
"text": "We propose a novel two-step mining and optimization framework for inferring the root cause of anomalies that appear in road traffic data. We model road traffic as a time-dependent flow on a network formed by partitioning a city into regions bounded by major roads. In the first step we identify link anomalies based on their deviation from their historical traffic profile. However, link anomalies on their own shed very little light on what caused them to be anomalous. In the second step we take a generative approach by modeling the flow in a network in terms of the origin-destination (OD) matrix which physically relates the latent flow between origin and destination and the observable flow on the links. The key insight is that instead of using all of link traffic as the observable vector we only use the link anomaly vector. By solving an L1 inverse problem we infer the routes (the origin-destination pairs) which gave rise to the link anomalies. Experiments on a very large GPS data set consisting on nearly eight hundred million data points demonstrate that we can discover routes which can clearly explain the appearance of link anomalies. The use of optimization techniques to explain observable anomalies in a generative fashion is, to the best of our knowledge, entirely novel.",
"title": ""
},
{
"docid": "5c2297cf5892ebf9864850dc1afe9cbf",
"text": "In this paper, we propose a novel technique for generating images in the 3D domain from images with high degree of geometrical transformations. By coalescing two popular concurrent methods that have seen rapid ascension to the machine learning zeitgeist in recent years: GANs (Goodfellow et. al.) and Capsule networks (Sabour, Hinton et. al.) we present: CapsGAN. We show that CapsGAN performs better than or equal to traditional CNN based GANs in generating images with high geometric transformations using rotated MNIST. In the process, we also show the efficacy of using capsules architecture in the GANs domain. Furthermore, we tackle the Gordian Knot in training GANs the performance control and training stability by experimenting with using Wasserstein distance (gradient clipping, penalty) and Spectral Normalization. The experimental findings of this paper should propel the application of capsules and GANs in the still exciting and nascent domain of 3D image generation, and plausibly video (frame) generation.",
"title": ""
},
{
"docid": "e769f52b6e10ea1cf218deb8c95f4803",
"text": "To facilitate the task of reading and searching information, it became necessary to find a way to reduce the size of documents without affecting the content. The solution is in Automatic text summarization system, it allows, from an input text to produce another smaller and more condensed without losing relevant data and meaning conveyed by the original text. The research works carried out on this area have experienced lately strong progress especially in English language. However, researches in Arabic text summarization are very few and are still in their beginning. In this paper we expose a literature review of recent techniques and works on automatic text summarization field research, and then we focus our discussion on some works concerning automatic text summarization in some languages. We will discuss also some of the main problems that affect the quality of automatic text summarization systems. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "66844a6bce975f8e3e32358f0e0d1fb7",
"text": "The recent advent of DNA sequencing technologies facilitates the use of genome sequencing data that provide means for more informative and precise classification and identification of members of the Bacteria and Archaea. Because the current species definition is based on the comparison of genome sequences between type and other strains in a given species, building a genome database with correct taxonomic information is of paramount need to enhance our efforts in exploring prokaryotic diversity and discovering novel species as well as for routine identifications. Here we introduce an integrated database, called EzBioCloud, that holds the taxonomic hierarchy of the Bacteria and Archaea, which is represented by quality-controlled 16S rRNA gene and genome sequences. Whole-genome assemblies in the NCBI Assembly Database were screened for low quality and subjected to a composite identification bioinformatics pipeline that employs gene-based searches followed by the calculation of average nucleotide identity. As a result, the database is made of 61 700 species/phylotypes, including 13 132 with validly published names, and 62 362 whole-genome assemblies that were identified taxonomically at the genus, species and subspecies levels. Genomic properties, such as genome size and DNA G+C content, and the occurrence in human microbiome data were calculated for each genus or higher taxa. This united database of taxonomy, 16S rRNA gene and genome sequences, with accompanying bioinformatics tools, should accelerate genome-based classification and identification of members of the Bacteria and Archaea. The database and related search tools are available at www.ezbiocloud.net/.",
"title": ""
},
{
"docid": "5c954622071b23cf53c9c8cfcb65d7c0",
"text": "With the increasing popularity of the Semantic Web, more and more data becomes available in RDF with SPARQL as a query language. Data sets, however, can become too big to be managed and queried on a single server in a scalable way. Existing distributed RDF stores approach this problem using data partitioning, aiming at limiting the communication between servers and exploiting parallelism. This paper proposes a distributed SPARQL engine that combines a graph partitioning technique with workload-aware replication of triples across partitions, enabling efficient query execution even for complex queries from the workload. Furthermore, it discusses query optimization techniques for producing efficient execution plans for ad-hoc queries not contained in the workload.",
"title": ""
},
{
"docid": "d5a4c2d61e7d65f1972ed934f399847e",
"text": "We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor/action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor/action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.",
"title": ""
},
{
"docid": "d0b29493c64e787ed88ad8166d691c3d",
"text": "Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps’ privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps’ compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance. Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps tha lack a privacy policy should have one. Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing such with third parties without disclosing so in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.",
"title": ""
},
{
"docid": "d9123053892ce671665a3a4a1694a57c",
"text": "Visual perceptual learning (VPL) is defined as a long-term improvement in performance on a visual task. In recent years, the idea that conscious effort is necessary for VPL to occur has been challenged by research suggesting the involvement of more implicit processing mechanisms, such as reinforcement-driven processing and consolidation. In addition, we have learnt much about the neural substrates of VPL and it has become evident that changes in visual areas and regions beyond the visual cortex can take place during VPL.",
"title": ""
},
{
"docid": "b5babae9b9bcae4f87f5fe02459936de",
"text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children each with at least four primary molars requiring pulpotomy were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agent. The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2)is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.",
"title": ""
},
{
"docid": "10e92b73fcd1b89e820dc0cdfac1b70f",
"text": "With an aim of provisioning fast, reliable and low cost services to the users, the cloud-computing technology has progressed leaps and bounds. But, adjacent to its development is ever increasing ability of malicious users to compromise its security from outside as well as inside. The Network Intrusion Detection System (NIDS) techniques has gone a long way in detection of known and unknown attacks. The methods of detection of intrusion and deployment of NIDS in cloud environment are dependent on the type of services being rendered by the cloud. It is also important that the cloud administrator is able to determine the malicious intensions of the attackers and various methods of attack. In this paper, we carry out the integration of NIDS module and Honeypot Networks in Cloud environment with objective to mitigate the known and unknown attacks. We also propose method to generate and update signatures from information derived from the proposed integrated model. Using sandboxing environment, we perform dynamic malware analysis of binaries to derive conclusive evidence of malicious attacks.",
"title": ""
},
{
"docid": "f55c50c210079d2a28a0c3fd4f32db9b",
"text": "OBJECTIVE\nMeta-analyses of behavior change (BC) interventions typically find large heterogeneity in effectiveness and small effects. This study aimed to assess the effectiveness of active BC interventions designed to promote physical activity and healthy eating and investigate whether theoretically specified BC techniques improve outcome.\n\n\nDESIGN\nInterventions, evaluated in experimental or quasi-experimental studies, using behavioral and/or cognitive techniques to increase physical activity and healthy eating in adults, were systematically reviewed. Intervention content was reliably classified into 26 BC techniques and the effects of individual techniques, and of a theoretically derived combination of self-regulation techniques, were assessed using meta-regression.\n\n\nMAIN OUTCOME MEASURES\nValid outcomes of physical activity and healthy eating.\n\n\nRESULTS\nThe 122 evaluations (N = 44,747) produced an overall pooled effect size of 0.31 (95% confidence interval = 0.26 to 0.36, I(2) = 69%). The technique, \"self-monitoring,\" explained the greatest amount of among-study heterogeneity (13%). Interventions that combined self-monitoring with at least one other technique derived from control theory were significantly more effective than the other interventions (0.42 vs. 0.26).\n\n\nCONCLUSION\nClassifying interventions according to component techniques and theoretically derived technique combinations and conducting meta-regression enabled identification of effective components of interventions designed to increase physical activity and healthy eating.",
"title": ""
},
{
"docid": "7e4a4e76ba976a24151b243148a2feb4",
"text": "Amodel based clustering procedure for data of mixed type, clustMD, is developed using a latent variable model. It is proposed that a latent variable, following a mixture of Gaussian distributions, generates the observed data of mixed type. The observed data may be any combination of continuous, binary, ordinal or nominal variables. clustMD employs a parsimonious covariance structure for the latent variables, leading to a suite of six clustering models that vary in complexity and provide an elegant and unified approach to clustering mixed data. An expectation maximisation (EM) algorithm is used to estimate clustMD; in the presence of nominal data a Monte Carlo EM algorithm is required. The clustMD model is illustrated by clustering simulated mixed type data and prostate cancer patients, on whom mixed data have been recorded.",
"title": ""
},
{
"docid": "6998297aeba2e02133a6d62aa94508be",
"text": "License Plate Detection and Recognition System is an image processing technique used to identify a vehicle by its license plate. Here we propose an accurate and robust method of license plate detection and recognition from an image using contour analysis. The system is composed of two phases: the detection of the license plate, and the character recognition. The license plate detection is performed for obtaining the candidate region of the vehicle license plate and determined using the edge based text detection technique. In the recognition phase, the contour analysis is used to recognize the characters after segmenting each character. The performance of the proposed system has been tested on various images and provides better results.",
"title": ""
},
{
"docid": "12b115e3b759fcb87956680d6e89d7aa",
"text": "The calibration system presented in this article enables to calculate optical parameters i.e. intrinsic and extrinsic of both thermal and visual cameras used for 3D reconstruction of thermal images. Visual cameras are in stereoscopic set and provide a pair of stereo images of the same object which are used to perform 3D reconstruction of the examined object [8]. The thermal camera provides information about temperature distribution on the surface of an examined object. In this case the term of 3D reconstruction refers to assigning to each pixel of one of the stereo images (called later reference image) a 3D coordinate in the respective camera reference frame [8]. The computed 3D coordinate is then re-projected on to the thermograph and thus to the known 3D position specific temperature is assigned. In order to remap the 3D coordinates on to thermal image it is necessary to know the position of thermal camera against visual camera and therefore a calibration of the set of the three cameras must be performed. The presented calibration system includes special calibration board (fig.1) whose characteristic points of well known position are recognizable both by thermal and visual cameras. In order to detect calibration board characteristic points’ image coordinates, especially in thermal camera, a new procedure was designed.",
"title": ""
},
{
"docid": "d8bd48a231374a82f31e6363881335c4",
"text": "Adversarial examples are inputs to machine learning models designed to cause the model to make a mistake. They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation. In NLP, however, most example generation strategies produce input text by using known, pre-specified semantic transformations, requiring significant manual effort and in-depth understanding of the problem and domain. In this paper, we investigate the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in Natural Language Inference (NLI). We reduce the problem of identifying such adversarial examples to a combinatorial optimisation problem, by maximising a quantity measuring the degree of violation of such constraints and by using a language model for generating linguisticallyplausible examples. Furthermore, we propose a method for adversarially regularising neural NLI models for incorporating background knowledge. Our results show that, while the proposed method does not always improve results on the SNLI and MultiNLI datasets, it significantly and consistently increases the predictive accuracy on adversarially-crafted datasets – up to a 79.6% relative improvement – while drastically reducing the number of background knowledge violations. Furthermore, we show that adversarial examples transfer among model architectures, and that the proposed adversarial training procedure improves the robustness of NLI models to adversarial examples.",
"title": ""
},
{
"docid": "f945b645e492e2b5c6c2d2d4ea6c57ae",
"text": "PURPOSE\nThe aim of this review was to look at relevant data and research on the evolution of ventral hernia repair.\n\n\nMETHODS\nResources including books, research, guidelines, and online articles were reviewed to provide a concise history of and data on the evolution of ventral hernia repair.\n\n\nRESULTS\nThe evolution of ventral hernia repair has a very long history, from the recognition of ventral hernias to its current management, with significant contributions from different authors. Advances in surgery have led to more cases of ventral hernia formation, and this has required the development of new techniques and new materials for ventral hernia management. The biocompatibility of prosthetic materials has been important in mesh development. The functional anatomy and physiology of the abdominal wall has become important in ventral hernia management. New techniques in abdominal wall closure may prevent or reduce the incidence of ventral hernia in the future.\n\n\nCONCLUSION\nThe management of ventral hernia is continuously evolving as it responds to new demands and new technology in surgery.",
"title": ""
},
{
"docid": "edf548598375ea1e36abd57dd3bad9c7",
"text": "processes associated with social identity. Group identification, as self-categorization, constructs an intragroup prototypicality gradient that invests the most prototypical member with the appearance of having influence; the appearance arises because members cognitively and behaviorally conform to the prototype. The appearance of influence becomes a reality through depersonalized social attraction processes that makefollowers agree and comply with the leader's ideas and suggestions. Consensual social attraction also imbues the leader with apparent status and creates a status-based structural differentiation within the group into leader(s) and followers, which has characteristics ofunequal status intergroup relations. In addition, afundamental attribution process constructs a charismatic leadership personality for the leader, which further empowers the leader and sharpens the leader-follower status differential. Empirical supportfor the theory is reviewed and a range of implications discussed, including intergroup dimensions, uncertainty reduction and extremism, power, and pitfalls ofprototype-based leadership.",
"title": ""
},
{
"docid": "23ba216f846eab3ff8c394ad29b507bf",
"text": "The emergence of large-scale freeform shapes in architecture poses big challenges to the fabrication of such structures. A key problem is the approximation of the design surface by a union of patches, so-called panels, that can be manufactured with a selected technology at reasonable cost, while meeting the design intent and achieving the desired aesthetic quality of panel layout and surface smoothness. The production of curved panels is mostly based on molds. Since the cost of mold fabrication often dominates the panel cost, there is strong incentive to use the same mold for multiple panels. We cast the major practical requirements for architectural surface paneling, including mold reuse, into a global optimization framework that interleaves discrete and continuous optimization steps to minimize production cost while meeting user-specified quality constraints. The search space for optimization is mainly generated through controlled deviation from the design surface and tolerances on positional and normal continuity between neighboring panels. A novel 6-dimensional metric space allows us to quickly compute approximate inter-panel distances, which dramatically improves the performance of the optimization and enables the handling of complex arrangements with thousands of panels. The practical relevance of our system is demonstrated by paneling solutions for real, cutting-edge architectural freeform design projects.",
"title": ""
},
{
"docid": "5d8ad5dd91a0f59112809ee6dc154e0e",
"text": "In this work we propose a neural network based image descriptor suitable for image patch matching, which is an important task in many computer vision applications. Our approach is influenced by recent success of deep convolutional neural networks (CNNs) in object detection and classification tasks. We develop a model which maps the raw input patch to a low dimensional feature vector so that the distance between representations is small for similar patches and large otherwise. As a distance metric we utilize L2 norm, i.e. Euclidean distance, which is fast to evaluate and used in most popular hand-crafted descriptors, such as SIFT. According to the results, our approach outperforms state-of-the-art L2-based descriptors and can be considered as a direct replacement of SIFT. In addition, we conducted experiments with batch normalization and histogram equalization as a preprocessing method of the input data. The results confirm that these techniques further improve the performance of the proposed descriptor. Finally, we show promising preliminary results by appending our CNNs with recently proposed spatial transformer networks and provide a visualisation and interpretation of their impact.",
"title": ""
},
{
"docid": "b1f3c073ec058b0b73c524aa2d381e5f",
"text": "A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the bla CARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homologous analysis showed that bla CARB-17 like genes were more conservative than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this bla CARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of bla CARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by bla CARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification.",
"title": ""
}
] |
scidocsrr
|
b8da19fad461631e51beaf0db2eb9356
|
Combining ontologies for requirements elicitation
|
[
{
"docid": "9309ce05609d1cbdadcdc89fe8937473",
"text": "There is an increase use of ontology-driven approaches to support requirements engineering (RE) activities, such as elicitation, analysis, specification, validation and management of requirements. However, the RE community still lacks a comprehensive understanding of how ontologies are used in RE process. Thus, the main objective of this work is to investigate and better understand how ontologies support RE as well as identify to what extent they have been applied to this field. In order to meet our goal, we conducted a systematic literature review (SLR) to identify the primary studies on the use of ontologies in RE, following a predefined review protocol. We then identified the main RE phases addressed, the requirements modelling styles that have been used in conjunction with ontologies, the types of requirements that have been supported by the use of ontologies and the ontology languages that have been adopted. We also examined the types of contributions reported and looked for evidences of the benefits of ontology-driven RE. In summary, the main findings of this work are: (1) there are empirical evidences of the benefits of using ontologies in RE activities both in industry and academy, specially for reducing ambiguity, inconsistency and incompleteness of requirements; (2) the majority of studies only partially address the RE process; (3) there is a great diversity of RE modelling styles supported by ontologies; (4) most studies addressed only functional requirements; (5) several studies describe the use/development of tools to support different types of ontology-driven RE approaches; (6) about half of the studies followed W3C recommendations on ontology-related languages; and (7) a great variety of RE ontologies were identified; nevertheless, none of them has been broadly adopted by the community. Finally, we conclude this work by showing several promising research opportunities that are quite important and interesting but underexplored in current research and practice.",
"title": ""
}
] |
[
{
"docid": "40ce0aeda9fe2aaf9e7907bf5b4568d3",
"text": "This paper descr ibes an implementation of a wireless mobile ad hoc network with radio nodes mounted at fixed sites, on ground vehicles, and in small (10kg) UAVs. The ad hoc networking allows any two nodes to communicate either directly or through an arbitrary number of other nodes which act as relays. We envision two scenar ios for this type of network. In the first, the UAV acts as a prominent radio node that connects disconnected ground radios. In the second, the networking enables groups of UAVs to communicate with each other to extend small UAVs' operational scope and range. The network consists of mesh network radios assembled from low-cost commercial off the shelf components. The radio is an IEEE 802.11b (WiFi) wireless inter face and is controlled by an embedded computer . The network protocol is an implementation of the Dynamic Source Routing ad hoc networking protocol. The radio is mounted either in an environmental enclosure for outdoor fixed and vehicle mounting or directly in our custom built UAVs. A monitor ing architecture has been embedded into the radios for detailed per formance character ization and analysis. This paper descr ibes these components and per formance results measured at an outdoor test range.",
"title": ""
},
{
"docid": "c337226d663e69ecde67ff6f35ba7654",
"text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.",
"title": ""
},
{
"docid": "da1f6c51918080b4fdebe8fcda7f36d7",
"text": "We propose the propagation filter as a novel image filtering operator, with the goal of smoothing over neighboring image pixels while preserving image context like edges or textural regions. In particular, our filter does not to utilize explicit spatial kernel functions as bilateral and guided filters do. We will show that our propagation filter can be viewed as a robust estimator, which minimizes the expected difference between the filtered and desirable image outputs. We will also relate propagation filtering to belief propagation, and suggest techniques if further speedup of the filtering process is necessary. In our experiments, we apply our propagation filter to a variety of applications such as image denoising, smoothing, fusion, and high-dynamic-range (HDR) compression. We will show that improved performance over existing image filters can be achieved.",
"title": ""
},
{
"docid": "efc6daba6a41478f79b3a150274e6af0",
"text": "Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanism thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps, leg damage adaptations, as well as climbing over high obstacles. Furthermore, we demonstrate that the newly developed recurrent network based approach to online forward models outperforms the adaptive neuron forward models, which have hitherto been the state of the art, to model a subset of similar walking behaviors in walking robots.",
"title": ""
},
{
"docid": "e26a625e45954d9e9260743c5888a2eb",
"text": "This paper proposes a framework for understanding the benefits of CRM systems. Analysis of multiple case studies is used to validate and extend the framework. The framework will provide practitioners with a means of defining objectives for CRM projects and for conducting post-implementation reviews. It will also provide academics with a systematic approach to exploring CRM system benefits.",
"title": ""
},
{
"docid": "850223b7efdea78735c8226582a2b67d",
"text": "In this paper, the performance of Long Range (LoRa) Internet of Things (IoT) technology is investigated. By considering Chirp Spread Spectrum (CSS) technique of LoRa, an approximation of the Bit Error Rate (BER) is presented and evaluated through intensive simulations. Unlike previous works which present the BER of LoRa in terms of the ratio of energy ber bit to noise ratio only without any proofing, our presented work expresses BER in terms of LoRa's modulation patterns such as the spreading factor, the code rate, the symbol frequency and the SNR. Numerical results are carried out in order to investigate the LoRa performance and to illustrate the accuracy of the new BER expression.",
"title": ""
},
{
"docid": "61c9e301973b555e32c88a19fcce6752",
"text": "This is a tutorial on logic programming and Prolog appropriate for a course on programming languages for students familiar with imperative programming.",
"title": ""
},
{
"docid": "a723657b2d042ad0fec5129860091192",
"text": "In this paper, we develop a new deep neural network which can extract discriminative and generalizable global descriptors from the raw 3D point cloud. Specifically, two novel modules, Adaptive Local Feature Extraction and Graph-based Neighborhood Aggregation, are designed and integrated into our network. This contributes to extract the local features adequately, reveal the spatial distribution of the point cloud, and find out the local structure and neighborhood relations of each part in a large-scale point cloud with an end-to-end manner. Furthermore, we utilize the network output for point cloud based analysis and retrieval tasks to achieve large-scale place recognition and environmental analysis. We tested our approach on the Oxford RobotCar dataset. The results for place recognition increased the existing state-of-the-art result (PointNetVLAD) from 81.01% to 94.92%. Moreover, we present an application to analyze the large-scale environment by evaluating the uniqueness of each location in the map, which can be applied to localization and loop-closure tasks, which are crucial for robotics and self-driving applications.",
"title": ""
},
{
"docid": "3432d7e904f96973522a46934a6ceb82",
"text": "The increase in funding for e-learning is a good move that requires monitoring, as e-learning, like any other information systems, is a long-term investment with uncertainty for returns. This necessitates the need for e-learning evaluation to determine post adoption success. The evaluation of e-learning is a complex process, that requires comprehensive tools for measuring post adoption success. Subsequently, leading the study to investigate approaches utilised for performing post adoption e-learning success. Thereafter, proposing an adapted IS success integrated model for measuring post adoption e-learning success in developing country context, specifically, South Africa. A systematic literature review method was adopted to achieve an inductive study. The study objectives influenced the key words employed for searching relevant literature. They also drove the decision to follow thematic analysis method. Through a systematic review the study discovered that the Delone & Mclean IS success model is the most adopted and applied for measuring post adoption e-learning success. The model is also received as an effective tool to comprehensively gauge post adoption e-learning success. Additionally, the model is flexible and modifiable to suite different contexts. The study suggests further investigation and use of the adapted IS success model within HEIs to comprehensively assess post adoption e-learning success in South Africa. Future studies should be focused on conducting in-depth literature review on approaches utilised for assessing e-learning success post adoption. They should also do testing to determine the suitable process flow and constructs of an evaluation model for developing country contexts.",
"title": ""
},
{
"docid": "96db5cbe83ce9fbee781b8cc26d97fc8",
"text": "We present a novel method to obtain a 3D Euclidean reconstruction of both the background and moving objects in a video sequence. We assume that, multiple objects are moving rigidly on a ground plane observed by a moving camera. The video sequence is first segmented into static background and motion blobs by a homography-based motion segmentation method. Then classical \"Structure from Motion\" (SfM) techniques are applied to obtain a Euclidean reconstruction of the static background. The motion blob corresponding to each moving object is treated as if there were a static object observed by a hypothetical moving camera, called a \"virtual camera\". This virtual camera shares the same intrinsic parameters with the real camera but moves differently due to object motion. The same SfM techniques are applied to estimate the 3D shape of each moving object and the pose of the virtual camera. We show that the unknown scale of moving objects can be approximately determined by the ground plane, which is a key contribution of this paper. Another key contribution is that we prove that the 3D motion of moving objects can be solved from the virtual camera motion with a linear constraint imposed on the object translation. In our approach, a planartranslation constraint is formulated: \"the 3D instantaneous translation of moving objects must be parallel to the ground plane\". Results on real-world video sequences demonstrate the effectiveness and robustness of our approach.",
"title": ""
},
{
"docid": "90b1d0a8670e74ff3549226acd94973e",
"text": "Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web.",
"title": ""
},
{
"docid": "43f9cd44dee709339fe5b11eb73b15b6",
"text": "Mutual interference of radar systems has been identified as one of the major challenges for future automotive radar systems. In this work the interference of frequency (FMCW) and phase modulated continuous wave (PMCW) systems is investigated by means of simulations. All twofold combinations of the aforementioned systems are considered. The interference scenario follows a typical use-case from the well-known MOre Safety for All by Radar Interference Mitigation (MOSARIM) study. The investigated radar systems operate with similar system parameters to guarantee a certain comparability, but with different waveform durations, and chirps with different slopes and different phase code sequences, respectively. Since the effects in perfect synchrony are well understood, we focus on the cases where both systems exhibit a certain asynchrony. It is shown that the energy received from interferers can cluster in certain Doppler bins in the range-Doppler plane when systems exhibit a slight asynchrony.",
"title": ""
},
{
"docid": "98c286ed333b19a8aa5c811ca4e03505",
"text": "Empirical evidence suggests that neural networks with ReLU activations generalize better with over-parameterization. However, there is currently no theoretical analysis that explains this observation. In this work, we study a simplified learning task with over-parameterized convolutional networks that empirically exhibits the same qualitative phenomenon. For this setting, we provide a theoretical analysis of the optimization and generalization performance of gradient descent. Specifically, we prove data-dependent sample complexity bounds which show that overparameterization improves the generalization performance of gradient descent.",
"title": ""
},
{
"docid": "9e67148718b994c60d9b8fce1b18ad17",
"text": "Images with high resolution are desirable in many applications such as medical imaging, video surveillance, astronomy etc. In medical imaging, images are obtained for medical investigative purposes and for providing information about the anatomy, the physiologic and metabolic activities of the volume below the skin. Medical imaging is an important diagnosis instrument to determine the presence of certain diseases. Therefore increasing the image resolution should significantly improve the diagnosis ability for corrective treatment. Furthermore, a better resolution may substantially improve automatic detection and image segmentation results. The arrival of digital medical imaging technologies such as Computerized Tomography (CT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI) etc. has revolutionized modern medicine. Despite the advances in acquisition technology and the performance of optimized reconstruction algorithms over the two last decades, it is not easy to obtain an image at a desired resolution due to imaging environments, the limitations of physical imaging systems as well as quality-limiting factors such as Noise and Blur. A solution to this problem is the use of Super Resolution (SR) techniques which can be used for processing of such images. Various methods have been described over the years to generate and form algorithms which can be used for building on this concept of Super resolution. This paper details few of the types of medical imaginary, various techniques used to perform super resolution and the current trends which are being followed for the implementation of this concept.",
"title": ""
},
{
"docid": "b151d236ce17b4d03b384a29dbb91330",
"text": "To investigate the blood supply to the nipple areola complex (NAC) on thoracic CT angiograms (CTA) to improve breast pedicle design in reduction mammoplasty. In a single centre, CT scans of the thorax were retrospectively reviewed for suitability by a cardiothoracic radiologist. Suitable scans had one or both breasts visible in extended fields, with contrast enhancement of breast vasculature in a female patient. The arterial sources, intercostal space perforated, glandular/subcutaneous course, vessel entry point, and the presence of periareolar anastomoses were recorded for the NAC of each breast. From 69 patients, 132 breasts were suitable for inclusion. The most reproducible arterial contribution to the NAC was perforating branches arising from the internal thoracic artery (ITA) (n = 108, 81.8%), followed by the long thoracic artery (LTA) (n = 31, 23.5%) and anterior intercostal arteries (AI) (n = 21, 15.9%). Blood supply was superficial versus deep in (n = 86, 79.6%) of ITA sources, (n = 28, 90.3%) of LTA sources, and 10 (47.6%) of AI sources. The most vascularly reliable breast pedicle would be asymmetrical in 7.9% as a conservative estimate. We suggest that breast CT angiography can provide valuable information about NAC blood supply to aid customised pedicle design, especially in high-risk, large-volume breast reductions where the risk of vascular-dependent complications is the greatest and asymmetrical dominant vasculature may be present. Superficial ITA perforator supplies are predominant in a majority of women, followed by LTA- and AIA-based sources, respectively.",
"title": ""
},
{
"docid": "f2a82b5c783286106227f936f79903a0",
"text": "It is now almost a century since Piper used a string galvanometer to study muscle activity and initiated the first studies in electromyography (EMG). But not many volumes have emerged since then to describe the fascinating developments in the field of muscle physiology and diagnostics. Such a synthesis is long overdue, and to accomplish this goal, Merletti and Parker have elicited contributions from an internationally famed group of EMG research experts. As a result, Electromyography: Physiology, Engineering and Non-Invasive Applications has materialized as a pioneering volume that captures the remarkable revolution in the field of skeletal muscle electrodiagnostics and research. Most of the book’s 18 chapters are very good or outstanding. Repetitions are present, but they appear to be didactic rather than distracting. The introductory chapter offers superb reading on the basic biophysical and (electro)physiological aspects of EMG signal generation. The chapters on EMG methodologies, decomposition techniques, surface biophysical issues, signal conditioning and extraction, simulation/modeling, and myoelectric manifestations, written by senior authors, are superlative. The book concludes with the description of various advancements and relevant clinical applications in the areas ranging from neurology and ergonomics to exercise physiology, from rehabilitation medicine to biofeedback and prostheses control. The chapters are succinct and contain numerous clinical pearls of wisdom. A few exceptions are the chapters on signal extraction, EMG modeling and simulation, and myoelectric manifestations, which are wordy and, though informative, not as authoritative as the others. Applications are generally well covered, but the depth of the practical information useful to busy researchers varies from excellent, as in the instances of ergonomics, movement and gait analysis, and rehabilitation medicine, to brittle, as in the instance of exercise physiology. One criticism of the book is that little attention is focused on the beguiling issue of mechanomyography (vibromyography), which is developing as a contestant with superior muscle diagnostic potential against EMG. Overall, this textbook is a harmonious work that can certainly be appreciated by graduate students and researchers from different backgrounds such as bioengineering, life sciences, exercise sciences, sports health, neuro(physio)logy, and occupational medicine. Electromyography is certainly the best single reference book currently available in the field and more importantly, this book comes with the assurance that what is presented is reliable, objective, and up to date.",
"title": ""
},
{
"docid": "9381ba0001262dd29d7ca74a98a56fc7",
"text": "Despite several advances in information retrieval systems and user interfaces, the specification of queries over text-based document collections remains a challenging problem. Query specification with keywords is a popular solution. However, given the widespread adoption of gesture-driven interfaces such as multitouch technologies in smartphones and tablets, the lack of a physical keyboard makes query specification with keywords inconvenient. We present BinGO, a novel gestural approach to querying text databases that allows users to refine their queries using a swipe gesture to either \"like\" or \"dislike\" candidate documents as well as express the reasons they like or dislike a document by swiping through automatically generated \"reason bins\". Such reasons refine a user's query with additional keywords. We present an online and efficient bin generation algorithm that presents reason bins at gesture articulation. We motivate and describe BinGo's unique interface design choices. Based on our analysis and user studies, we demonstrate that query specification by swiping through reason bins is easy and expressive.",
"title": ""
},
{
"docid": "7a1f409eea5e0ff89b51fe0a26d6db8d",
"text": "A multi-agent system consisting of <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math></inline-formula> agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the <italic>multi-agent collision avoidance problem</italic>, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.",
"title": ""
},
{
"docid": "4016ad494a953023f982b8a4876bc8c1",
"text": "Visual tracking is one of the most important field of computer vision. It has immense number of applications ranging from surveillance to hi-fi military applications. This paper is based on the application developed for automatic visual tracking and fire control system for anti-aircraft machine gun (AAMG). Our system mainly consists of camera, as visual sensor; mounted on a 2D-moving platform attached with 2GHz embedded system through RS-232 and AAMG mounted on the same moving platform. Camera and AAMG are both bore-sighted. Correlation based template matching algorithm has been used for automatic visual tracking. This is the algorithm used in civilian and military automatic target recognition, surveillance and tracking systems. The algorithm does not give robust performance in different environments, especially in clutter and obscured background, during tracking. So, motion and prediction algorithms have been integrated with it to achieve robustness and better performance for real-time tracking. Visual tracking is also used to calculate lead angle, which is a vital component of such fire control systems. Lead is angular correction needed to compensate for the target motion during the time of flight of the projectile, to accurately hit the target. Although at present lead computation is not robust due to some limitation as lead calculation mostly relies on gunner intuition. Even then by the integrated implementation of lead angle with visual tracking and control algorithm for moving platform, we have been able to develop a system which detects tracks and destroys the target of interest.",
"title": ""
},
{
"docid": "3e63c8a5499966f30bd3e6b73494ff82",
"text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.",
"title": ""
}
] |
scidocsrr
|
016bf355adcc396c31dacc83da145b0e
|
Personality as a predictor of Business Social Media Usage: an Empirical Investigation of Xing Usage Patterns
|
[
{
"docid": "627b14801c8728adf02b75e8eb62896f",
"text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.",
"title": ""
},
{
"docid": "ee6d70f4287f1b43e1c36eba5f189523",
"text": "Received: 10 March 2008 Revised: 31 May 2008 2nd Revision: 27 July 2008 Accepted: 11 August 2008 Abstract For more than a century, concern for privacy (CFP) has co-evolved with advances in information technology. The CFP refers to the anxious sense of interest that a person has because of various types of threats to the person’s state of being free from intrusion. Research studies have validated this concept and identified its consequences. For example, research has shown that the CFP can have a negative influence on the adoption of information technology; but little is known about factors likely to influence such concern. This paper attempts to fill that gap. Because privacy is said to be a part of a more general ‘right to one’s personality’, we consider the so-called ‘Big Five’ personality traits (agreeableness, extraversion, emotional stability, openness to experience, and conscientiousness) as factors that can influence privacy concerns. Protection motivation theory helps us to explain this influence in the context of an emerging pervasive technology: location-based services. Using a survey-based approach, we find that agreeableness, conscientiousness, and openness to experience each affect the CFP. These results have implications for the adoption, the design, and the marketing of highly personalized new technologies. European Journal of Information Systems (2008) 17, 387–402. doi:10.1057/ejis.2008.29",
"title": ""
},
{
"docid": "5a5fbde8e0e264410fe23322a9070a39",
"text": "By asking users of career-oriented social networking sites I investigated their job search behavior. For further IS-theorizing I integrated the number of a user's contacts as an own construct into Venkatesh's et al. UTAUT2 model, which substantially rose its predictive quality from 19.0 percent to 80.5 percent concerning the variance of job search success. Besides other interesting results I found a substantial negative relationship between the number of contacts and job search success, which supports the experience of practitioners but contradicts scholarly findings. The results are useful for scholars and practitioners.",
"title": ""
}
] |
[
{
"docid": "a4f2a82daf86314363ceeac34cba7ed9",
"text": "As a vital task in natural language processing, relation classification aims to identify relation types between entities from texts. In this paper, we propose a novel Att-RCNN model to extract text features and classify relations by combining recurrent neural network (RNN) and convolutional neural network (CNN). This network structure utilizes RNN to extract higher level contextual representations of words and CNN to obtain sentence features for the relation classification task. In addition to this network structure, both word-level and sentence-level attention mechanisms are employed in Att-RCNN to strengthen critical words and features to promote the model performance. Moreover, we conduct experiments on four distinct datasets: SemEval-2010 task 8, SemEval-2018 task 7 (two subtask datasets), and KBP37 dataset. Compared with the previous public models, Att-RCNN has the overall best performance and achieves the highest $F_{1}$ score, especially on the KBP37 dataset.",
"title": ""
},
{
"docid": "fcf2fd920ac463e505e68aa02baef795",
"text": "Channel modeling is a critical topic when considering designing, learning, or evaluating the performance of any communications system. Most prior work in designing or learning new modulation schemes has focused on using highly simplified analytic channel models such as additive white Gaussian noise (AWGN), Rayleigh fading channels or similar. Recently, we proposed the usage of a generative adversarial networks (GANs) to jointly approximate a wireless channel response model (e.g. from real black box measurements) and optimize for an efficient modulation scheme over it using machine learning. This approach worked to some degree, but was unable to produce accurate probability distribution functions (PDFs) representing the stochastic channel response. In this paper, we focus specifically on the problem of accurately learning a channel PDF using a variational GAN, introducing an architecture and loss function which can accurately capture stochastic behavior. We illustrate where our prior method failed and share results capturing the performance of such as system over a range of realistic channel distributions.",
"title": ""
},
{
"docid": "681b46b159c7b5df2b1bf99e9f0064fd",
"text": "Purpose – The purpose of this paper is to examine the factors within the technology-organization-environment (TOE) framework that affect the decision to adopt electronic commerce (EC) and extent of EC adoption, as well as adoption and non-adoption of different EC applications within smalland medium-sized enterprises (SMEs). Design/methodology/approach – A questionnaire-based survey was conducted to collect data from 235 managers or owners of manufacturing SMEs in Iran. The data were analyzed by employing factorial analysis and relevant hypotheses were derived and tested by multiple and logistic regression analysis. Findings – EC adoption within SMEs is affected by perceived relative advantage, perceived compatibility, CEO’s innovativeness, information intensity, buyer/supplier pressure, support from technology vendors, and competition. Similarly, description on determinants of adoption and non-adoption of different EC applications has been provided. Research limitations/implications – Cross-sectional data of this research tend to have certain limitations when it comes to explaining the direction of causality of the relationships among the variables, which will change overtime. Practical implications – The findings offer valuable insights to managers, IS experts, and policy makers responsible for assisting SMEs with entering into the e-marketplace. Vendors should collaborate with SMEs to enhance the compatibility of EC applications with these businesses. To enhance the receptiveness of EC applications, CEOs, innovativeness and perception toward EC advantages should also be aggrandized. Originality/value – This study is perhaps one of the first to use a wide range of variables in the light of TOE framework to comprehensively assess EC adoption behavior, both in terms of initial and post-adoption within SMEs in developing countries, as well adoption and non-adoption of simple and advanced EC applications such as electronic supply chain management systems.",
"title": ""
},
{
"docid": "fdd790d33300c19cb0c340903e503b02",
"text": "We present a simple method for evergrowing extraction of predicate paraphrases from news headlines in Twitter. Analysis of the output of ten weeks of collection shows that the accuracy of paraphrases with different support levels is estimated between 60-86%. We also demonstrate that our resource is to a large extent complementary to existing resources, providing many novel paraphrases. Our resource is publicly available, continuously expanding based on daily news.",
"title": ""
},
{
"docid": "8a35d871317a372445a5f25eb7610e77",
"text": "Wireless Sensor Networks (WSNs) have their own unique nature of distributed resources and dynamic topology. This introduces very special requirements that should be met by the proposed routing protocols for the WSNs. A Wireless Sensor Network routing protocol is a standard which controls the number of nodes that come to an agreement about the way to route packets between all the computing devices in mobile wireless networks. Today, wireless networks are becoming popular and many routing protocols have been proposed in the literature. Considering these protocols we made a survey on the WSNs energy-efficient routing techniques which are used for Health Care Communication Systems concerning especially the Flat Networks Protocols that have been developed in recent years. Then, as related work, we discuss each of the routing protocols belonging to this category and conclude with a comparison of them.",
"title": ""
},
{
"docid": "6e993c4f537dfb8c73980dd56aead6d8",
"text": "A novel compact 4 × 4 Butler matrix using only microstrip couplers and a crossover is proposed in this letter. Compared with the conventional Butler matrix, the proposed one avoids the interconnecting mismatch loss and imbalanced amplitude introduced by the phase shifter. The measurements show accurate phase differences of 45±0.8° and -135±0.9° with an amplitude imbalance less than 0.4 dB. The 10 dB return loss bandwidth is 20.1%.",
"title": ""
},
{
"docid": "ffef3f247f0821eee02b8d8795ddb21c",
"text": "A broadband polarization reconfigurable rectenna is proposed, which can operate in three polarization modes. The receiving antenna of the rectenna is a polarization reconfigurable planar monopole antenna. By installing switches on the feeding network, the antenna can switch to receive electromagnetic (EM) waves with different polarizations, including linear polarization (LP), right-hand and left-hand circular polarizations (RHCP/LHCP). To achieve stable conversion efficiency of the rectenna (nr) in all the modes within a wide frequency band, a tunable matching network is inserted between the rectifying circuit and the antenna. The measured nr changes from 23.8% to 31.9% in the LP mode within 5.1-5.8 GHz and from 22.7% to 24.5% in the CP modes over 5.8-6 GHz. Compared to rectennas with conventional broadband matching network, the proposed rectenna exhibits more stable conversion efficiency.",
"title": ""
},
{
"docid": "f26d34a762ce2c8ffd1c92ec0a86d56a",
"text": "Despite recent interest in digital fabrication, there are still few algorithms that provide control over how light propagates inside a solid object. Existing methods either work only on the surface or restrict themselves to light diffusion in volumes. We use multi-material 3D printing to fabricate objects with embedded optical fibers, exploiting total internal reflection to guide light inside an object. We introduce automatic fiber design algorithms together with new manufacturing techniques to route light between two arbitrary surfaces. Our implicit algorithm optimizes light transmission by minimizing fiber curvature and maximizing fiber separation while respecting constraints such as fiber arrival angle. We also discuss the influence of different printable materials and fiber geometry on light propagation in the volume and the light angular distribution when exiting the fiber. Our methods enable new applications such as surface displays of arbitrary shape, touch-based painting of surfaces, and sensing a hemispherical light distribution in a single shot.",
"title": ""
},
{
"docid": "441a6a879e0723c00f48796fd4bb1a91",
"text": "Recent research on Low Power Wide Area Network (LPWAN) technologies which provide the capability of serving massive low power devices simultaneously has been very attractive. The LoRaWAN standard is one of the most successful developments. Commercial pilots are seen in many countries around the world. However, the feasibility of large scale deployments, for example, for smart city applications need to be further investigated. This paper provides a comprehensive case study of LoRaWAN to show the feasibility, scalability, and reliability of LoRaWAN in realistic simulated scenarios, from both technical and economic perspectives. We develop a Matlab based LoRaWAN simulator to offer a software approach of performance evaluation. A practical LoRaWAN network covering Greater London area is implemented. Its performance is evaluated based on two typical city monitoring applications. We further present an economic analysis and develop business models for such networks, in order to provide a guideline for commercial network operators, IoT vendors, and city planners to investigate future deployments of LoRaWAN for smart city applications.",
"title": ""
},
{
"docid": "bee35be37795d274dfbb185036fb8ae9",
"text": "This paper presents a human--machine interface to control exoskeletons that utilizes electrical signals from the muscles of the operator as the main means of information transportation. These signals are recorded with electrodes attached to the skin on top of selected muscles and reflect the activation of the observed muscle. They are evaluated by a sophisticated but simplified biomechanical model of the human body to derive the desired action of the operator. A support action is computed in accordance to the desired action and is executed by the exoskeleton. The biomechanical model fuses results from different biomechanical and biomedical research groups and performs a sensible simplification considering the intended application. Some of the model parameters reflect properties of the individual human operator and his or her current body state. A calibration algorithm for these parameters is presented that relies exclusively on sensors mounted on the exoskeleton. An exoskeleton for knee joint support was designed and constructed to verify the model and to investigate the interaction between operator and machine in experiments with force support during everyday movements.",
"title": ""
},
{
"docid": "631b6c1bce729a25c02f499464df7a4f",
"text": "Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences.",
"title": ""
},
{
"docid": "5dad2c804c4718b87ae6ee9d7cc5a054",
"text": "The masquerade attack, where an attacker takes on the identity of a legitimate user to maliciously utilize that user’s privileges, poses a serious threat to the security of information systems. Such attacks completely undermine traditional security mechanisms due to the trust imparted to user accounts once they have been authenticated. Many attempts have been made at detecting these attacks, yet achieving high levels of accuracy remains an open challenge. In this paper, we discuss the use of a specially tuned sequence alignment algorithm, typically used in bioinformatics, to detect instances of masquerading in sequences of computer audit data. By using the alignment algorithm to align sequences of monitored audit data with sequences known to have been produced by the user, the alignment algorithm can discover areas of similarity and derive a metric that indicates the presence or absence of masquerade attacks. Additionally, we present several scoring systems, methods for accommodating variations in user behavior, and heuristics for decreasing the computational requirements of the algorithm. Our technique is evaluated against the standard masquerade detection dataset provided by Schonlau et al. [14, 13], and the results show that the use of the sequence alignment technique provides, to our knowledge, the best results of all masquerade detection techniques to date.",
"title": ""
},
{
"docid": "3157970218dc3761576345c0e01e3121",
"text": "This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu",
"title": ""
},
{
"docid": "e79abaaa50d8ab8938f1839c7e4067f9",
"text": "We review the objectives and techniques used in the control of horizontal axis wind turbines at the individual turbine level, where controls are applied to the turbine blade pitch and generator. The turbine system is modeled as a flexible structure operating in the presence of turbulent wind disturbances. Some overview of the various stages of turbine operation and control strategies used to maximize energy capture in below rated wind speeds is given, but emphasis is on control to alleviate loads when the turbine is operating at maximum power. After reviewing basic turbine control objectives, we provide an overview of the common basic linear control approaches and then describe more advanced control architectures and why they may provide significant advantages.",
"title": ""
},
{
"docid": "c99f6ba5851e497206d444d0780a3ef0",
"text": "Digital backchannel systems have been proven useful to help a lecturer gather real-time online feedback from students in a lecture environment. However, the large number of posts made during a lecture creates a major hurdle for the lecturer to promptly analyse them and take actions accordingly in time. To tackle this problem, we propose a solution that analyses the sentiment of students' feedback and visualises the morale trend of the student population to the lecturer in real time. In this paper, we present the user interface for morale visualisation and playback of ranked posts as well as the techniques for sentiment analysis and morale computation.",
"title": ""
},
{
"docid": "a10b0a69ba7d3f902590b35cf0d5ea32",
"text": "This article distills insights from historical, sociological, and psychological perspectives on marriage to develop the suffocation model of marriage in America. According to this model, contemporary Americans are asking their marriage to help them fulfill different sets of goals than in the past. Whereas they ask their marriage to help them fulfill their physiological and safety needs much less than in the past, they ask it to help them fulfill their esteem and self-actualization needs much more than in the past. Asking the marriage to help them fulfill the latter, higher level needs typically requires sufficient investment of time and psychological resources to ensure that the two spouses develop a deep bond and profound insight into each other’s essential qualities. Although some spouses are investing sufficient resources—and reaping the marital and psychological benefits of doing so—most are not. Indeed, they are, on average, investing less than in the past. As a result, mean levels of marital quality and personal well-being are declining over time. According to the suffocation model, spouses who are struggling with an imbalance between what they are asking from their marriage and what they are investing in it have several promising options for corrective action: intervening to optimize their available resources, increasing their investment of resources in the marriage, and asking less of the marriage in terms of facilitating the fulfillment of spouses’ higher needs. Discussion explores the implications of the suffocation model for understanding dating and courtship, sociodemographic variation, and marriage beyond American’s borders.",
"title": ""
},
{
"docid": "72bbd468c00ae45979cce3b771e4c2bf",
"text": "Twitter is a popular microblogging and social networking service with over 100 million users. Users create short messages pertaining to a wide variety of topics. Certain topics are highlighted by Twitter as the most popular and are known as “trending topics.” In this paper, we will outline methodologies of detecting and identifying trending topics from streaming data. Data from Twitter’s streaming API will be collected and put into documents of equal duration. Data collection procedures will allow for analysis over multiple timespans, including those not currently associated with Twitter-identified trending topics. Term frequency-inverse document frequency analysis and relative normalized term frequency analysis are performed on the documents to identify the trending topics. Relative normalized term frequency analysis identifies unigrams, bigrams, and trigrams as trending topics, while term frequcny-inverse document frequency analysis identifies unigrams as trending topics.",
"title": ""
},
{
"docid": "753c52924fadee65697f09d00b4bb187",
"text": "Although labelled graphical, many modelling languages represent important model parts as structured text. We benefit from sophisticated text editors when we use programming languages, but we neglect the same technology when we edit the textual parts of graphical models. Recent advances in generative engineering of textual model editors make the development of such sophisticated text editors practical, even for the smallest textual constructs of graphical languages. In this paper, we present techniques to embed textual model editors into graphical model editors and prove our approach for EMF-based textual editors and graphical editors created with GMF.",
"title": ""
},
{
"docid": "86e0c7b70de40fcd5179bf3ab67bc3a4",
"text": "The development of a scale to assess drug and other treatment effects on severely mentally retarded individuals was described. In the first stage of the project, an initial scale encompassing a large number of behavior problems was used to rate 418 residents. The scale was then reduced to an intermediate version, and in the second stage, 509 moderately to profoundly retarded individuals were rated. Separate factor analyses of the data from the two samples resulted in a five-factor scale comprising 58 items. The factors of the Aberrant Behavior Checklist have been labeled as follows: (I) Irritability, Agitation, Crying; (II) Lethargy, Social Withdrawal; (III) Stereotypic Behavior; (IV) Hyperactivity, Noncompliance; and (V) Inappropriate Speech. Average subscale scores were presented for the instrument, and the results were compared with empirically derived rating scales of childhood psychopathology and with factor analytic work in the field of mental retardation.",
"title": ""
}
] |
scidocsrr
|
1fe02f64d20bd188e4b5e086afa854cf
|
Utilizing correlated node mobility for efficient DTN routing
|
[
{
"docid": "1ec4415f1ff6dd2da304cba01e4d6e0c",
"text": "In disruption-tolerant networks (DTNs), network topology constantly changes and end-to-end paths can hardly be sustained. However, social network properties are observed in many DTNs and tend to be stable over time. To utilize the social network properties to facilitate packet forwarding, we present LocalCom, a community-based epidemic forwarding scheme that efficiently detects the community structure using limited local information and improves the forwarding efficiency based on the community structure. We define similarity metrics according to nodes’ encounter history to depict the neighboring relationship between each pair of nodes. A distributed algorithm which only utilizes local information is then applied to detect communities, and the formed communities have strong intra-community connections. We also present two schemes to mark and prune gateways that connect communities to control redundancy and facilitate inter-community packet forwarding. Extensive real-trace-driven simulation results are presented to support the effectiveness of our scheme.",
"title": ""
}
] |
[
{
"docid": "54abb89b518916b86b306c4a6996dc5c",
"text": "Recent clinical trials of gene therapy have shown remarkable therapeutic benefits and an excellent safety record. They provide evidence for the long-sought promise of gene therapy to deliver 'cures' for some otherwise terminal or severely disabling conditions. Behind these advances lie improved vector designs that enable the safe delivery of therapeutic genes to specific cells. Technologies for editing genes and correcting inherited mutations, the engagement of stem cells to regenerate tissues and the effective exploitation of powerful immune responses to fight cancer are also contributing to the revitalization of gene therapy.",
"title": ""
},
{
"docid": "ed2464f8cf0495e10d8b2a75a7d8bc3b",
"text": "Personalized services such as news recommendations are becoming an integral part of our digital lives. The problem is that they extract a steep cost in terms of privacy. The service providers collect and analyze user's personal data to provide the service, but can infer sensitive information about the user in the process. In this work we ask the question \"How can we provide personalized news recommendation without sharing sensitive data with the provider?\"\n We propose a local private intelligence assistance framework (PrIA), which collects user data and builds a profile about the user and provides recommendations, all on the user's personal device. It decouples aggregation and personalization: it uses the existing aggregation services on the cloud to obtain candidate articles but makes the personalized recommendations locally. Our proof-of-concept implementation and small scale user study shows the feasibility of a local news recommendation system. In building a private profile, PrIA avoids sharing sensitive information with the cloud-based recommendation service. However, the trade-off is that unlike cloud-based services, PrIA cannot leverage collective knowledge from large number of users. We quantify this trade-off by comparing PrIA with Google's cloud-based recommendation service. We find that the average precision of PrIA's recommendation is only 14% lower than that of Google's service. Rather than choose between privacy or personalization, this result motivates further study of systems that can provide both with acceptable trade-offs.",
"title": ""
},
{
"docid": "877e7654a4e42ab270a96e87d32164fd",
"text": "The presence of gender stereotypes in many aspects of society is a well-known phenomenon. In this paper, we focus on studying such stereotypes and bias in Hindi movie industry (Bollywood). We analyze movie plots and posters for all movies released since 1970. The gender bias is detected by semantic modeling of plots at inter-sentence and intrasentence level. Different features like occupation, introduction of cast in text, associated actions and descriptions are captured to show the pervasiveness of gender bias and stereotype in movies. We derive a semantic graph and compute centrality of each character and observe similar bias there. We also show that such bias is not applicable for movie posters where females get equal importance even though their character has little or no impact on the movie plot. Furthermore, we explore the movie trailers to estimate on-screen time for males and females and also study the portrayal of emotions by gender in them. The silver lining is that our system was able to identify 30 movies over last 3 years where such stereotypes were broken.",
"title": ""
},
{
"docid": "24c744337d831e541f347bbdf9b6b48a",
"text": "Modelling and animation of crawler UGV's caterpillars is a complicated task, which has not been completely resolved in ROS/Gazebo simulators. In this paper, we proposed an approximation of track-terrain interaction of a crawler UGV, perform modelling and simulation of Russian crawler robot \"Engineer\" within ROS/Gazebo and visualize its motion in ROS/RViz software. Finally, we test the proposed model in heterogeneous robot group navigation scenario within uncertain Gazebo environment.",
"title": ""
},
{
"docid": "4621f0bd002f8bd061dd0b224f27977c",
"text": "Organisations increasingly perceive their employees as a great asset that needs to be cared for; however, at the same time, they view employees as one of the biggest potential threats to their cyber security. Employees are widely acknowledged to be responsible for security breaches in organisations, and it is important that these are given as much attention as are technical issues. A significant number of researchers have argued that non-compliance with information security policy is one of the major challenges facing organisations. This is primarily considered to be a human problem rather than a technical issue. Thus, it is not surprising that employees are one of the major underlying causes of breaches in information security. In this paper, academic literature and reports of information security institutes relating to policy compliance are reviewed. The objective is to provide an overview of the key challenges surrounding the successful implementation of information security policies. A further aim is to investigate the factors that may have an influence upon employees' behaviour in relation to information security policy. As a result, challenges to information security policy have been classified into four main groups: security policy promotion; noncompliance with security policy; security policy management and updating; and shadow security. Furthermore, the factors influencing behaviour have been divided into organisational and human factors. Ultimately, this paper concludes that continuously subjecting users to targeted awareness raising and dynamically monitoring their adherence to information security policy should increase the compliance level.",
"title": ""
},
{
"docid": "f2d2979ca63d47ba33fffb89c16b9499",
"text": "Shor and Grover demonstrated that a quantum computer can outperform any classical computer in factoring numbers and in searching a database by exploiting the parallelism of quantum mechanics. Whereas Shor's algorithm requires both superposition and entanglement of a many-particle system, the superposition of single-particle quantum states is sufficient for Grover's algorithm. Recently, the latter has been successfully implemented using Rydberg atoms. Here we propose an implementation of Grover's algorithm that uses molecular magnets, which are solid-state systems with a large spin; their spin eigenstates make them natural candidates for single-particle systems. We show theoretically that molecular magnets can be used to build dense and efficient memory devices based on the Grover algorithm. In particular, one single crystal can serve as a storage unit of a dynamic random access memory device. Fast electron spin resonance pulses can be used to decode and read out stored numbers of up to 105, with access times as short as 10-10 seconds. We show that our proposal should be feasible using the molecular magnets Fe8 and Mn12.",
"title": ""
},
{
"docid": "e60813a8d102dc818ebe7db75c39a4f8",
"text": "OBJECTIVE\nThe behavioral binaural masking level difference (BMLD) is believed to reflect brain stem processing. However, this conflicts with transient auditory evoked potential research that indicates the auditory brain stem and middle latency responses do not demonstrate the BMLD. The objective of the present study is to investigate the brain stem and cortical mechanisms underlying the BMLD in humans using the brain stem and cortical auditory steady-state responses (ASSRs).\n\n\nDESIGN\nA 500-Hz pure tone, amplitude-modulated (AM) at 80 Hz and 7 (or 13) Hz, was used to elicit brain stem and cortical ASSRs, respectively. The masker was a 200-Hz-wide noise centered on 500 Hz. Eleven adult subjects with normal hearing were tested. Both ASSR (brain stem and cortical) and behavioral thresholds for diotic AM stimuli (when the signal and noise are in phase binaurally: SoNo) and dichotic AM stimuli (when either the signal or noise is 180 degrees out-of-phase between the two ears: SpiNo, SoNpi) were investigated. ASSR and behavioral BMLDs were obtained by subtracting the threshold for the dichotic stimuli from that for the diotic stimuli, respectively. Effects for modulation rate, signal versus noise phase changes, and behavioral versus ASSR measure on the BMLD were investigated.\n\n\nRESULTS\nBehavioral BMLDs (mean = 8.5 to 10.5 dB) obtained are consistent with results from past research. The ASSR results are similar to the pattern of results previously found for the transient auditory brain stem responses and the N1-P2 cortical auditory evoked potential, in that only the cortical ASSRs (7 or 13 Hz) demonstrate BMLDs (mean = 5.8 dB); the brain stem ASSRs (80 Hz) (mean = 1.5 dB) do not. The ASSR results differ from the previous transient N1-P2 studies, however, in that the cortical ASSRs show a BMLD only when there is a change in the signal interaural phase, but not for changes of noise interaural phase.\n\n\nCONCLUSIONS\nResults suggest that brain processes underlying the BMLD occur either in a different pathway or beyond the brain stem auditory processing underlying the 80-Hz ASSR. Results also suggest that the cortical ASSRs have somewhat different neural sources than the transient N1-P2 responses, and that they may reflect the output of neural populations that previous research has shown to be insensitive to binaural differences in noise.",
"title": ""
},
{
"docid": "6fd71fe20e959bfdde866ff54b2b474b",
"text": "The IETF developed the RPL routing protocol for Low power and Lossy Networks (LLNs). RPL allows for automated setup and maintenance of the routing tree for a meshed network using a common objective, such as energy preservation or most stable routes. To handle failing nodes and other communication disturbances, RPL includes a number of error correction functions for such situations. These error handling mechanisms, while maintaining a functioning routing tree, introduce an additional complexity to the routing process. Being a relatively new protocol, the effect of the error handling mechanisms within RPL needs to be analyzed. This paper presents an experimental analysis of RPL’s error correction mechanisms by using the Contiki RPL implementation along with an SNMP agent to monitor the performance of RPL.",
"title": ""
},
{
"docid": "bffc44d02edaa8a699c698185e143d22",
"text": "Photoplethysmography (PPG) technology has been used to develop small, wearable, pulse rate sensors. These devices, consisting of infrared light-emitting diodes (LEDs) and photodetectors, offer a simple, reliable, low-cost means of monitoring the pulse rate noninvasively. Recent advances in optical technology have facilitated the use of high-intensity green LEDs for PPG, increasing the adoption of this measurement technique. In this review, we briefly present the history of PPG and recent developments in wearable pulse rate sensors with green LEDs. The application of wearable pulse rate monitors is discussed.",
"title": ""
},
{
"docid": "66f0474d3f68a8a3b4bbc721a0607e38",
"text": "Binary Division is one of the most crucial and silicon-intensive and of immense importance in the field of hardware implementation. A Divider is one of the key hardware blocks in most of applications such as digital signal processing, encryption and decryption algorithms in cryptography and in other logical computations. Being sequential type of operation, it is more prominent in terms of computational complexity and latency. This paper deals with the novel division algorithm for single precision floating point division Verilog Code is written and implemented on Virtex-5 FPGA series. Power dissipation has been reduced. Moreover, significant improvement has been observed in terms of area-utilisation and latency bounds. KeywordsSingle precision, Binary Division, Long Division, Vedic, Virtex, FPGA, IEEE-754.",
"title": ""
},
{
"docid": "116463e16452d6847c94f662a90ac2ef",
"text": "The ubiquity of mobile devices with global positioning functionality (e.g., GPS and AGPS) and Internet connectivity (e.g., 3G andWi-Fi) has resulted in widespread development of location-based services (LBS). Typical examples of LBS include local business search, e-marketing, social networking, and automotive traffic monitoring. Although LBS provide valuable services for mobile users, revealing their private locations to potentially untrusted LBS service providers pose privacy concerns. In general, there are two types of LBS, namely, snapshot and continuous LBS. For snapshot LBS, a mobile user only needs to report its current location to a service provider once to get its desired information. On the other hand, a mobile user has to report its location to a service provider in a periodic or on-demand manner to obtain its desired continuous LBS. Protecting user location privacy for continuous LBS is more challenging than snapshot LBS because adversaries may use the spatial and temporal correlations in the user's location samples to infer the user's location information with higher certainty. Such user location trajectories are also very important for many applications, e.g., business analysis, city planning, and intelligent transportation. However, publishing such location trajectories to the public or a third party for data analysis could pose serious privacy concerns. Privacy protection in continuous LBS and trajectory data publication has increasingly drawn attention from the research community and industry. In this survey, we give an overview of the state-of-the-art privacy-preserving techniques in these two problems.",
"title": ""
},
{
"docid": "b4ed15850674851fb7e479b7181751d7",
"text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"title": ""
},
{
"docid": "a00065c171175b84cf299718d0b29dde",
"text": "Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.",
"title": ""
},
{
"docid": "b5f2072ebee7b06bf14981f3a328ec67",
"text": "Scenario generation is an important step in the operation and planning of power systems with high renewable penetrations. In this work, we proposed a data-driven approach for scenario generation using generative adversarial networks, which is based on two interconnected deep neural networks. Compared with existing methods based on probabilistic models that are often hard to scale or sample from, our method is data-driven, and captures renewable energy production patterns in both temporal and spatial dimensions for a large number of correlated resources. For validation, we use wind and solar times-series data from NREL integration data sets. We demonstrate that the proposed method is able to generate realistic wind and photovoltaic power profiles with full diversity of behaviors. We also illustrate how to generate scenarios based on different conditions of interest by using labeled data during training. For example, scenarios can be conditioned on weather events (e.g., high wind day, intense ramp events, or large forecasts errors) or time of the year (e.g., solar generation for a day in July). Because of the feedforward nature of the neural networks, scenarios can be generated extremely efficiently without sophisticated sampling techniques.",
"title": ""
},
{
"docid": "73c7c4ddfa01fb2b14c6a180c3357a55",
"text": "Neurodevelopmental treatment according to Dr. K. and B. Bobath can be supplemented by hippotherapy. At proper control and guidance, an improvement in posture tone, inhibition of pathological movement patterns, facilitation of normal automatical reactions and the promotion of sensorimotor perceptions is achieved. By adjustment to the swaying movements of the horse, the child feels how to retain straightening alignment, symmetry and balance. By pleasure in this therapy, the child can be motivated to satisfactory cooperation and accepts the therapy horse as its friend. The results of hippotherapy for 27 children afflicted with cerebral palsy permit a conclusion as to the value of this treatment for movement and behaviour disturbance to the drawn.",
"title": ""
},
{
"docid": "342b72bf32937104ae80ae275c8c9585",
"text": "In this paper, we introduce a Radio Frequency IDentification (RFID) based smart shopping system, KONARK, which helps users to checkout items faster and to track purchases in real-time. In parallel, our solution also provides the shopping mall owner with information about user interest on particular items. The central component of KONARK system is a customized shopping cart having a RFID reader which reads RFID tagged items. To provide check-out facility, our system detects in-cart items with almost 100% accuracy within 60s delay by exploiting the fact that the physical level information (RSSI, phase, doppler, read rate etc.) of in-cart RFID tags are different than outside tags. KONARK also detects user interest with 100% accuracy by exploiting the change in physical level parameters of RFID tag on the object user interacted with. In general, KONARK has been shown to perform with reasonably high accuracy in different mobility speeds in a mock-up of a shopping mall isle.",
"title": ""
},
{
"docid": "648a5479933eb4703f1d2639e0c3b5c7",
"text": "The Surgery Treatment Modality Committee of the Korean Gynecologic Oncologic Group (KGOG) has determined to develop a surgical manual to facilitate clinical trials and to improve communication between investigators by standardizing and precisely describing operating procedures. The literature on anatomic terminology, identification of surgical components, and surgical techniques were reviewed and discussed in depth to develop a surgical manual for gynecologic oncology. The surgical procedures provided here represent the minimum requirements for participating in a clinical trial. These procedures should be described in the operation record form, and the pathologic findings obtained from the procedures should be recorded in the pathologic report form. Here, we focused on radical hysterectomy and lymphadenectomy, and we developed a KGOG classification for those conditions.",
"title": ""
},
{
"docid": "9d24bc6143bdb22692d0c40f38307612",
"text": "This paper proposes a new image denoising approach using adaptive signal modeling and adaptive soft-thresholding. It improves the image quality by regularizing all the patches in image based on distribution modeling in transform domain. Instead of using a global model for all patches, it employs content adaptive models to address the non-stationarity of image signals. The distribution model of each patch is estimated individually and can vary for different transform bands and for different patch locations. In particular, we allow the distribution model for each individual patch to have non-zero expectation. To estimate the expectation and variance parameters for the transform bands of a particular patch, we exploit the non-local correlation of image and collect a set of similar patches as data samples to form the distribution. Irrelevant patches are excluded so that this non-local based modeling is more accurate than global modeling. Adaptive soft-thresholding is employed since we observed that the distribution of non-local samples can be approximated by Laplacian distribution. Experimental results show that the proposed scheme outperforms the state-of-the-art denoising methods such as BM3D and CSR in both the PSNR and the perceptual quality.",
"title": ""
},
{
"docid": "fdefbb2ed3185eadb4657879d9776d34",
"text": "Convenient monitoring of vital signs, particularly blood pressure(BP), is critical to improve the effectiveness of health-care and prevent chronic diseases. This study presents a user-friendly, low-cost, real-time, and non-contact technique for BP measurement based on the detection of photoplethysmography (PPG) using a regular webcam. Leveraging features extracted from photoplethysmograph, an individual's BP can be estimated using a neural network. Experiments were performed on 20 human participants during three different daytime slots given the influence of background illumination. Compared against the systolic blood pressure and diastolic blood pressure readings collected from a commercially available BP monitor, the proposed technique achieves an average error rate of 9.62% (Systolic BP) and 11.63% (Diastolic BP) for the afternoon session, and 8.4% (Systolic BP) and 11.18% (Diastolic BP) for the evening session. The proposed technique can be easily extended to the camera on any mobile device and thus be widely used in a pervasive manner.",
"title": ""
}
] |
scidocsrr
|
be1251672e2ef44c457d70a7d89cb546
|
Understanding MOOC students: motivations and behaviours indicative of MOOC completion
|
[
{
"docid": "a7eff25c60f759f15b41c85ac5e3624f",
"text": "Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized – active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.",
"title": ""
}
] |
[
{
"docid": "b80ab14d0908a2a66a4c5a020860a6ac",
"text": "We evaluate U.S. firms’ leverage determinants by studying how 1,801 firms paid for 2,073 very large investments during the period 1989-2006. This approach complements existing empirical work on capital structure, which typically estimates regression models for a broad set of CRSP/Compustat firms. If firms making large investments generally raise new external funds, their securities issuances should provide information about managers’ attitudes toward leverage. Our data indicate that large investments are mostly externally financed and that firms issue securities that tend to move them quite substantially toward target debt ratios. Firms also tend to issue more equity following a share price runup or when the market-to-book ratio is high. We find little support for the standard pecking order hypothesis.",
"title": ""
},
{
"docid": "e53c7f8890d3bf49272e08d4446703a4",
"text": "In orthogonal frequency-division multiplexing (OFDM) systems, it is generally assumed that the channel response is static in an OFDM symbol period. However, the assumption does not hold in high-mobility environments. As a result, intercarrier interference (ICI) is induced, and system performance is degraded. A simple remedy for this problem is the application of the zero-forcing (ZF) equalizer. Unfortunately, the direct ZF method requires the inversion of an N times N ICI matrix, where N is the number of subcarriers. When N is large, the computational complexity can become prohibitively high. In this paper, we first propose a low-complexity ZF method to solve the problem in single-input-single-output (SISO) OFDM systems. The main idea is to explore the special structure inherent in the ICI matrix and apply Newton's iteration for matrix inversion. With our formulation, fast Fourier transforms (FFTs) can be used in the iterative process, reducing the complexity from O (N3) to O (N log2 N). Another feature of the proposed algorithm is that it can converge very fast, typically in one or two iterations. We also analyze the convergence behavior of the proposed method and derive the theoretical output signal-to-interference-plus-noise ratio (SINR). For a multiple-input-multiple-output (MIMO) OFDM system, the complexity of the ZF method becomes more intractable. We then extend the method proposed for SISO-OFDM systems to MIMO-OFDM systems. It can be shown that the computational complexity can be reduced even more significantly. Simulations show that the proposed methods perform almost as well as the direct ZF method, while the required computational complexity is reduced dramatically.",
"title": ""
},
{
"docid": "0cc16f8fe35cbf169de8263236d08166",
"text": "In this paper, we revisit a generally accepted opinion: implementing Elliptic Curve Cryptosystem (ECC) over GF (2) on sensor motes using small word size is not appropriate because XOR multiplication over GF (2) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF (2) on sensor motes, their performances are not satisfactory enough to be used for wireless sensor networks (WSNs). We have found that a field multiplication over GF (2) are involved in a number of redundant memory accesses and its inefficiency is originated from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. With the proposed strategies, the running time of field multiplication and reduction over GF (2) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease execution times spent in Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15% ∼ 19%. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve – a kind of TinyOS package supporting elliptic curve operations) which is the fastest ECC implementation over GF (2) on 8-bit sensor motes using ATmega128L as far as we know. Through comparisons with existing software implementations of ECC built in C or hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size and supporting services. Furthermore, we show that a field multiplication over GF (2) can be faster than that over GF (p) on 8-bit ATmega128L processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF (p). TinyECCK with sect163k1 can compute a scalar multiplication within 1.14 secs on a MICAz mote at the expense of 5,592-byte of ROM and 618-byte of RAM. Furthermore, it can also generate a signature and verify it in 1.37 and 2.32 secs with 13,748-byte of ROM and 1,004-byte of RAM. 2 Seog Chung Seo et al.",
"title": ""
},
{
"docid": "a0c1f145f423052b6e8059c5849d3e34",
"text": "Improved methods of assessment and research design have established a robust and causal association between stressful life events and major depressive episodes. The chapter reviews these developments briefly and attempts to identify gaps in the field and new directions in recent research. There are notable shortcomings in several important topics: measurement and evaluation of chronic stress and depression; exploration of potentially different processes of stress and depression associated with first-onset versus recurrent episodes; possible gender differences in exposure and reactivity to stressors; testing kindling/sensitization processes; longitudinal tests of diathesis-stress models; and understanding biological stress processes associated with naturally occurring stress and depressive outcomes. There is growing interest in moving away from unidirectional models of the stress-depression association, toward recognition of the effects of contexts and personal characteristics on the occurrence of stressors, and on the likelihood of progressive and dynamic relationships between stress and depression over time-including effects of childhood and lifetime stress exposure on later reactivity to stress.",
"title": ""
},
{
"docid": "f0eb42b522eadddaff7ebf479f791193",
"text": "High-density and low-leakage 1W1R 2-port (2P) SRAM is realized by 6T 1-port SRAM bitcell with double pumping internal clock in 16 nm FinFET technology. Proposed clock generator with address latch circuit enables robust timing design without sever setup/hold margin. We designed a 256 kb 1W1R 2P SRAM macro which achieves the highest density of 6.05 Mb/mm2. Measured data shows that a 313 ps of read-access-time is observed at 0.8 V. Standby leakage power in resume standby (RS) mode is reduced by 79% compared to the conventional dual-port SRAM without RS.",
"title": ""
},
{
"docid": "fddbcbdb0de1c7d49fe5545f3ab1bdfa",
"text": "Photovoltaic Systems (PVS) can be easily integrated in residential buildings hence they will be the main responsible of making low-voltage grid power flow bidirectional. Control issues on both the PV side and on the grid side have received much attention from manufacturers, competing for efficiency and low distortion and academia proposing new ideas soon become state-of-the-art. This paper aims at reviewing part of these topics (MPPT, current and voltage control) leaving to a future paper to complete the scenario. Implementation issues on Digital Signal Processor (DSP), the mandatory choice in this market segment, are discussed.",
"title": ""
},
{
"docid": "2639f5d735abed38ed4f7ebf11072087",
"text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.",
"title": ""
},
{
"docid": "c10ac9c3117627b2abb87e268f5de6b1",
"text": "Now days, the number of crime over children is increasing day by day. the implementation of School Security System(SSS) via RFID to avoid crime, illegal activates by students and reduce worries among parents. The project is the combination of latest Technology using RFID, GPS/GSM, image processing, WSN and web based development using Php,VB.net language apache web server and SQL. By using RFID technology it is easy track the student thus enhances the security and safety in selected zone. The information about student such as in time and out time from Bus and campus will be recorded to web based system and the GPS/GSM system automatically sends information (SMS / Phone Call) toothier parents. That the student arrived to Bus/Campus safely.",
"title": ""
},
{
"docid": "0a80057b2c43648e668809e185a68fe6",
"text": "A seminar that surveys state-of-the-art microprocessors offers an excellent forum for students to see how computer architecture techniques are employed in practice and for them to gain a detailed knowledge of the state of the art in microprocessor design. Princeton and the University of Virginia have developed such a seminar, organized around student presentations and a substantial research project. The course can accommodate a range of students, from advanced undergraduates to senior graduate students. The course can also be easily adapted to a survey of embedded processors. This paper describes the version taught at the University of Virginia and lessons learned from the experience.",
"title": ""
},
{
"docid": "5c7a66c440b73b9ff66cd73c8efb3718",
"text": "Image captioning is a crucial task in the interaction of computer vision and natural language processing. It is an important way that help human understand the world better. There are many studies on image English captioning, but little work on image Chinese captioning because of the lack of the corresponding datasets. This paper focuses on image Chinese captioning by using abundant English datasets for the issue. In this paper, a method of adding English information to image Chinese captioning is proposed. We validate the use of English information with state-of-the art performance on the datasets: Flickr8K-CN.",
"title": ""
},
{
"docid": "780e49047bdacda9862c51338aa1397f",
"text": "We consider stochastic volatility models under parameter uncertainty and investigate how model derived prices of European options are affected. We let the pricing parameters evolve dynamically in time within a specified region, and formalise the problem as a control problem where the control acts on the parameters to maximise/minimise the option value. Through a dual representation with backward stochastic differential equations, we obtain explicit equations for Heston’s model and investigate several numerical solutions thereof. In an empirical study, we apply our results to market data from the S&P 500 index where the model is estimated to historical asset prices. We find that the conservative model-prices cover 98% of the considered market-prices for a set of European call options.",
"title": ""
},
{
"docid": "cdd43b3baa9849441817b5f31d7cb0e0",
"text": "Traffic light control systems are widely used to monitor and control the flow of automobiles through the junction of many roads. They aim to realize smooth motion of cars in the transportation routes. However, the synchronization of multiple traffic light systems at adjacent intersections is a complicated problem given the various parameters involved. Conventional systems do not handle variable flows approaching the junctions. In addition, the mutual interference between adjacent traffic light systems, the disparity of cars flow with time, the accidents, the passage of emergency vehicles, and the pedestrian crossing are not implemented in the existing traffic system. This leads to traffic jam and congestion. We propose a system based on PIC microcontroller that evaluates the traffic density using IR sensors and accomplishes dynamic timing slots with different levels. Moreover, a portable controller device is designed to solve the problem of emergency vehicles stuck in the overcrowded roads.",
"title": ""
},
{
"docid": "3886cc26572b2d82c23790ad52342222",
"text": "This paper presents a quantitative human performance model of making single-stroke pen gestures within certain error constraints in terms of production time. Computed from the properties of Curves, Line segments, and Corners (CLC) in a gesture stroke, the model may serve as a foundation for the design and evaluation of existing and future gesture-based user interfaces at the basic motor control efficiency level, similar to the role of previous \"laws of action\" played to pointing, crossing or steering-based user interfaces. We report and discuss our experimental results on establishing and validating the CLC model, together with other basic empirical findings in stroke gesture production.",
"title": ""
},
{
"docid": "6346955de2fa46e5c109ada42b4e9f77",
"text": "Retinopathy of prematurity (ROP) is a disease that can cause blindness in very low birthweight infants. The incidence of ROP is closely correlated with the weight and the gestational age at birth. Despite current therapies, ROP continues to be a highly debilitating disease. Our advancing knowledge of the pathogenesis of ROP has encouraged investigations into new antivasculogenic therapies. The purpose of this article is to review the findings on the pathophysiological mechanisms that contribute to the transition between the first and second phases of ROP and to investigate new potential therapies. Oxygen has been well characterized for the key role that it plays in retinal neoangiogenesis. Low or high levels of pO2 regulate the normal or abnormal production of hypoxia-inducible factor 1 and vascular endothelial growth factors (VEGF), which are the predominant regulators of retinal angiogenesis. Although low oxygen saturation appears to reduce the risk of severe ROP when carefully controlled within the first few weeks of life, the optimal level of saturation still remains uncertain. IGF-1 and Epo are fundamentally required during both phases of ROP, as alterations in their protein levels can modulate disease progression. Therefore, rhIGF-1 and rhEpo were tested for their abilities to prevent the loss of vasculature during the first phase of ROP, whereas anti-VEGF drugs were tested during the second phase. At present, previous hypotheses concerning ROP should be amended with new pathogenetic theories. Studies on the role of genetic components, nitric oxide, adenosine, apelin and β-adrenergic receptor have revealed new possibilities for the treatment of ROP. The genetic hypothesis that single-nucleotide polymorphisms within the β-ARs play an active role in the pathogenesis of ROP suggests the concept of disease prevention using β-blockers. In conclusion, all factors that can mediate the progression from the avascular to the proliferative phase might have significant implications for the further understanding and treatment of ROP.",
"title": ""
},
{
"docid": "e6548454f46962b5ce4c5d4298deb8e7",
"text": "The use of SVM (Support Vector Machines) in detecting e-mail as spam or nonspam by incorporating feature selection using GA (Genetic Algorithm) is investigated. An GA approach is adopted to select features that are most favorable to SVM classifier, which is named as GA-SVM. Scaling factor is exploited to measure the relevant coefficients of feature to the classification task and is estimated by GA. Heavy-bias operator is introduced in GA to promote sparse in the scaling factors of features. So, feature selection is performed by eliminating irrelevant features whose scaling factor is zero. The experiment results on UCI Spam database show that comparing with original SVM classifier, the number of support vector decreases while better classification results are achieved based on GA-SVM.",
"title": ""
},
{
"docid": "e881c2ab6abc91aa8e7cbe54d861d36d",
"text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.",
"title": ""
},
{
"docid": "1f278ddc0d643196ff584c7ea82dc89b",
"text": "We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in REd, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an error bound ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, and the latter yields constant query time. A space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε(d-1)/2) to O(1/ε(d-1)/4). Our approach is based on a very simple algorithm. Both lower bounds and upper bounds on the performance of the algorithm are presented.\n To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. We show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods.",
"title": ""
},
{
"docid": "d2a1ecb8ad28ed5ba75460827341f741",
"text": "Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense.1 The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other: (1) highquality WSR will capture rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD will provide reliable disambiguated corpora for learning better sense representations. Experimental results show that, our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms stateof-the-art supervised methods on domainspecific WSD, and achieves competitive performance on coarse-grained all-words WSD.",
"title": ""
},
{
"docid": "1043fd2e3eb677a768e922f5daf5a5d0",
"text": "A transformer magnetizing current offset for a phase-shift full-bridge (PSFB) converter is dealt in this paper. A model of this current offset is derived and it is presented as a first order system having a pole at a low frequency when the effects from the parasitic components and the switching transition are considered. A digital offset compensator eliminating this current offset is proposed and designed considering the interference in an output voltage regulation. The performances of the proposed compensator are verified by experiments with a 1.2kW PSFB converter. The saturation of the transformer is prevented by this compensator.",
"title": ""
}
] |
scidocsrr
|
92a2784f998c9ccf7ff30d4b2a9ae296
|
Conception, development and implementation of an e-Government maturity model in public agencies
|
[
{
"docid": "82fa51c143159f2b85f9d2e5b610e30d",
"text": "Strategies are systematic and long-term approaches to problems. Federal, state, and local governments are investing in the development of strategies to further their e-government goals. These strategies are based on their knowledge of the field and the relevant resources available to them. Governments are communicating these strategies to practitioners through the use of practical guides. The guides provide direction to practitioners as they consider, make a case for, and implement IT initiatives. This article presents an analysis of a selected set of resources government practitioners use to guide their e-government efforts. A selected review of current literature on the challenges to information technology initiatives is used to create a framework for the analysis. A gap analysis examines the extent to which IT-related research is reflected in the practical guides. The resulting analysis is used to identify a set of commonalities across the practical guides and a set of recommendations for future development of practitioner guides and future research into e-government initiatives. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "9686ae3ca715c325e616c001b445531b",
"text": "IA-32 Execution Layer (IA-32 EL) is a newtechnology that executes IA-32 applications onIntel® Itanium® processor family systems.Currently, support for IA-32 applications onItanium-based platforms is achieved usinghardware circuitry on the Itanium processors.This capability will be enhanced with IA-32EL-software that will ship with Itanium-basedoperating systems and will convert IA-32instructions into Itanium instructions viadynamic translation.In this paper, we describeaspects of the IA-32 Execution Layertechnology, including the general two-phasetranslation architecture and the usage of asingle translator for multiple operatingsystems.The paper provides details of someof the technical challenges such as preciseexception, emulation of FP, MMXTM, and Intel®Streaming SIMD Extension instructions, andmisalignment handling.Finally, the paperpresents some performance results.",
"title": ""
},
{
"docid": "1f4ccaef3ff81f9680b152a3e7b3d178",
"text": "We propose a method for forecasting high-dimensional data (hundreds of attributes, trillions of attribute combinations) for a duration of several months. Our motivating application is guaranteed display advertising, a multi-billion dollar industry, whereby advertisers can buy targeted (high-dimensional) user visits from publishers many months or even years in advance. Forecasting high-dimensional data is challenging because of the many possible attribute combinations that need to be forecast. To address this issue, we propose a method whereby only a sub-set of attribute combinations are explicitly forecast and stored, while the other combinations are dynamically forecast on-the-fly using high-dimensional attribute correlation models. We evaluate various attribute correlation models, from simple models that assume the independence of attributes to more sophisticated sample-based models that fully capture the correlations in a high-dimensional space. Our evaluation using real-world display advertising data sets shows that fully capturing high-dimensional correlations leads to significant forecast accuracy gains. A variant of the proposed method has been implemented in the context of Yahoo!'s guaranteed display advertising system.",
"title": ""
},
{
"docid": "36b4c028bcd92115107cf245c1e005c8",
"text": "CAPTCHA is now almost a standard security technology, and has found widespread application in commercial websites. Usability and robustness are two fundamental issues with CAPTCHA, and they often interconnect with each other. This paper discusses usability issues that should be considered and addressed in the design of CAPTCHAs. Some of these issues are intuitive, but some others have subtle implications for robustness (or security). A simple but novel framework for examining CAPTCHA usability is also proposed.",
"title": ""
},
{
"docid": "ab2f1f27b11a5a41ff6b2b79bc044c2f",
"text": "ABSTACT: Trajectory tracking has been an extremely active research area in robotics in the past decade.In this paper, a kinematic model of two wheel mobile robot for reference trajectory tracking is analyzed and simulated. For controlling the wheeled mobile robot PID controllers are used. For finding the optimal parameters of PID controllers, in this work particle swarm optimization (PSO) is used. The proposed methodology is shown to be a successful solutionfor solving the problem.",
"title": ""
},
{
"docid": "9ca90172c5beff5922b4f5274ef61480",
"text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.",
"title": ""
},
{
"docid": "306136e7ffd6b1839956d9f712afbda2",
"text": "Dynamic scheduling cloud resources according to the change of the load are key to improve cloud computing on-demand service capabilities. This paper proposes a load-adaptive cloud resource scheduling model based on ant colony algorithm. By real-time monitoring virtual machine of performance parameters, once judging overload, it schedules fast cloud resources using ant colony algorithm to bear some load on the load-free node. So that it can meet changing load requirements. By analyzing an example result, the model can meet the goals and requirements of self-adaptive cloud resources scheduling and improve the efficiency of the resource utilization.",
"title": ""
},
{
"docid": "99bf50d4a382d9ed8548b3be3d91acd4",
"text": "We present a new descriptor for tactile 3D object classification. It is invariant to object movement and simple to construct, using only the relative geometry of points on the object surface. We demonstrate successful classification of 185 objects in 10 categories, at sparse to dense surface sampling rate in point cloud simulation, with an accuracy of 77.5% at the sparsest and 90.1% at the densest. In a physics-based simulation, we show that contact clouds resembling the object shape can be obtained by a series of gripper closures using a robotic hand equipped with sparse tactile arrays. Despite sparser sampling of the object's surface, classification still performs well, at 74.7%. On a real robot, we show the ability of the descriptor to discriminate among different object instances, using data collected by a tactile hand.",
"title": ""
},
{
"docid": "5b76ef357e706d81b31fd9fabb8ea685",
"text": "This paper reports the design and development of aluminum nitride (AlN) piezoelectric RF resonant voltage amplifiers for Internet of Things (IoT) applications. These devices can provide passive and highly frequency selective voltage gain to RF backends with a capacitive input to drastically enhance sensitivity and to reduce power consumption of the transceiver. Both analytical and finite element models (FEM) have been utilized to identify the optimal designs. Consequently, an AlN voltage amplifier with an open circuit gain of 7.27 and a fractional bandwidth (FBW) of 0.11 % has been demonstrated. This work provides a material-agnostic framework for analytically optimizing piezoelectric voltage amplifiers.",
"title": ""
},
{
"docid": "7ee31d080b3cd7632c25c22b378e6d91",
"text": "Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such “out-of-equilibrium” behavior is a consequence of highly nonisotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims. This article summarizes the findings in [1]. See the longer version for background, detailed results and proofs.",
"title": ""
},
{
"docid": "bed5efa3e268ef0fd2f3ae750b26aad4",
"text": "In this paper, we describe our recent results in the development of a new class of soft, continuous backbone (“continuum”) robot manipulators. Our work is strongly motivated by the dexterous appendages found in cephalopods, particularly the arms and suckers of octopus, and the arms and tentacles of squid. Our ongoing investigation of these animals reveals interesting and unexpected functional aspects of their structure and behavior. The arrangement and dynamic operation of muscles and connective tissue observed in the arms of a variety of octopus species motivate the underlying design approach for our soft manipulators. These artificial manipulators feature biomimetic actuators, including artificial muscles based on both electro-active polymers (EAP) and pneumatic (McKibben) muscles. They feature a “clean” continuous backbone design, redundant degrees of freedom, and exhibit significant compliance that provides novel operational capacities during environmental interaction and object manipulation. The unusual compliance and redundant degrees of freedom provide strong potential for application to delicate tasks in cluttered and/or unstructured environments. Our aim is to endow these compliant robotic mechanisms with the diverse and dexterous grasping behavior observed in octopuses. To this end, we are conducting fundamental research into the manipulation tactics, sensory biology, and neural control of octopuses. This work in turn leads to novel approaches to motion planning and operator interfaces for the robots. The paper describes the above efforts, along with the results of our development of a series of continuum tentacle-like robots, demonstrating the unique abilities of biologically-inspired design.",
"title": ""
},
{
"docid": "140d6d345aa6d486a30e596dde25a8ef",
"text": "This research uses the absorptive capacity (ACAP) concept as a theoretical lens to study the effect of e-business upon the competitive performance of SMEs, addressing the following research issue: To what extent are manufacturing SMEs successful in developing their potential and realized ACAP in line with their entrepreneurial orientation? A survey study of 588 manufacturing SMEs found that their e-business capabilities, considered as knowledge acquisition and assimilation capabilities have an indirect effect on their competitive performance that is mediated by their knowledge transformation and exploitation capabilities, and insofar as these capabilities are developed as a result of a more entrepreneurial orientation on their part. Finally, the effect of this orientation on the SMEs' competitive performance appears to be totally mediated by their ACAP.",
"title": ""
},
{
"docid": "e41ae766a1995f918184efb73b2212b7",
"text": "Among the signature schemes most widely deployed in practice are the DSA (Digital Signature Algorithm) and its elliptic curves variant ECDSA. They are represented in many international standards, including IEEE P1363, ANSI X9.62, and FIPS 186-4. Their popularity stands in stark contrast to the absence of rigorous security analyses: Previous works either study modified versions of (EC)DSA or provide a security analysis of unmodified ECDSA in the generic group model. Unfortunately, works following the latter approach assume abstractions of non-algebraic functions over generic groups for which it remains unclear how they translate to the security of ECDSA in practice. For instance, it has been pointed out that prior results in the generic group model actually establish strong unforgeability of ECDSA, a property that the scheme de facto does not possess. As, further, no formal results are known for DSA, understanding the security of both schemes remains an open problem. In this work we propose GenericDSA, a signature framework that subsumes both DSA and ECDSA in unmodified form. It carefully models the \"modulo q\" conversion function of (EC)DSA as a composition of three independent functions. The two outer functions mimic algebraic properties in the function's domain and range, the inner one is modeled as a bijective random oracle. We rigorously prove results on the security of GenericDSA that indicate that forging signatures in (EC)DSA is as hard as solving discrete logarithms. Importantly, our proofs do not assume generic group behavior.",
"title": ""
},
{
"docid": "07300a47b34574012b6b7efbd0bb66ea",
"text": "The incidence of diabetes and its associated micro- and macrovascular complications is greatly increasing worldwide. The most prevalent vascular complications of both type 1 and type 2 diabetes include nephropathy, retinopathy, neuropathy and cardiovascular diseases. Evidence suggests that both genetic and environmental factors are involved in these pathologies. Clinical trials have underscored the beneficial effects of intensive glycaemic control for preventing the progression of complications. Accumulating evidence suggests a key role for epigenetic mechanisms such as DNA methylation, histone post-translational modifications in chromatin, and non-coding RNAs in the complex interplay between genes and the environment. Factors associated with the pathology of diabetic complications, including hyperglycaemia, growth factors, oxidant stress and inflammatory factors can lead to dysregulation of these epigenetic mechanisms to alter the expression of pathological genes in target cells such as endothelial, vascular smooth muscle, retinal and cardiac cells, without changes in the underlying DNA sequence. Furthermore, long-term persistence of these alterations to the epigenome may be a key mechanism underlying the phenomenon of ‘metabolic memory’ and sustained vascular dysfunction despite attainment of glycaemic control. Current therapies for most diabetic complications have not been fully efficacious, and hence a study of epigenetic mechanisms that may be involved is clearly warranted as they can not only shed novel new insights into the pathology of diabetic complications, but also lead to the identification of much needed new drug targets. In this review, we highlight the emerging role of epigenetics and epigenomics in the vascular complications of diabetes and metabolic memory.",
"title": ""
},
{
"docid": "7cfffa8e9d1e1fb39082c5aba75034b3",
"text": "BACKGROUND\nAttempted separation of craniopagus twins has continued to be associated with devastating results since the first partially successful separation with one surviving twin in 1952. To understand the factors that contribute to successful separation in the modern era of neuroimaging and modern surgical techniques, the authors reviewed and analyzed cases reported since 1995.\n\n\nMETHODS\nAll reported cases of craniopagus twin separation attempts from 1995 to 2015 were identified using PubMed (n = 19). In addition, the Internet was searched for additional unreported separation attempts (n = 5). The peer-reviewed cases were used to build a categorical database containing information on each twin pair, including sex; date of birth; date of surgery; multiple- versus single-stage surgery; angular versus vertical conjoining; nature of shared cerebral venous system; and the presence of other comorbidities identified as cardiovascular, genitourinary, and craniofacial. The data were analyzed to find factors associated with successful separation (survival of both twins at postoperative day 30).\n\n\nRESULTS\nVertical craniopagus is associated with successful separation (p < 0.001). No statistical significance was attributed to the nature of the shared cerebral venous drainage or the other variables examined. Multiple-stage operations and surgery before 12 months of age are associated with a trend toward statistical significance for successful separation.\n\n\nCONCLUSIONS\nThe authors' analysis indicates that vertical craniopagus twins have the highest likelihood of successful separation. Additional factors possibly associated with successful separation include the nature of the shared sinus system, surgery at a young age, and the use of staged separations.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "2afcc7c1fb9dadc3d46743c991e15bac",
"text": "This paper describes the design of a robot head, developed in the framework of the RobotCub project. This project goals consists on the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform would be approximately 90 cm tall, with 23 kg and with a total number of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed, in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are made based on biological anatomical and behavioral data, as well as tasks constraints. Different concepts for the neck design (flexible, parallel and serial solutions) are analyzed and compared with respect to the specifications. The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design",
"title": ""
},
{
"docid": "beadaf1625fc4e07d3511d46ee68e6e4",
"text": "The prevention of accidents is one of the most important goals of ad hoc networks in smart cities. When an accident happens, dynamic sensors (e.g., citizens with smart phones or tablets, smart vehicles and buses, etc.) could shoot a video clip of the accident and send it through the ad hoc network. With a video message, the level of seriousness of the accident could be much better evaluated by the authorities (e.g., health care units, police and ambulance drivers) rather than with just a simple text message. Besides, other citizens would be rapidly aware of the incident. In this way, smart dynamic sensors could participate in reporting a situation in the city using the ad hoc network so it would be possible to have a quick reaction warning citizens and emergency units. The deployment of an efficient routing protocol to manage video-warning messages in mobile Ad hoc Networks (MANETs) has important benefits by allowing a fast warning of the incident, which potentially can save lives. To contribute with this goal, we propose a multipath routing protocol to provide video-warning messages in MANETs using a novel game-theoretical approach. As a base for our work, we start from our previous work, where a 2-players game-theoretical routing protocol was proposed to provide video-streaming services over MANETs. In this article, we further generalize the analysis made for a general number of N players in the MANET. Simulations have been carried out to show the benefits of our proposal, taking into account the mobility of the nodes and the presence of interfering traffic. Finally, we also have tested our approach in a vehicular ad hoc network as an incipient start point to develop a novel proposal specifically designed for VANETs.",
"title": ""
},
{
"docid": "58891611a4d9992a671f620a8f753e71",
"text": "Many existing structures located in seismic regions are inadequate based on current seismic design codes. In addition, a number of major earthquakes during recent years have underscored the importance of mitigation to reduce seismic risk. Seismic retrofitting of existing structures is one of the most effective methods of reducing this risk. In recent years, a significant amount of research has been devoted to the study of various strengthening techniques to enhance the seismic performance of RC structures. However, the seismic performance of the structure may not be improved by retrofitting or rehabilitation unless the engineer selects an appropriate intervention technique based on seismic evaluation of the structure. Therefore, the basic requirements of rehabilitation and investigations of various retrofit techniques should be considered before selecting retrofit schemes. In this report, the characteristics of various intervention techniques are discussed and the relationship between retrofit and structural characteristics is also described. In addition, several case study structures for which retrofit techniques have been applied are presented.",
"title": ""
},
{
"docid": "6483733f9cfd2eaacb5f368e454416db",
"text": "A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.",
"title": ""
},
{
"docid": "e14cd8d955d80591f905b3858c9b5d09",
"text": "With the advent of the Internet of Things (IoT), security has emerged as a major design goal for smart connected devices. This explosion in connectivity created a larger attack surface area. Software-based approaches have been applied for security purposes; however, these methods must be extended with security-oriented technologies that promote hardware as the root of trust. The ARM TrustZone can enable trusted execution environments (TEEs), but existing solutions disregard real-time needs. Here, the authors demonstrate why TrustZone is becoming a reference technology for securing IoT edge devices, and how enhanced TEEs can help meet industrial IoT applications real-time requirements.",
"title": ""
},
{
"docid": "21502c42ef7a8e342334b93b1b5069d6",
"text": "Motivations to engage in retail online shopping can include both utilitarian and hedonic shopping dimensions. To cater to these consumers, online retailers can create a cognitively and esthetically rich shopping environment, through sophisticated levels of interactive web utilities and features, offering not only utilitarian benefits and attributes but also providing hedonic benefits of enjoyment. Since the effect of interactive websites has proven to stimulate online consumer’s perceptions, this study presumes that websites with multimedia rich interactive utilities and features can influence online consumers’ shopping motivations and entice them to modify or even transform their original shopping predispositions by providing them with attractive and enhanced interactive features and controls, thus generating a positive attitude towards products and services offered by the retailer. This study seeks to explore the effects of Web interactivity on online consumer behavior through an attitudinal model of technology acceptance.",
"title": ""
}
] |
scidocsrr
|
d4ae2ecbedc5d4f4ad132ea12c164a88
|
THE SELFIE PHENOMENON : THE IDEA OF SELF-PRESENTATION AND ITS IMPLICATIONS AMONG YOUNG WOMEN A
|
[
{
"docid": "157a96adf7909134a14f8abcc7a2655c",
"text": "Social networking sites like MySpace, Facebook, and StudiVZ are popular means of communicating personality. Recent theoretical and empirical considerations of homepages and Web 2.0 platforms show that impression management is a major motive for actively participating in social networking sites. However, the factors that determine the specific form of self-presentation and the extent of self-disclosure on the Internet have not been analyzed. In an exploratory study, we investigated the relationship between self-reported (offline) personality traits and (online) self-presentation in social networking profiles. A survey among 58 users of the German Web 2.0 site, StudiVZ.net, and a content analysis of the respondents’ profiles showed that self-efficacy with regard to impression management is strongly related to the number of virtual friends, the level of profile detail, and the style of the personal photo. The results also indicate a slight influence of extraversion, whereas there was no significant effect for self-esteem.",
"title": ""
}
] |
[
{
"docid": "66b154f935e66a78895e17318921f36a",
"text": "Metaheuristic algorithms have been a very important topic in computer science since the start of evolutionary computing the Genetic Algorithms 1950s. By now these metaheuristic algorithms have become a very large family with successful applications in industry. A challenge which is always pondered on, is finding the suitable metaheuristic algorithm for a certain problem. The choice sometimes may have to be made after trying through many experiments or by the experiences of human experts. As each of the algorithms have their own strengths in solving different kinds of problems, in this paper we propose a framework of metaheuristic brick-up system. The flexibility of brick-up (like Lego) offers users to pick a collection of fundamental functions of metaheuristic algorithms that were known to perform well in the past. In order to verify this brickup concept, in this paper we propose to use the Monte Carlo method with upper confidence bounds applied to a decision tree in selecting appropriate functional pieces. This paper validates the basic concept and discusses the further works.",
"title": ""
},
{
"docid": "890b1ed209b3e34c5b460dce310ee08f",
"text": "INTRODUCTION\nThe adequate use of compression in venous leg ulcer treatment is equally important to patients as well as clinicians. Currently, there is a lack of clarity on contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients.\n\n\nMETHODS\nThe project aimed to optimize prevention, treatment and maintenance approaches by recognizing contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients. A literature review was conducted of current guidelines on venous leg ulcer prevention, management and maintenance.\n\n\nRESULTS\nSearches took place from 29th February 2016 to 30th April 2016 and were prospectively limited to publications in the English and German languages and publication dates were between January 2009 and April 2016. Twenty Guidelines, clinical pathways and consensus papers on compression therapy for venous leg ulcer treatment and for venous disease, were included. Guidelines agreed on the following absolute contraindications: Arterial occlusive disease, heart failure and ankle brachial pressure index (ABPI) <0.5, but gave conflicting recommendations on relative contraindications, risks and adverse events. Moreover definitions were unclear and not consistent.\n\n\nCONCLUSIONS\nEvidence-based guidance is needed to inform clinicians on risk factor, adverse effects, complications and contraindications. ABPI values need to be specified and details should be given on the type of compression that is safe to use. Ongoing research challenges the present recommendations, shifting some contraindications into a list of potential indications. Complications of compression can be prevented when adequate assessment is performed and clinicians are skilled in applying compression.",
"title": ""
},
{
"docid": "4129881d5ff6f510f6deb23fd5b29afa",
"text": "Childbirth is an intricate process which is marked by an increased cervical dilation rate caused due to steady increments in the frequency and strength of uterine contractions. The contractions may be characterized by its strength, duration and frequency (count) - which are monitored through Tocography. However, the procedure is prone to subjectivity and an automated approach for the classification of the contractions is needed. In this paper, we use three different Weighted K-Nearest Neighbor classifiers and Decision Trees to classify the contractions into three types: Mild, Moderate and Strong. Further, we note the fact that our training data consists of fewer samples of Contractions as compared to those of Non-contractions - resulting in “Class Imbalance”. Hence, we use the Synthetic Minority Oversampling Technique (SMOTE) in conjunction with the K-NN classifier and Decision Trees to alleviate the problems of the same. The ground truth for Tocography signals was established by a doctor having an experience of 36 years in Obstetrics and Gynaecology. The annotations are in three categories: Mild (33 samples), Moderate (64 samples) and Strong (96 samples), amounting to a total of 193 contractions whereas the number of Non-contraction samples was 1217. Decision Trees using SMOTE performed the best with accuracies of 95%, 98.25% and 100% for the aforementioned categories, respectively. The sensitivities achieved for the same are 96.67%, 96.52% and 100% whereas the specificities amount to 93.33%, 100% and 100%, respectively. Our method may be used to monitor the labour progress efficiently.",
"title": ""
},
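The passage above pairs SMOTE oversampling with weighted k-NN and decision-tree classifiers to cope with the contraction/non-contraction class imbalance. A minimal sketch of that kind of pipeline follows; the feature arrays, split and label coding are illustrative stand-ins, not the authors' actual data.

```python
# Sketch: SMOTE oversampling followed by a decision tree and a
# distance-weighted k-NN classifier, as described in the abstract.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# X: feature vectors per window of the Tocography signal (illustrative)
# y: labels, e.g. 0 = non-contraction, 1 = mild, 2 = moderate, 3 = strong
X = np.random.rand(1410, 20)
y = np.random.randint(0, 4, size=1410)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority classes in the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

tree = DecisionTreeClassifier(random_state=0).fit(X_res, y_res)
knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_res, y_res)

print(classification_report(y_te, tree.predict(X_te)))
print(classification_report(y_te, knn.predict(X_te)))
```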
{
"docid": "cf3048e512d5d4eab62eef01627fe8d7",
"text": "In this paper, we present simulation results and analysis of 3-D magnetic flux leakage (MFL) signals due to the occurrence of a surface-breaking defect in a ferromagnetic specimen. The simulations and analysis are based on a magnetic dipole-based analytical model, presented in a previous paper. We exploit the tractability of the model and its amenability to simulation to analyze properties of the model as well as of the MFL fields it predicts, such as scale-invariance, effect of lift-off and defect shape, the utility of the tangential MFL component, and the sensitivity of MFL fields to parameters. The simulations and analysis show that the tangential MFL component is indeed a potentially critical part of MFL testing. It is also shown that the MFL field of a defect varies drastically with lift-off. We also exploit the model to develop a lift-off compensation technique which enables the prediction of the size of the defect for a range of lift-off values.",
"title": ""
},
{
"docid": "bfe8e4093219080ef7c377a67184ff00",
"text": "A clothoid has the property that its curvature varies linearly with arclength. This is a useful feature for the path of a vehicle whose turning radius is controlled as a linear function of the distance travelled. Highways, railways and the paths of car-like robots may be composed of straight line segments, clothoid segments and circular arcs. Control polylines are used in computer aided design and computer aided geometric design applications to guide composite curves during the design phase. This article examines the use of a control polyline to guide a curve composed of segments of clothoids, straight lines, and circular arcs. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3f48327ca2125df3a6da0c1e54131013",
"text": "Background: We investigated the value of magnetic resonance imaging (MRI) in the evaluation of sex-reassignment surgery in male-to-female transsexual patients. Methods: Ten male-to-female transsexual patients who underwent sex-reassignment surgery with inversion of combined penile and scrotal skin flaps for vaginoplasty were examined after surgery with MRI. Turbo spin-echo T2-weighted and spin-echo T1-weighted images were obtained in sagittal, coronal, and axial planes with a 1.5-T superconductive magnet. Images were acquired with and without an inflatable silicon vaginal tutor. The following parameters were evaluated: neovaginal depth, neovaginal inclination in the sagittal plane, presence of remnants of the corpus spongiosum and corpora cavernosa, and thickness of the rectovaginal septum. Results: The average neovaginal depth was 7.9 cm (range = 5–10 cm). The neovagina had a correct oblique inclination in the sagittal plane in four patients, no inclination in five, and an incorrect inclination in one. In seven patients, MRI showed remnants of the corpora cavernosa and/or of the corpus spongiosum; in three patients, no remnants were detected. The average thickness of the rectovaginal septum was 4 mm (range = 3–6 mm). Conclusion: MRI allows a detailed assessment of the pelvic anatomy after genital reconfiguration and provides information that can help the surgeon to adopt the most correct surgical approach.",
"title": ""
},
{
"docid": "c3b691cd3671011278ecd30563b27245",
"text": "We formalize weighted dependency parsing as searching for maximum spanning trees (MSTs) in directed graphs. Using this representation, the parsing algorithm of Eisner (1996) is sufficient for searching over all projective trees in O(n3) time. More surprisingly, the representation is extended naturally to non-projective parsing using Chu-Liu-Edmonds (Chu and Liu, 1965; Edmonds, 1967) MST algorithm, yielding anO(n2) parsing algorithm. We evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005) and show that MST parsing increases efficiency and accuracy for languages with non-projective dependencies.",
"title": ""
},
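The abstract above casts non-projective dependency parsing as finding a maximum spanning arborescence over a directed graph of arc scores. A small sketch of that search step, using networkx's Chu-Liu-Edmonds implementation and invented toy scores:

```python
# Sketch: non-projective dependency parsing as a maximum spanning
# arborescence over scored arcs (Chu-Liu-Edmonds), per the abstract.
import networkx as nx

tokens = ["ROOT", "John", "saw", "Mary"]
# arc_scores[(head, dependent)] -> score; toy values for illustration
arc_scores = {
    (0, 2): 10.0, (2, 1): 9.0, (2, 3): 9.0,
    (0, 1): 3.0, (1, 3): 2.0, (3, 1): 1.0,
}

G = nx.DiGraph()
for (head, dep), score in arc_scores.items():
    G.add_edge(head, dep, weight=score)

# Maximum spanning arborescence = highest-scoring dependency tree.
tree = nx.maximum_spanning_arborescence(G)
for head, dep in sorted(tree.edges()):
    print(f"{tokens[head]} -> {tokens[dep]}")
```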
{
"docid": "2b3929da96949056bc473e8da947cebe",
"text": "This paper presents “Value-Difference Based Exploration” (VDBE), a method for balancing the exploration/exploitation dilemma inherent to reinforcement learning. The proposed method adapts the exploration parameter of ε-greedy in dependence of the temporal-difference error observed from value-function backups, which is considered as a measure of the agent’s uncertainty about the environment. VDBE is evaluated on a multi-armed bandit task, which allows for insight into the behavior of the method. Preliminary results indicate that VDBE seems to be more parameter robust than commonly used ad hoc approaches such as ε-greedy or softmax.",
"title": ""
},
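VDBE adapts the ε of ε-greedy from the magnitude of the observed temporal-difference error. A compact sketch of one plausible per-state update in that spirit; the activation function and constants here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def vdbe_update(epsilon_s, td_error, sigma=1.0, delta=0.5):
    """Return a new exploration rate for a state after observing a TD error:
    large |TD error| -> more exploration, small -> more exploitation."""
    # Boltzmann-style activation of the absolute TD error (assumed form).
    f = (1.0 - np.exp(-abs(td_error) / sigma)) / (1.0 + np.exp(-abs(td_error) / sigma))
    # Exponential moving average between the old epsilon and the activation.
    return delta * f + (1.0 - delta) * epsilon_s

# Example: surprising transitions raise epsilon, well-predicted ones lower it.
eps = 1.0
for td in [2.0, 1.0, 0.2, 0.05, 0.01]:
    eps = vdbe_update(eps, td)
    print(round(eps, 3))
```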
{
"docid": "8bbe111daad27eba937699e87e195ee5",
"text": "The global impact of Alzheimer’s disease (AD) continues to increase, and focused efforts are needed to address this immense public health challenge. National leaders have set a goal to prevent or effectively treat AD by 2025. In this paper, we discuss the path to 2025, and what is feasible in this time frame given the realities and challenges of AD drug development, with a focus on disease-modifying therapies (DMTs). Under the current conditions, only drugs currently in late Phase 1 or later will have a chance of being approved by 2025. If pipeline attrition rates remain high, only a few compounds at best will meet this time frame. There is an opportunity to reduce the time and risk of AD drug development through an improvement in trial design; better trial infrastructure; disease registries of well-characterized participant cohorts to help with more rapid enrollment of appropriate study populations; validated biomarkers to better detect disease, determine risk and monitor disease progression as well as predict disease response; more sensitive clinical assessment tools; and faster regulatory review. To implement change requires efforts to build awareness, educate and foster engagement; increase funding for both basic and clinical research; reduce fragmented environments and systems; increase learning from successes and failures; promote data standardization and increase wider data sharing; understand AD at the basic biology level; and rapidly translate new knowledge into clinical development. Improved mechanistic understanding of disease onset and progression is central to more efficient AD drug development and will lead to improved therapeutic approaches and targets. The opportunity for more than a few new therapies by 2025 is small. Accelerating research and clinical development efforts and bringing DMTs to market sooner would have a significant impact on the future societal burden of AD. As these steps are put in place and plans come to fruition, e.g., approval of a DMT, it can be predicted that momentum will build, the process will be self-sustaining, and the path to 2025, and beyond, becomes clearer.",
"title": ""
},
{
"docid": "4566a0adb9496f765eebe1dd3afb08e9",
"text": "According to medical reports, cancers are big problems in the world society. In this paper we are supposed to predict breast cancer recurrence by multi-layer perceptron with two different outputs, a deep neural network as a feature extraction and multi-layer perceptron as a classifier, rough neural network with two different outputs, and finally, support vector machine. Then, we compare the results achieved by each method. It can be understood that rough neural network with two outputs leads to the highest accuracy and the lowest variance among other structures.",
"title": ""
},
{
"docid": "1a38695797b921e35e0987eeed11c95d",
"text": "We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded in data in this way may be easier to learn, generalize better, and be less dependent on accurate prior models than, for example, POMDP state representations. Building on prior work by Jaeger and by Rivest and Schapire, in this paper we compare and contrast a linear specialization of the predictive approach with the state representations used in POMDPs and in k-order Markov models. Ours is the first specific formulation of the predictive idea that includes both stochasticity and actions (controls). We show that any system has a linear predictive state representation with number of predictions no greater than the number of states in its minimal POMDP model. In predicting or controlling a sequence of observations, the concepts of state and state estimation inevitably arise. There have been two dominant approaches. The generative-model approach, typified by research on partially observable Markov decision processes (POMDPs), hypothesizes a structure for generating observations and estimates its state and state dynamics. The history-based approach, typified by k-order Markov methods, uses simple functions of past observations as state, that is, as the immediate basis for prediction and control. (The data flow in these two approaches are diagrammed in Figure 1.) Of the two, the generative-model approach is more general. The model's internal state gives it temporally unlimited memorythe ability to remember an event that happened arbitrarily long ago--whereas a history-based approach can only remember as far back as its history extends. The bane of generative-model approaches is that they are often strongly dependent on a good model of the system's dynamics. Most uses of POMDPs, for example, assume a perfect dynamics model and attempt only to estimate state. There are algorithms for simultaneously estimating state and dynamics (e.g., Chrisman, 1992), analogous to the Baum-Welch algorithm for the uncontrolled case (Baum et al., 1970), but these are only effective at tuning parameters that are already approximately correct (e.g., Shatkay & Kaelbling, 1997). observations (and actions) (a) state 1-----1-----1..rep'n observations¢E (and actions) / state t/' rep'n 1-step --+ . delays",
"title": ""
},
{
"docid": "5ee490a307a0b6108701225170690386",
"text": "An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to maintenances. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in \"normal\" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity.",
"title": ""
},
{
"docid": "923b4025d22bc146c53fb4c90f43ef72",
"text": "In this paper we describe preliminary approaches for contentbased recommendation of Pinterest boards to users. We describe our representation and features for Pinterest boards and users, together with a supervised recommendation model. We observe that features based on latent topics lead to better performance than features based on userassigned Pinterest categories. We also find that using social signals (repins, likes, etc.) can improve recommendation quality.",
"title": ""
},
{
"docid": "de6581719d2bc451695a77d43b091326",
"text": "Keyphrases are useful for a variety of tasks in information retrieval systems and natural language processing, such as text summarization, automatic indexing, clustering/classification, ontology learning and building and conceptualizing particular knowledge domains, etc. However, assigning these keyphrases manually is time consuming and expensive in term of human resources. Therefore, there is a need to automate the task of extracting keyphrases. A wide range of techniques of keyphrase extraction have been proposed, but they are still suffering from the low accuracy rate and poor performance. This paper presents a state of the art of automatic keyphrase extraction approaches to identify their strengths and weaknesses. We also discuss why some techniques perform better than others and how can we improve the task of automatic keyphrase extraction.",
"title": ""
},
{
"docid": "60a3538ec6a64af6f8fd447ed0fb79f5",
"text": "Several Pinned Photodiode (PPD) CMOS Image Sensors (CIS) are designed, manufactured, characterized and exposed biased to ionizing radiation up to 10 kGy(SiO2 ). In addition to the usually reported dark current increase and quantum efficiency drop at short wavelengths, several original radiation effects are shown: an increase of the pinning voltage, a decrease of the buried photodiode full well capacity, a large change in charge transfer efficiency, the creation of a large number of Total Ionizing Dose (TID) induced Dark Current Random Telegraph Signal (DC-RTS) centers active in the photodiode (even when the Transfer Gate (TG) is accumulated) and the complete depletion of the Pre-Metal Dielectric (PMD) interface at the highest TID leading to a large dark current and the loss of control of the TG on the dark current. The proposed mechanisms at the origin of these degradations are discussed. It is also demonstrated that biasing (i.e., operating) the PPD CIS during irradiation does not enhance the degradations compared to sensors grounded during irradiation.",
"title": ""
},
{
"docid": "fb729bf4edf25f082a4808bd6bb0961d",
"text": "The paper reports some of the reasons behind the low use of Information and Communication Technology (ICT) by teachers. The paper has reviewed a number or studies from different parts of the world and paid greater attention to Saudi Arabia. The literature reveals a number of factors that hinder teachers’ use of ICT. This paper will focus on lack of access to technology, lack of training and lack of time.",
"title": ""
},
{
"docid": "c23dc5fdb8c2d3b7314d895bbcb13832",
"text": "Wireless power transfer (WPT) is a promising new solution to provide convenient and perpetual energy supplies to wireless networks. In practice, WPT is implementable by various technologies such as inductive coupling, magnetic resonate coupling, and electromagnetic (EM) radiation, for short-/mid-/long-range applications, respectively. In this paper, we consider the EM or radio signal enabled WPT in particular. Since radio signals can carry energy as well as information at the same time, a unified study on simultaneous wireless information and power transfer (SWIPT) is pursued. Specifically, this paper studies a multiple-input multiple-output (MIMO) wireless broadcast system consisting of three nodes, where one receiver harvests energy and another receiver decodes information separately from the signals sent by a common transmitter, and all the transmitter and receivers may be equipped with multiple antennas. Two scenarios are examined, in which the information receiver and energy receiver are separated and see different MIMO channels from the transmitter, or co-located and see the identical MIMO channel from the transmitter. For the case of separated receivers, we derive the optimal transmission strategy to achieve different tradeoffs for maximal information rate versus energy transfer, which are characterized by the boundary of a so-called rate-energy (R-E) region. For the case of co-located receivers, we show an outer bound for the achievable R-E region due to the potential limitation that practical energy harvesting receivers are not yet able to decode information directly. Under this constraint, we investigate two practical designs for the co-located receiver case, namely time switching and power splitting, and characterize their achievable R-E regions in comparison to the outer bound.",
"title": ""
},
{
"docid": "960c37997d6138f8fd58728a1f976c9e",
"text": "Hundreds of highly conserved distal cis-regulatory elements have been characterized so far in vertebrate genomes. Many thousands more are predicted on the basis of comparative genomics. However, in stark contrast to the genes that they regulate, in invertebrates virtually none of these regions can be traced by using sequence similarity, leaving their evolutionary origins obscure. Here we show that a class of conserved, primarily non-coding regions in tetrapods originated from a previously unknown short interspersed repetitive element (SINE) retroposon family that was active in the Sarcopterygii (lobe-finned fishes and terrestrial vertebrates) in the Silurian period at least 410 million years ago (ref. 4), and seems to be recently active in the ‘living fossil’ Indonesian coelacanth, Latimeria menadoensis. Using a mouse enhancer assay we show that one copy, 0.5 million bases from the neuro-developmental gene ISL1, is an enhancer that recapitulates multiple aspects of Isl1 expression patterns. Several other copies represent new, possibly regulatory, alternatively spliced exons in the middle of pre-existing Sarcopterygian genes. One of these, a more than 200-base-pair ultraconserved region, 100% identical in mammals, and 80% identical to the coelacanth SINE, contains a 31-amino-acid-residue alternatively spliced exon of the messenger RNA processing gene PCBP2 (ref. 6). These add to a growing list of examples in which relics of transposable elements have acquired a function that serves their host, a process termed ‘exaptation’, and provide an origin for at least some of the many highly conserved vertebrate-specific genomic sequences.",
"title": ""
},
{
"docid": "103e3212f2d1302c7a901be0d3f46e31",
"text": "This article explores dominant discourses surrounding male and female genital cutting. Over a similar period of time, these genital operations have separately been subjected to scrutiny and criticism. However, although critiques of female circumcision have been widely taken up, general public opinion toward male circumcision remains indifferent. This difference cannot merely be explained by the natural attributes and effects of these practices. Rather, attitudes toward genital cutting reflect historically and culturally specific understandings of the human body. In particular, I suggest that certain problematic understandings of male and female sexuality are deeply implicated in the dominant Western discourses on genital surgery.",
"title": ""
},
{
"docid": "3c577fcd0d0876af4aa031affa3bd168",
"text": "Domain-specific Internet of Things (IoT) applications are becoming more and more popular. Each of these applications uses their own technologies and terms to describe sensors and their measurements. This is a difficult task to help users build generic IoT applications to combine several domains. To explicitly describe sensor measurements in uniform way, we propose to enrich them with semantic web technologies. Domain knowledge is already defined in more than 200 ontology and sensor-based projects that we could reuse to build cross-domain IoT applications. There is a huge gap to reason on sensor measurements without a common nomenclature and best practices to ease the automation of generic IoT applications. We present our Machine-to-Machine Measurement (M3) framework and share lessons learned to improve existing standards such as oneM2M, ETSI M2M, W3C Web of Things and W3C Semantic Sensor Network.",
"title": ""
}
] |
scidocsrr
|
db01634ad7cfb96719323ef5b1cedf2b
|
Learning and Game AI
|
[
{
"docid": "a583c568e3c2184e5bda272422562a12",
"text": "Video games are primarily designed for the players. However, video game spectating is also a popular activity, boosted by the rise of online video sites and major gaming tournaments. In this paper, we focus on the spectator, who is emerging as an important stakeholder in video games. Our study focuses on Starcraft, a popular real-time strategy game with millions of spectators and high level tournament play. We have collected over a hundred stories of the Starcraft spectator from online sources, aiming for as diverse a group as possible. We make three contributions using this data: i) we find nine personas in the data that tell us who the spectators are and why they spectate; ii) we strive to understand how different stakeholders, like commentators, players, crowds, and game designers, affect the spectator experience; and iii) we infer from the spectators' expressions what makes the game entertaining to watch, forming a theory of distinct types of information asymmetry that create suspense for the spectator. One design implication derived from these findings is that, rather than presenting as much information to the spectator as possible, it is more important for the stakeholders to be able to decide how and when they uncover that information.",
"title": ""
}
] |
[
{
"docid": "c8b1a0d5956ced6deaefe603efc523ba",
"text": "What can wearable sensors and usage of smart phones tell us about academic performance, self-reported sleep quality, stress and mental health condition? To answer this question, we collected extensive subjective and objective data using mobile phones, surveys, and wearable sensors worn day and night from 66 participants, for 30 days each, totaling 1,980 days of data. We analyzed daily and monthly behavioral and physiological patterns and identified factors that affect academic performance (GPA), Pittsburg Sleep Quality Index (PSQI) score, perceived stress scale (PSS), and mental health composite score (MCS) from SF-12, using these month-long data. We also examined how accurately the collected data classified the participants into groups of high/low GPA, good/poor sleep quality, high/low self-reported stress, high/low MCS using feature selection and machine learning techniques. We found associations among PSQI, PSS, MCS, and GPA and personality types. Classification accuracies using the objective data from wearable sensors and mobile phones ranged from 67-92%.",
"title": ""
},
{
"docid": "ce1048eb76d48800b4e455b8e5d3342a",
"text": "While it is true that successful implementation of an enterprise resource planning (ERP) system is a task of Herculean proportions, it is not impossible. If your organization is to reap the benefits of ERP, it must first develop a plan for success. But “prepare to see your organization reengineered, your staff disrupted, and your productivity drop before the payoff is realized.”1 Implementing ERP must be viewed and undertaken as a new business endeavor and a team mission, not just a software installation. Companies must involve all employees, and unconditionally and completely sell them on the concept of ERP for it to be a success.2 A successful implementation means involving, supervising, recognizing, and retaining those who have worked or will work closely with the system. Without a team attitude and total backing by everyone involved, an ERP implementation will end in less than an ideal situation.3 This was the situation for a soft drink bottler that tried to cut corners and did not recognize the importance of the people so heavily involved and depended on.",
"title": ""
},
{
"docid": "e84ca42f96cca0fe3ed7c70d90554a8d",
"text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. Experiment results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for cross-language citation recommendation task.",
"title": ""
},
{
"docid": "aac17c2c975afaa3f55e42e698d398b3",
"text": "Many state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) Systems are hybrids of neural networks and Hidden Markov Models (HMMs). Recently, more direct end-to-end methods have been investigated, in which neural architectures were trained to model sequences of characters [1,2]. To our knowledge, all these approaches relied on Connectionist Temporal Classification [3] modules. We investigate an alternative method for sequence modelling based on an attention mechanism that allows a Recurrent Neural Network (RNN) to learn alignments between sequences of input frames and output labels. We show how this setup can be applied to LVCSR by integrating the decoding RNN with an n-gram language model and by speeding up its operation by constraining selections made by the attention mechanism and by reducing the source sequence lengths by pooling information over time. Recognition accuracies similar to other HMM-free RNN-based approaches are reported for the Wall Street Journal corpus.",
"title": ""
},
{
"docid": "3e66421e80bfc22f592ffbd6254b1951",
"text": "This paper presents a system which extends the use of the traditional white cane by the blind for navigation purposes in indoor environments. Depth data of the scene in front of the user is acquired using the Microsoft Kinect sensor which is then mapped into a pattern representation. Using neural networks, the proposed system uses this information to extract relevant features from the scene, enabling the detection of possible obstacles along the way. The results show that the neural network is able to correctly classify the type of pattern presented as input.",
"title": ""
},
{
"docid": "934bc45566dfa5199084a4f804513a9f",
"text": "Correct architecture is the backbone of the successful software. To address the complexity of the growing software there are different architectural models that are designed to handle this problem. The most important thing is to differentiate software architecture from software design. As the web based applications are developed under tight schedule and in quickly changing environment, the developers have to face different problematical situations. Therefore understanding of the components of architectures, specially designed for web based applications is crucial to overcome these challenging situations. The purpose of this paper is to emphasize on possible architectural solutions for web based applications. Different types of software architectures that are based on different architectural styles are compared according to the nature of software. Keyword: Component based architecture, Layered architecture, Service oriented architecture, Web applications.",
"title": ""
},
{
"docid": "ffede4ad022d6ea4006c2e123807e89f",
"text": "Awareness about the energy consumption of appliances can help to save energy in households. Non-intrusive Load Monitoring (NILM) is a feasible approach to provide consumption feedback at appliance level. In this paper, we evaluate a broad set of features for electrical appliance recognition, extracted from high frequency start-up events. These evaluations were applied on several existing high frequency energy datasets. To examine clean signatures, we ran all experiments on two datasets that are based on isolated appliance events; more realistic results were retrieved from two real household datasets. Our feature set consists of 36 signatures from related work including novel approaches, and from other research fields. The results of this work include a stand-alone feature ranking, promising feature combinations for appliance recognition in general and appliance-wise performances.",
"title": ""
},
{
"docid": "e2ce393fade02f0dfd20b9aca25afd0f",
"text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.",
"title": ""
},
{
"docid": "a4c739a3b4d6adbb907568c7fdc85d9d",
"text": "This paper describes about implementation of speech recognition system on a mobile robot for controlling movement of the robot. The methods used for speech recognition system are Linear Predictive Coding (LPC) and Artificial Neural Network (ANN). LPC method is used for extracting feature of a voice signal and ANN is used as the recognition method. Backpropagation method is used to train the ANN. Voice signals are sampled directly from the microphone and then they are processed using LPC method for extracting the features of voice signal. For each voice signal, LPC method produces 576 data. Then, these data become the input of the ANN. The ANN was trained by using 210 data training. This data training includes the pronunciation of the seven words used as the command, which are created from 30 different people. Experimental results show that the highest recognition rate that can be achieved by this system is 91.4%. This result is obtained by using 25 samples per word, 1 hidden layer, 5 neurons for each hidden layer, and learning rate 0.1.",
"title": ""
},
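As a rough illustration of the pipeline described above, the sketch below computes per-frame LPC coefficients for a voice command and trains a small backpropagation network on them; the sampling rate, frame length, LPC order and network shape are placeholders rather than the authors' settings.

```python
# Sketch: LPC feature extraction per frame + an MLP classifier for
# spoken command recognition (all parameters are illustrative).
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def lpc_features(signal, frame_len=400, order=12, n_frames=40):
    """Split the signal into frames and concatenate per-frame LPC coefficients."""
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        if len(frame) < frame_len:
            frame = np.pad(frame, (0, frame_len - len(frame)))
        feats.append(librosa.lpc(frame.astype(float), order=order)[1:])  # drop leading 1
    return np.concatenate(feats)  # n_frames * order values per utterance

# X: one feature vector per recorded command, y: command index (0..6)
X = np.stack([lpc_features(np.random.randn(16000)) for _ in range(35)])
y = np.random.randint(0, 7, size=35)

clf = MLPClassifier(hidden_layer_sizes=(5,), learning_rate_init=0.1,
                    max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```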
{
"docid": "212619e09ee7dfe0f32d90e2da25c8f0",
"text": "This paper tackles anomaly detection in videos, which is an extremely challenging task because anomaly is unbounded. We approach this task by leveraging a Convolutional Neural Network (CNN or ConvNet) for appearance encoding for each frame, and leveraging a Convolutional Long Short Term Memory (ConvLSTM) for memorizing all past frames which corresponds to the motion information. Then we integrate ConvNet and ConvLSTM with Auto-Encoder, which is referred to as ConvLSTM-AE, to learn the regularity of appearance and motion for the ordinary moments. Compared with 3D Convolutional Auto-Encoder based anomaly detection, our main contribution lies in that we propose a ConvLSTM-AE framework which better encodes the change of appearance and motion for normal events, respectively. To evaluate our method, we first conduct experiments on a synthesized Moving-MNIST dataset under controlled settings, and results show that our method can easily identify the change of appearance and motion. Extensive experiments on real anomaly datasets further validate the effectiveness of our method for anomaly detection.",
"title": ""
},
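The anomaly detector described above chains a convolutional encoder, a ConvLSTM for temporal memory, and a decoder trained to reconstruct regular clips, so that high reconstruction error flags candidate anomalies. A skeletal Keras version of that encoder-ConvLSTM-decoder idea follows; frame size, filter counts and depth are placeholders, not the paper's configuration.

```python
# Sketch: a ConvLSTM autoencoder that reconstructs clips of frames;
# high reconstruction error at test time flags candidate anomalies.
import tensorflow as tf
from tensorflow.keras import layers, models

T, H, W, C = 10, 64, 64, 1  # clip length and frame size (illustrative)

model = models.Sequential([
    tf.keras.Input(shape=(T, H, W, C)),
    # Spatial encoder applied to every frame.
    layers.TimeDistributed(layers.Conv2D(32, 3, strides=2, padding="same",
                                         activation="relu")),
    # Temporal memory over the encoded frames.
    layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True),
    # Spatial decoder back to the original frame size.
    layers.TimeDistributed(layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                                                  activation="relu")),
    layers.TimeDistributed(layers.Conv2D(C, 3, padding="same", activation="sigmoid")),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
# Training target equals the input clip (reconstruction of "regular" videos).
```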
{
"docid": "056944e9e568d69d5caa707d03353f62",
"text": "Cyberbullying has emerged as a new form of antisocial behaviour in the context of online communication over the last decade. The present study investigates potential longitudinal risk factors for cyberbullying. A total of 835 Swiss seventh graders participated in a short-term longitudinal study (two assessments 6 months apart). Students reported on the frequency of cyberbullying, traditional bullying, rule-breaking behaviour, cybervictirnisation, traditional victirnisation, and frequency of online communication (interpersonal characteristics). In addition, we assessed moral disengagement, empathic concern, and global self-esteem (intrapersonal characteristics). Results showed that traditional bullying, rule-breaking behaviour, and frequency of online communication are longitudinal risk factors for involvement in cyberbullying as a bully. Thus, cyberbullying is strongly linked to real-world antisocial behaviours. Frequent online communication may be seen as an exposure factor that increases the likelihood of engaging in cyberbullying. In contrast, experiences of victimisation and intrapersonal characteristics were not found to increase the longitudinal risk for cyberbullying over and above antisocial behaviour and frequency of online communication. Implications of the findings for the prevention of cyberbullying are discussed. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "64efd590a51fc3cab97c9b4b17ba9b40",
"text": "The problem of detecting bots, automated social media accounts governed by software but disguising as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, or to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amount of social media posts, and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper, we propose a deep neural network based on contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text. Another contribution that we make is proposing a technique based on synthetic minority oversampling to generate a large labeled dataset, suitable for deep nets training, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just one single tweet, our architecture can achieve high classification accuracy (AUC > 96%) in separating bots from humans. We apply the same architecture to account-level bot detection, achieving nearly perfect classification accuracy (AUC > 99%). Our system outperforms previous state of the art while leveraging a small and interpretable set of features yet requiring minimal training data.",
"title": ""
},
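The contextual LSTM above reads the tweet text while user metadata enters through an auxiliary branch. A bare-bones Keras sketch of such a two-branch model follows; vocabulary size, sequence length and the metadata feature count are assumptions, not the authors' configuration.

```python
# Sketch: tweet-level bot detection with an LSTM over token ids plus an
# auxiliary metadata branch (layer sizes are illustrative).
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, META_DIM = 20000, 50, 10

text_in = layers.Input(shape=(SEQ_LEN,), name="token_ids")
meta_in = layers.Input(shape=(META_DIM,), name="user_metadata")

x = layers.Embedding(VOCAB, 128, mask_zero=True)(text_in)
x = layers.LSTM(64)(x)

m = layers.Dense(32, activation="relu")(meta_in)

h = layers.Concatenate()([x, m])
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="bot_probability")(h)

model = Model(inputs=[text_in, meta_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```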
{
"docid": "b31aaa6805524495f57a2f54d0dd86f1",
"text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?",
"title": ""
},
{
"docid": "a64a83791259350d5d76dc1ea097a7fb",
"text": "Today the channels for expressing opinions seem to increase daily. When these opinions are relevant to a company, they are important sources of business insight, whether they represent critical intelligence about a customer's defection risk, the impact of an influential reviewer on other people's purchase decisions, or early feedback on product releases, company news or competitors. Capturing and analyzing these opinions is a necessity for proactive product planning, marketing and customer service and it is also critical in maintaining brand integrity. The importance of harnessing opinion is growing as consumers use technologies such as Twitter to express their views directly to other consumers. Tracking the disparate sources of opinion is hard - but even harder is quickly and accurately extracting the meaning so companies can analyze and act. Tweets' Language is complicated and contextual, especially when people are expressing opinions and requires reliable sentiment analysis based on parsing many linguistic shades of gray. This article argues that using the R programming platform for analyzing tweets programmatically simplifies the task of sentiment analysis and opinion mining. An R programming technique has been used for testing different sentiment lexicons as well as different scoring schemes. Experiments on analyzing the tweets of users over six NHL hockey teams reveals the effectively of using the opinion lexicon and the Latent Dirichlet Allocation (LDA) scoring scheme.",
"title": ""
},
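The study above scores tweets against an opinion lexicon (roughly, positive matches minus negative matches) using R. The same scoring scheme is easy to express in Python; the word lists below are toy stand-ins for a full opinion lexicon.

```python
# Sketch: lexicon-based sentiment scoring of tweets
# (score = positive matches - negative matches), mirroring the
# scoring scheme described above; the word lists are toy examples.
import re

POSITIVE = {"great", "win", "clutch", "amazing", "love"}
NEGATIVE = {"lose", "awful", "choke", "terrible", "hate"}

def score_tweet(text):
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "What a clutch goal, love this team!",
    "Terrible defense again, they always choke.",
]
for t in tweets:
    print(score_tweet(t), t)
```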
{
"docid": "c77c6ea404d9d834ef1be5a1d7222e66",
"text": "We introduce the concepts of regular and totally regular bipolar fuzzy graphs. We prove necessary and sufficient condition under which regular bipolar fuzzy graph and totally bipolar fuzzy graph are equivalent. We introduce the notion of bipolar fuzzy line graphs and present some of their properties. We state a necessary and sufficient condition for a bipolar fuzzy graph to be isomorphic to its corresponding bipolar fuzzy line graph. We examine when an isomorphism between two bipolar fuzzy graphs follows from an isomorphism of their corresponding bipolar fuzzy line graphs.",
"title": ""
},
{
"docid": "88d226d5b10a044a4c368a0a6136e421",
"text": "The areas of machine learning and communication technology are converging. Today’s communications systems generate a huge amount of traffic data, which can help to significantly enhance the design and management of networks and communication components when combined with advanced machine learning methods. Furthermore, recently developed end-to-end training procedures offer new ways to jointly optimize the components of a communication system. Also in many emerging application fields of communication technology, e.g., smart cities or internet of things, machine learning methods are of central importance. This paper gives an overview over the use of machine learning in different areas of communications and discusses two exemplar applications in wireless networking. Furthermore, it identifies promising future research topics and discusses their potential impact.",
"title": ""
},
{
"docid": "85462fe3cf060d7fa85251d5a7d30d1a",
"text": "Validity of PostureScreen Mobile® in the Measurement of Standing Posture Breanna Cristine Berry Hopkins Department of Exercise Sciences, BYU Master of Science Background: PostureScreen Mobile® is an app created to quickly screen posture using front and side-view photographs. There is currently a lack of evidence that establishes PostureScreen Mobile® (PSM) as a valid measure of posture. Therefore, the purpose of this preliminary study was to document the validity and reliability of PostureScreen Mobile® in assessing static standing posture. Methods: This study was an experimental trial in which the posture of 50 male participants was assessed a total of six times using two different methods: PostureScreen Mobile® and Vicon 3D motion analysis system (VIC). Postural deviations, as measured during six trials of PSM assessments (3 trials with and 3 trials without anatomical markers), were compared to the postural deviations as measured using the VIC as the criterion measure. Measurement of lateral displacement on the x-axis (shift) and rotation on the y-axis (tilt) were made of the head, shoulders, and hips in the frontal plane. Measurement of forward/rearward displacement on the Z-axis (shift) of the head, shoulders, hips, and knees were made in the sagittal plane. Validity was evaluated by comparing the PSM measurements of shift and tilt of each body part to that of the VIC. Reliability was evaluated by comparing the variance of PSM measurements to the variance of VIC measurements. The statistical model employed the Bayesian framework and consisted of the scaled product of the likelihood of the data given the parameters and prior probability densities for each of the parameters. Results: PSM tended to overestimate VIC postural tilt and shift measurements in the frontal plane and underestimate VIC postural shift measurements in the sagittal plane. Use of anatomical markers did not universally improve postural measurements with PSM, and in most cases, the variance of postural measurements using PSM exceeded that of VIC. The patterns in the intraclass correlation coefficients (ICC) suggest high trial-to-trial variation in posture. Conclusions: We conclude that until research further establishes the validity and reliability of the PSM app, it should not be used in research or clinical applications when accurate postural assessments are necessary or when serial measurements of posture will be performed. We suggest that the PSM be used by health and fitness professionals as a screening tool, as described by the manufacturer. Due to the suspected trial-to-trial variation in posture, we question the usefulness of a single postural assessment.",
"title": ""
},
{
"docid": "fa005ff6f8f59517f10a5c9808e6549d",
"text": "Traditional approaches to simultaneous localization and mapping (SLAM) rely on low-level geometric features such as points, lines, and planes. They are unable to assign semantic labels to landmarks observed in the environment. Furthermore, loop closure recognition based on low-level features is often viewpoint-dependent and subject to failure in ambiguous or repetitive environments. On the other hand, object recognition methods can infer landmark classes and scales, resulting in a small set of easily recognizable landmarks, ideal for view-independent unambiguous loop closure. In a map with several objects of the same class, however, a crucial data association problem exists. While data association and recognition are discrete problems usually solved using discrete inference, classical SLAM is a continuous optimization over metric information. In this paper, we formulate an optimization problem over sensor states and semantic landmark positions that integrates metric information, semantic information, and data associations, and decompose it into two interconnected problems: an estimation of discrete data association and landmark class probabilities, and a continuous optimization over the metric states. The estimated landmark and robot poses affect the association and class distributions, which in turn affect the robot-landmark pose optimization. The performance of our algorithm is demonstrated on indoor and outdoor datasets.",
"title": ""
},
{
"docid": "f0093159ff25b3c19e9c48d9c09bcad5",
"text": "This article discusses the radiographic manifestation of jaw lesions whose etiology may be traced to underlying systemic disease. Some changes may be related to hematologic or metabolic disorders. A group of bone changes may be associated with disorders of the endocrine system. It is imperative for the clinician to compare the constantly changing and dynamic maxillofacial skeleton to the observed radiographic pathology as revealed on intraoral and extraoral imagery.",
"title": ""
},
{
"docid": "53b32cdb6c3d511180d8cb194c286ef5",
"text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.",
"title": ""
}
] |
scidocsrr
|
c6b856db07d45a093186b5c5a651d2b1
|
BUILDING INFORMATION MODELLING FOR CULTURAL HERITAGE : A REVIEW
|
[
{
"docid": "47cf10951d13e1da241a5551217aa2d5",
"text": "Despite the widespread adoption of building information modelling (BIM) for the design and lifecycle management of new buildings, very little research has been undertaken to explore the value of BIM in the management of heritage buildings and cultural landscapes. To that end, we are investigating the construction of BIMs that incorporate both quantitative assets (intelligent objects, performance data) and qualitative assets (historic photographs, oral histories, music). Further, our models leverage the capabilities of BIM software to provide a navigable timeline that chronicles tangible and intangible changes in the past and projections into the future. In this paper, we discuss three projects undertaken by the authors that explore an expanded role for BIM in the documentation and conservation of architectural heritage. The projects range in scale and complexity and include: a cluster of three, 19th century heritage buildings in the urban core of Toronto, Canada; a 600 hectare village in rural, south-eastern Ontario with significant modern heritage value, and a proposed web-centered BIM database for materials and methods of construction specific to heritage conservation.",
"title": ""
}
] |
[
{
"docid": "a922051835f239db76be1dbb8edead3e",
"text": "Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.",
"title": ""
},
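The passage above motivates weighting each neighbour's vote by its closeness to the query point. A small sketch of one such rule with inverse-distance weights; this particular weighting function is only one of several possibilities discussed in that literature.

```python
# Sketch: k-nearest-neighbour classification with inverse-distance
# vote weighting, so closer neighbours count for more.
import numpy as np
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-9):
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (dists[i] + eps)  # weight by closeness
    return max(votes, key=votes.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(weighted_knn_predict(X, y, np.array([2.5, 2.5])))
```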
{
"docid": "959a43b6b851a4a255466296efac7299",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "3df12301c628a4b1fc9421c80b79b42b",
"text": "Cellular processes can only be understood as the dynamic interplay of molecules. There is a need for techniques to monitor interactions of endogenous proteins directly in individual cells and tissues to reveal the cellular and molecular architecture and its responses to perturbations. Here we report our adaptation of the recently developed proximity ligation method to examine the subcellular localization of protein-protein interactions at single-molecule resolution. Proximity probes—oligonucleotides attached to antibodies against the two target proteins—guided the formation of circular DNA strands when bound in close proximity. The DNA circles in turn served as templates for localized rolling-circle amplification (RCA), allowing individual interacting pairs of protein molecules to be visualized and counted in human cell lines and clinical specimens. We used this method to show specific regulation of protein-protein interactions between endogenous Myc and Max oncogenic transcription factors in response to interferon-γ (IFN-γ) signaling and low-molecular-weight inhibitors.",
"title": ""
},
{
"docid": "993d7ee2498f7b19ae70850026c0a0c4",
"text": "We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish.",
"title": ""
},
{
"docid": "b15078182915859c3eab4b174115cd0f",
"text": "We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.",
"title": ""
},
{
"docid": "0277fd19009088f84ce9f94a7e942bc1",
"text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.",
"title": ""
},
{
"docid": "458e4b5196805b608e15ee9c566123c9",
"text": "For the first half century of animal virology, the major problem was lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK",
"title": ""
},
{
"docid": "e011ab57139a9a2f6dc13033b0ab6223",
"text": "Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.",
"title": ""
},
{
"docid": "3aab2226cfdee4c6446090922fdd4f2d",
"text": "Information system and data mining are important resources for the investors to make decisions. Information theory pointed that the information is increasing all the time, when the corporations build their millions of databases in order to improve the efficiency. Database technology caters to the needs of fully developing the information resources. This essay discusses the problem of decision making support system and the application of business data mining in commercial decision making. It is recommended that the intelligent decision support system should be built. Besides, the business information used in the commercial decision making must follow the framework of a whole system under guideline, which should be designed by the company.",
"title": ""
},
{
"docid": "cd08ec6c25394b3304368952cf4fb99b",
"text": "Recently, several experimental studies have been conducted on block data layout as a data transformation technique used in conjunction with tiling to improve cache performance. In this paper, we provide a theoretical analysis for the TLB and cache performance of block data layout. For standard matrix access patterns, we derive an asymptotic lower bound on the number of TLB misses for any data layout and show that block data layout achieves this bound. We show that block data layout improves TLB misses by a factor of O B compared with conventional data layouts, where B is the block size of block data layout. This reduction contributes to the improvement in memory hierarchy performance. Using our TLB and cache analysis, we also discuss the impact of block size on the overall memory hierarchy performance. These results are validated through simulations and experiments on state-of-the-art platforms.",
"title": ""
},
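Block data layout stores each B x B tile of the matrix contiguously, so a tiled computation touches far fewer pages (and hence TLB entries) per tile than under a row-major layout. A tiny sketch of the index mapping, with the block size B as an illustrative parameter:

```python
# Sketch: mapping a logical (i, j) matrix index to its offset under
# block data layout, where each B x B block is stored contiguously
# in row-major order of blocks and of elements within a block.
def blocked_offset(i, j, n, B):
    bi, bj = i // B, j // B          # which block
    oi, oj = i % B, j % B            # position inside the block
    blocks_per_row = n // B
    return ((bi * blocks_per_row + bj) * B * B) + oi * B + oj

n, B = 8, 4
# All 16 elements of the top-left 4x4 block occupy one contiguous range:
print(sorted(blocked_offset(i, j, n, B) for i in range(4) for j in range(4)))
```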
{
"docid": "1caaac35c25cd9efb729b57e59c41be5",
"text": "The design of elastic file synchronization services like Dropbox is an open and complex issue yet not unveiled by the major commercial providers, as it includes challenges like fine-grained programmable elasticity and efficient change notification to millions of devices. In this paper, we propose a novel architecture for file synchronization which aims to solve the above two major challenges. At the heart of our proposal lies ObjectMQ, a lightweight framework for providing programmatic elasticity to distributed objects using messaging. The efficient use of indirect communication: i) enables programmatic elasticity based on queue message processing, ii) simplifies change notifications offering simple unicast and multicast primitives; and iii) provides transparent load balancing based on queues.\n Our reference implementation is StackSync, an open source elastic file synchronization Cloud service developed in the context of the FP7 project CloudSpaces. StackSync supports both predictive and reactive provisioning policies on top of ObjectMQ that adapt to real traces from the Ubuntu One service. The feasibility of our approach has been extensively validated with an open benchmark, including commercial synchronization services like Dropbox or OneDrive.",
"title": ""
},
{
"docid": "0bc1c637d6f4334dd8a27491ebde40d6",
"text": "Osteoarthritis of the hip describes a clinical syndrome of joint pain accompanied by varying degrees of functional limitation and reduced quality of life. Osteoarthritis may not be progressive and most patients will not need surgery, with their symptoms adequately controlled by non-surgical measures. The treatment of hip osteoarthritis is aimed at reducing pain and stiffness and improving joint mobility. Total hip replacement remains the most effective treatment option but it is a major surgery with potential serious complications. NICE guideline has suggested a holistic approach to management of hip osteoarthritis which includes both nonpharmacological and pharmacological treatments. The non-pharmacological treatments range from education ,physical therapy and behavioral changes ,walking aids .The ESCAPE( Enabling Self-Management and Coping of Arthritic Pain Through Exercise) rehabilitation programme for hip and knee osteoarthritis which integrates simple education, self-management and coping strategies, with an exercise regimen has shown to be more cost-effective than usual care. There is a choice of reviewed pharmacological treatments available, but there are few current reviews of possible nonpharmacological methods. This review will focus on the non-pharmacological and non-surgical methods.",
"title": ""
},
{
"docid": "51f5ba274068c0c03e5126bda056ba98",
"text": "Electricity is conceivably the most multipurpose energy carrier in modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating marketbased approaches. Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects play a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at decision making and planning stage. Modeling and optimization found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospective of optimization modeling as a tool for sustainable energy systems. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "486978346e7a77f66e3ccce6f07fb346",
"text": "In this paper, we present a novel structure, Semi-AutoEncoder, based on AutoEncoder. We generalize it into a hybrid collaborative filtering model for rating prediction as well as personalized top-n recommendations. Experimental results on two real-world datasets demonstrate its state-of-the-art performances.",
"title": ""
},
{
"docid": "16a6c26d6e185be8383c062c6aa620f8",
"text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.",
"title": ""
},
{
"docid": "1fc468d42d432f716b3518dbba268db5",
"text": "In this paper a fast sweeping method for computing the numerical solution of Eikonal equations on a rectangular grid is presented. The method is an iterative method which uses upwind difference for discretization and uses Gauss-Seidel iterations with alternating sweeping ordering to solve the discretized system. The crucial idea is that each sweeping ordering follows a family of characteristics of the corresponding Eikonal equation in a certain direction simultaneously. The method has an optimal complexity of O(N) for N grid points and is extremely simple to implement in any number of dimensions. Monotonicity and stability properties of the fast sweeping algorithm are proven. Convergence and error estimates of the algorithm for computing the distance function is studied in detail. It is shown that 2n Gauss-Seidel iterations is enough for the distance function in n dimensions. An estimation of the number of iterations for general Eikonal equations is also studied. Numerical examples are used to verify the analysis.",
"title": ""
},
{
"docid": "5744e87741b6154b333e0f24bb17f0ea",
"text": "We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams. The first is a collection of curated facts in the form of tables, and the second is a large set of crowd-sourced multiple-choice questions covering the facts in the tables. Through the setup of the crowd-sourced annotation task we obtain implicit alignment information between questions and tables. We envisage that the resources will be useful not only to researchers working on question answering, but also to people investigating a diverse range of other applications such as information extraction, question parsing, answer type identification, and lexical semantic modelling.",
"title": ""
},
{
"docid": "7e6a3a04c24a0fc24012619d60ebb87b",
"text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept. This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.",
"title": ""
},
{
"docid": "5ea5650e03be82a600159c2095c387b6",
"text": "The medicinal plants are widely used by the traditional medicinal practitioners for curing various diseases in their day to day practice. In traditional system of medicine, different parts (leaves, stem, flower, root, seeds and even whole plant) of Ocimum sanctum Linn. have been recommended for the treatment of bronchitis, malaria, diarrhea, dysentery, skin disease, arthritis, eye diseases, insect bites and so on. The O. sanctum L. has also been suggested to possess anti-fertility, anticancer, antidiabetic, antifungal, antimicrobial, cardioprotective, analgesic, antispasmodic and adaptogenic actions. Eugenol (1-hydroxy-2-methoxy-4-allylbenzene), the active constituents present in O. sanctum L. have been found to be largely responsible for the therapeutic potentials. The pharmacological studies reported in the present review confirm the therapeutic value of O. sanctum L. The results of the above studies support the use of this plant for human and animal disease therapy and reinforce the importance of the ethno-botanical approach as a potential source of bioactive substances.",
"title": ""
},
{
"docid": "1830c839960f8ce9b26c906cc21e2a39",
"text": "This comparative review highlights the relationships between the disciplines of bloodstain pattern analysis (BPA) in forensics and that of fluid dynamics (FD) in the physical sciences. In both the BPA and FD communities, scientists study the motion and phase change of a liquid in contact with air, or with other liquids or solids. Five aspects of BPA related to FD are discussed: the physical forces driving the motion of blood as a fluid; the generation of the drops; their flight in the air; their impact on solid or liquid surfaces; and the production of stains. For each of these topics, the relevant literature from the BPA community and from the FD community is reviewed. Comments are provided on opportunities for joint BPA and FD research, and on the development of novel FD-based tools and methods for BPA. Also, the use of dimensionless numbers is proposed to inform BPA analyses.",
"title": ""
}
] |
scidocsrr
|
b8e2d6086985467637691a1160afc12b
|
An activity guideline for technology roadmapping implementation
|
[
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "3d81f003b29ad4cea90a533a002f3082",
"text": "Technology roadmapping is becoming an increasingly important and widespread approach for aligning technology with organizational goals. The popularity of roadmapping is due mainly to the communication and networking benefits that arise from the development and dissemination of roadmaps, particularly in terms of building common understanding across internal and external organizational boundaries. From its origins in Motorola and Corning more than 25 years ago, where it was used to link product and technology plans, the approach has been adapted for many different purposes in a wide variety of sectors and at all levels, from small enterprises to national foresight programs. Building on previous papers presented at PICMET, concerning the rapid initiation of the technique, and how to customize the approach, this paper highlights the evolution and continuing growth of the method and its application to general strategic planning. The issues associated with extending the roadmapping method to form a central element of an integrated strategic planning process are considered.",
"title": ""
}
] |
[
{
"docid": "e8ba260c18576f7f8b9f90afed0348e5",
"text": "This paper is aimed at recognition of offline handwritten characters in a given scanned text document with the help of neural networks. Image preprocessing, segmentation and feature extraction are various phases involved in character recognition. The first step is image acquisition followed by noise filtering, smoothing and image normalization of scanned image. Segmentation decomposes image into sub images and feature extraction extracts features from input image. Neural Network is created and trained to classify and recognize handwritten characters.",
"title": ""
},
{
"docid": "bed6069b49afd9c238267c6a276f1ede",
"text": "Today's top high performance computing systems run applications with hundreds of thousands of processes, contain hundreds of storage nodes, and must meet massive I/O requirements for capacity and performance. These leadership-class systems face daunting challenges to deploying scalable I/O systems. In this paper we present a case study of the I/O challenges to performance and scalability on Intrepid, the IBM Blue Gene/P system at the Argonne Leadership Computing Facility. Listed in the top 5 fastest supercomputers of 2008, Intrepid runs computational science applications with intensive demands on the I/O system. We show that Intrepid's file and storage system sustain high performance under varying workloads as the applications scale with the number of processes.",
"title": ""
},
{
"docid": "58612d7c22f6bd0bf1151b7ca5da0f7c",
"text": "In this paper we present a novel method for clustering words in micro-blogs, based on the similarity of the related temporal series. Our technique, named SAX*, uses the Symbolic Aggregate ApproXimation algorithm to discretize the temporal series of terms into a small set of levels, leading to a string for each. We then define a subset of “interesting” strings, i.e. those representing patterns of collective attention. Sliding temporal windows are used to detect co-occurring clusters of tokens with the same or similar string. To assess the performance of the method we first tune the model parameters on a 2-month 1 % Twitter stream, during which a number of world-wide events of differing type and duration (sports, politics, disasters, health, and celebrities) occurred. Then, we evaluate the quality of all discovered events in a 1-year stream, “googling” with the most frequent cluster n-grams and manually assessing how many clusters correspond to published news in the same temporal slot. Finally, we perform a complexity evaluation and we compare SAX* with three alternative methods for event discovery. Our evaluation shows that SAX* is at least one order of magnitude less complex than other temporal and non-temporal approaches to micro-blog clustering.",
"title": ""
},
{
"docid": "0857e32201b675c3e971c6caba8d2087",
"text": "Western tonal music relies on a formal geometric structure that determines distance relationships within a harmonic or tonal space. In functional magnetic resonance imaging experiments, we identified an area in the rostromedial prefrontal cortex that tracks activation in tonal space. Different voxels in this area exhibited selectivity for different keys. Within the same set of consistently activated voxels, the topography of tonality selectivity rearranged itself across scanning sessions. The tonality structure was thus maintained as a dynamic topography in cortical areas known to be at a nexus of cognitive, affective, and mnemonic processing.",
"title": ""
},
{
"docid": "5f17432d235a991a5544ad794875a919",
"text": "We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially observable Markov decision processes (POMDPs) provide a rich mathematical model to handle such environments but require a known model to be solved by most approaches. This is a limitation in practice as the exact model parameters are often difficult to specify exactly. We adopt a Bayesian approach where a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions which optimally trade-off between 1) exploring the environment to learn the model, 2) identifying the system's state, and 3) exploiting its knowledge in order to maximize long-term rewards. Our preliminary results on a simulated robot navigation problem show that our approach is able to learn good models of the sensors and actuators, and performs as well as if it had the true model.",
"title": ""
},
{
"docid": "73a5466e9e471a015c601f75d2147ace",
"text": "In this paper we have proposed, developed and tested a hardware module based on Arduino Uno Board and Zigbee wireless technology, which measures the meteorological data, including air temperature, dew point temperature, barometric pressure, relative humidity, wind speed and wind direction. This information is received by a specially designed application interface running on a PC connected through Zigbee wireless link. The proposed system is also a mathematical model capable of generating short time local alerts based on the current weather parameters. It gives an on line and real time effect. We have also compared the data results of the proposed system with the data values of Meteorological Station Chandigarh and Snow & Avalanche Study Establishment Chandigarh Laboratory. The results have come out to be very precise. The idea behind to this work is to monitor the weather parameters, weather forecasting, condition mapping and warn the people from its disastrous effects.",
"title": ""
},
{
"docid": "28ba1eddc74c930350e1b2df5931fa39",
"text": "In this paper, the problem of how to implement the MTPA/MTPV control for an energy efficient operation of a high speed Interior Permanent Magnet Synchronous Motor (IPMSM) used as traction drive is considered. This control method depends on the inductances Ld, Lq, the flux linkage ΨPM and the stator resistance Rs which might vary during operation. The parameter variation causes miscalculation of the set point currents Id and Iq for the inner current control system and thus a wrong torque will be set. Consequently the IPMSM will not be operating in the optimal operation point which yields to a reduction of the total energy efficiency and the performance. As a consequence, this paper proposes the implementation of the the Recursive Least Square Estimation (RLS) for a high speed and high performance IPMSM. With this online identification method the variable parameters are estimated and adapted to the MTPA and MTPV control strategy.",
"title": ""
},
{
"docid": "07d8df7d895f0af5e76bd0d5980055fb",
"text": "Debate over euthanasia is not a recent phenomenon. Over the years, public opinion, decisions of courts, and legal and medical approaches to the issue of euthanasia has been conflicting. The connection between murder and euthanasia has been attempted in a few debates. Although it is widely accepted that murder is a crime, a clearly defined stand has not been taken on euthanasia. This article considers euthanasia from the medical, legal, and global perspectives and discusses the crime of murder in relation to euthanasia, taking into consideration the issue of consent in the law of crime. This article concludes that in the midst of this debate on euthanasia and murder, the important thing is that different countries need to find their own solution to the issue of euthanasia rather than trying to import solutions from other countries.",
"title": ""
},
{
"docid": "aaafdd0e0690fc253ecc9c0059b0d417",
"text": "With the discovery of the polymerase chain reaction (PCR) in the mid-1980's, the last in a series of critical molecular biology techniques (to include the isolation of DNA from human and non-human biological material, and primary sequence analysis of DNA) had been developed to rapidly analyze minute quantities of mitochondrial DNA (mtDNA). This was especially true for mtDNA isolated from challenged sources, such as ancient or aged skeletal material and hair shafts. One of the beneficiaries of this work has been the forensic community. Over the last decade, a significant amount of research has been conducted to develop PCR-based sequencing assays for the mtDNA control region (CR), which have subsequently been used to further characterize the CR. As a result, the reliability of these assays has been investigated, the limitations of the procedures have been determined, and critical aspects of the analysis process have been identified, so that careful control and monitoring will provide the basis for reliable testing. With the application of these assays to forensic identification casework, mtDNA sequence analysis has been properly validated, and is a reliable procedure for the examination of biological evidence encountered in forensic criminalistic cases.",
"title": ""
},
{
"docid": "6a383d8026b500d3365f3a668bafc732",
"text": "In the era of deep sub-wavelength lithography for nanometer VLSI designs, manufacturability and yield issues are critical and need to be addressed during the key physical design implementation stage, in particular detailed routing. However, most existing studies for lithography-friendly routing suffer from either huge run-time due to the intensive lithographic computations involved, or severe loss of quality of results because of the inaccurate predictive models. In this paper, we propose AENEID - a fast, generic and high performance lithography-friendly detailed router for enhanced manufacturability. AENEID combines novel hotspot detection and routing path prediction techniques through modern data learning methods and applies them at the detailed routing stage to drive high fidelity lithography-friendly routing. Compared with existing litho-friendly routing works, AENEID demonstrates 26% to 66% (avg. 50%) of lithography hotspot reduction at the cost of only 18%-38% (avg. 30%) of run-time overhead.",
"title": ""
},
{
"docid": "fad8cf15678cccbc727e9fba6292474d",
"text": "OBJECTIVE\nClinical records contain significant medical information that can be useful to researchers in various disciplines. However, these records also contain personal health information (PHI) whose presence limits the use of the records outside of hospitals. The goal of de-identification is to remove all PHI from clinical records. This is a challenging task because many records contain foreign and misspelled PHI; they also contain PHI that are ambiguous with non-PHI. These complications are compounded by the linguistic characteristics of clinical records. For example, medical discharge summaries, which are studied in this paper, are characterized by fragmented, incomplete utterances and domain-specific language; they cannot be fully processed by tools designed for lay language.\n\n\nMETHODS AND RESULTS\nIn this paper, we show that we can de-identify medical discharge summaries using a de-identifier, Stat De-id, based on support vector machines and local context (F-measure=97% on PHI). Our representation of local context aids de-identification even when PHI include out-of-vocabulary words and even when PHI are ambiguous with non-PHI within the same corpus. Comparison of Stat De-id with a rule-based approach shows that local context contributes more to de-identification than dictionaries combined with hand-tailored heuristics (F-measure=85%). Comparison with two well-known named entity recognition (NER) systems, SNoW (F-measure=94%) and IdentiFinder (F-measure=36%), on five representative corpora show that when the language of documents is fragmented, a system with a relatively thorough representation of local context can be a more effective de-identifier than systems that combine (relatively simpler) local context with global context. Comparison with a Conditional Random Field De-identifier (CRFD), which utilizes global context in addition to the local context of Stat De-id, confirms this finding (F-measure=88%) and establishes that strengthening the representation of local context may be more beneficial for de-identification than complementing local with global context.",
"title": ""
},
{
"docid": "8e02a76799f72d86e7240384bea563fd",
"text": "We have developed the suspended-load backpack, which converts mechanical energy from the vertical movement of carried loads (weighing 20 to 38 kilograms) to electricity during normal walking [generating up to 7.4 watts, or a 300-fold increase over previous shoe devices (20 milliwatts)]. Unexpectedly, little extra metabolic energy (as compared to that expended carrying a rigid backpack) is required during electricity generation. This is probably due to a compensatory change in gait or loading regime, which reduces the metabolic power required for walking. This electricity generation can help give field scientists, explorers, and disaster-relief workers freedom from the heavy weight of replacement batteries and thereby extend their ability to operate in remote areas.",
"title": ""
},
{
"docid": "372f54e1aa5901c53b76939e9572ab74",
"text": "-We develop a technique to test the hypothesis that multilavered../~'ed@~rward network,~ with [~'w units on the .Drst hidden layer ,~eneralize better than networks with many ttllits in the ~irst laver. Large networks are trained to per/orrn a class![)cation task and the redundant units are removed (\"pruning\") to produce the smallest network capable of'perf'orming the task. A teclmiqtte ,/~r inserting layers u'here /~rtttlitlg has introduced linear inseparability is also described. Two tests Of abilio' to generalize are used--the ability to classiflv training inputs corrupwd hv noise and the ability to classtlflv new patterns/)ore each class. The hypothes'is is f?~ltnd to be ,fa{s'e f~>r networks trained with noisy input.s'. Pruning to the mittitnum nt#nber c~f units in the ./irvt layer produces twtworks which correctly classify the training ,set hut j,,eneralize poor O' compared with lar~er ttetworks. Keywords--Neural Networks, Back-propagation, Pattern recognition, Generalization, t|idden units. Pruning. I N T R O D U C T I O N One of the major strengths of artificial neural networks is their ability to recognize or correctly classify patterns which have never been presented to the network before. Neural networks appear unique in their ability to extract the essential features from a training set and use them to identify new inputs. It is not known how the size or structure of a network affects this quality. This work concerns layered, feed-forward networks learning a classification task by back-propagation. Our desire was to investigate the relationship between network structure and the ability of the network to generalize from the training set. We examined the effect of noise in the training set on network structure and generalization, and we examined the effects of network size. This second could not be done simply by training networks of different sizes, since trained networks are not necessarily making effective use of all their hidden units. To address the question of the effective size of a network, we have been developing a technique of training a network which is known or suspected to be larger than required and then trimming off excess units to obtain the smallest network (Sietsma & Dow, Acknowledgement : The authors wish to thank Mr. David A. Penington for his valuable contributions, both in ideas and in excellent computer programs. Requests for reprints should be sent to J. Sietsma, USD, Material Research Laboratory, P.O. Box 50, Ascot Vale, Victoria 3032, Australia. 1988). This is slower than training the \"right'\" size network from the start, but it proves to have a number of advantages. When we started we had some assumptions about the relationship between size and ability to generalize, which we wished to test. If a network can be trained successfully with a small number of units on the first processing layer, these units must be extracting features of the classes which can be compactly expressed by the units and interpreted by the higher layers. This compact coding might be expected to imply good generalization performance. To have many units in a layer can allow a network to become overspecific, approximating a look-up table, particularly in the extreme where the number of units in the first processing layer is equal to the number of examplars in the training set. It has been suggested that networks with n'.,we layers, and fewer units in the early layers, may generalize better than \"'shallow\" networks with many units in each layer (Rumelhart, 1988). 
However, narrow networks with many layers are far harder to train than broad networks of one or two layers. Sometimes we found that after rigorous removal of inessential units, more layers were required to perform the task. This suggested a way of producing long, narrow networks. A broad network could be trained, trimmed to the fewest possible units, and then extra layers inserted to enable the network to relearn the solution. This avoids both the training difficulties and the problem that the smallest number of units needed for a task is not generally known. We could then test",
"title": ""
},
{
"docid": "f06d083ebd1449b1fd84e826898c2fda",
"text": "The resolution of any linear imaging system is given by its point spread function (PSF) that quantifies the blur of an object point in the image. The sharper the PSF, the better the resolution is. In standard fluorescence microscopy, however, diffraction dictates a PSF with a cigar-shaped main maximum, called the focal spot, which extends over at least half the wavelength of light (λ = 400–700 nm) in the focal plane and >λ along the optical axis (z). Although concepts have been developed to sharpen the focal spot both laterally and axially, none of them has reached their ultimate goal: a spherical spot that can be arbitrarily downscaled in size. Here we introduce a fluorescence microscope that creates nearly spherical focal spots of 40–45 nm (λ/16) in diameter. Fully relying on focused light, this lens-based fluorescence nanoscope unravels the interior of cells noninvasively, uniquely dissecting their sub-λ–sized organelles.",
"title": ""
},
{
"docid": "ca4752a75f440dda1255a71764258a51",
"text": "Neurofeedback is a method for using neural activity displayed on a computer to regulate one's own brain function and has been shown to be a promising technique for training individuals to interact with brain-machine interface applications such as neuroprosthetic limbs. The goal of this study was to develop a user-friendly functional near-infrared spectroscopy (fNIRS)-based neurofeedback system to upregulate neural activity associated with motor imagery, which is frequently used in neuroprosthetic applications. We hypothesized that fNIRS neurofeedback would enhance activity in motor cortex during a motor imagery task. Twenty-two participants performed active and imaginary right-handed squeezing movements using an elastic ball while wearing a 98-channel fNIRS device. Neurofeedback traces representing localized cortical hemodynamic responses were graphically presented to participants in real time. Participants were instructed to observe this graphical representation and use the information to increase signal amplitude. Neural activity was compared during active and imaginary squeezing with and without neurofeedback. Active squeezing resulted in activity localized to the left premotor and supplementary motor cortex, and activity in the motor cortex was found to be modulated by neurofeedback. Activity in the motor cortex was also shown in the imaginary squeezing condition only in the presence of neurofeedback. These findings demonstrate that real-time fNIRS neurofeedback is a viable platform for brain-machine interface applications.",
"title": ""
},
{
"docid": "230924b74e7492d9999c1b2a134deac3",
"text": "The name ambiguity problem presents many challenges for scholar finding, citation analysis and other related research fields. To attack this issue, various disambiguation methods combined with separate disambiguation features have been put forward. In this paper, we offer an unsupervised Dempster–Shafer theory (DST) based hierarchical agglomerative clustering algorithm for author disambiguation tasks. Distinct from existing methods, we exploit the DST in combination with Shannon’s entropy to fuse various disambiguation features and come up with a more reliable candidate pair of clusters for amalgamation in each iteration of clustering. Also, some solutions to determine the convergence condition of the clustering process are proposed. Depending on experiments, our method outperforms three unsupervised models, and achieves comparable performances to a supervised model, while does not prescribe any hand-labelled training data.",
"title": ""
},
{
"docid": "082b1c341435ce93cfab869475ed32bd",
"text": "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Ω(k2) / Ω(k5/4) / Ω(k6/5) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k2) non-terminals, and any planar graph admits a minor with 1 + ε distortion and Õ((k/ε)2) non-terminals. 1998 ACM Subject Classification G.2.2 Graph Theory",
"title": ""
},
{
"docid": "88520d58d125e87af3d5ea6bb4335c4f",
"text": "We present an algorithm for marker-less performance capture of interacting humans using only three hand-held Kinect cameras. Our method reconstructs human skeletal poses, deforming surface geometry and camera poses for every time step of the depth video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Only the combination of geometric and photometric correspondences and the integration of human pose and camera pose estimation enables reliable performance capture with only three sensors. As opposed to previous performance capture methods, our algorithm succeeds on general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.",
"title": ""
},
{
"docid": "2a6aa350dd7ddc663aaaafe4d745845e",
"text": "Neural networks augmented with external memory have the ability to learn algorithmic solutions to complex tasks. These models appear promising for applications such as language modeling and machine translation. However, they scale poorly in both space and time as the amount of memory grows — limiting their applicability to real-world domains. Here, we present an end-to-end differentiable memory access scheme, which we call Sparse Access Memory (SAM), that retains the representational power of the original approaches whilst training efficiently with very large memories. We show that SAM achieves asymptotic lower bounds in space and time complexity, and find that an implementation runs 1,000⇥ faster and with 3,000⇥ less physical memory than non-sparse models. SAM learns with comparable data efficiency to existing models on a range of synthetic tasks and one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s of time steps and memories. As well, we show how our approach can be adapted for models that maintain temporal associations between memories, as with the recently introduced Differentiable Neural Computer.",
"title": ""
},
{
"docid": "b1845c42902075de02c803e77345a30f",
"text": "Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from taskspecific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On labeled examples, standard supervised learning is used. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input. Since the auxiliary modules and the full model share intermediate representations, this in turn improves the full model. Moreover, we show that CVT is particularly effective when combined with multitask learning. We evaluate CVT on five sequence tagging tasks, machine translation, and dependency parsing, achieving state-of-the-art results.1",
"title": ""
}
] |
scidocsrr
|
f2cf673fdb691fb7a3f142338ff21b81
|
Measuring Online Learning Systems Success: Applying the Updated DeLone and McLean Model
|
[
{
"docid": "a8699e1ed8391e5a55fbd79ae3ac0972",
"text": "The benefits of an e-learning system will not be maximized unless learners use the system. This study proposed and tested alternative models that seek to explain student intention to use an e-learning system when the system is used as a supplementary learning tool within a traditional class or a stand-alone distance education method. The models integrated determinants from the well-established technology acceptance model as well as system and participant characteristics cited in the research literature. Following a demonstration and use phase of the e-learning system, data were collected from 259 college students. Structural equation modeling provided better support for a model that hypothesized stronger effects of system characteristics on e-learning system use. Implications for both researchers and practitioners are discussed. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
}
] |
[
{
"docid": "73d09f005f9335827493c3c47d02852b",
"text": "Multiprotocol Label Switched Networks need highly intelligent controls to manage high volume traffic due to issues of traffic congestion and best path selection. The work demonstrated in this paper shows results from simulations for building optimal fuzzy based algorithm for traffic splitting and congestion avoidance. The design and implementation of Fuzzy based software defined networking is illustrated by introducing the Fuzzy Traffic Monitor in an ingress node. Finally, it displays improvements in the terms of mean delay (42.0%) and mean loss rate (2.4%) for Video Traffic. Then, the resu1t shows an improvement in the terms of mean delay (5.4%) and mean loss rate (3.4%) for Data Traffic and an improvement in the terms of mean delay(44.9%) and mean loss rate(4.1%) for Voice Traffic as compared to default MPLS implementation. Keywords—Multiprotocol Label Switched Networks; Fuzzy Traffic Monitor; Network Simulator; Ingress; Traffic Splitting; Fuzzy Logic Control System; Label setup System; Traffic Splitting System",
"title": ""
},
{
"docid": "0241cef84d46b942ee32fc7345874b90",
"text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.",
"title": ""
},
{
"docid": "f66dfbbd6d2043744d32b44dba145ef2",
"text": "Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge for traditional collaborative filtering-based recommender systems. The problem becomes more challenging when people travel to a new city where they have no activity history.\n In this paper, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item co-occurrence patterns and exploiting item contents. The online recommendation part automatically combines the learnt interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up this online process, a scalable query processing technique is developed by extending the classic Threshold Algorithm (TA). We evaluate the performance of our recommender system on two large-scale real data sets, DoubanEvent and Foursquare. The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency.",
"title": ""
},
{
"docid": "7c0b7d55abdd6cce85730dbf1cd02109",
"text": "Suppose fx, h , ■ • ■ , fk are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h\\ , h2, •• -, A* respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(fx ,f2, • • • ,fk ; N) denote the number of positive integers n between 1 and N inclusive such that /i(n), f2(n), • ■ ■ , fk(n) are all primes. (We ignore the finitely many values of n for which some /,(n) is negative.) Then heuristically we would expect to have for N large",
"title": ""
},
{
"docid": "f555a50f629bd9868e1be92ebdcbc154",
"text": "The transformation of traditional energy networks to smart grids revolutionizes the energy industry in terms of reliability, performance, and manageability by providing bi-directional communications to operate, monitor, and control power flow and measurements. However, communication networks in smart grid bring increased connectivity with increased severe security vulnerabilities and challenges. Smart grid can be a prime target for cyber terrorism because of its critical nature. As a result, smart grid security is already getting a lot of attention from governments, energy industries, and consumers. There have been several research efforts for securing smart grid systems in academia, government and industries. This article provides a comprehensive study of challenges in smart grid security, which we concentrate on the problems and proposed solutions. Then, we outline current state of the research and future perspectives.With this article, readers can have a more thorough understanding of smart grid security and the research trends in this topic.",
"title": ""
},
{
"docid": "ca51d7c9c4a764dbb2f8f01adf3f3b5a",
"text": "Detecting carried objects is one of the requirements for developing systems to reason about activities involving people and objects. We present an approach to detect carried objects from a single video frame with a novel method that incorporates features from multiple scales. Initially, a foreground mask in a video frame is segmented into multi-scale superpixels. Then the human-like regions in the segmented area are identified by matching a set of extracted features from superpixels against learned features in a codebook. A carried object probability map is generated using the complement of the matching probabilities of superpixels to human-like regions and background information. A group of superpixels with high carried object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and results show that our method is competitive with or better than the state-of-the-art.",
"title": ""
},
{
"docid": "b8de76afab03ad223fb4713b214e3fec",
"text": "Companies facing new requirements for governance are scrambling to buttress financial-reporting systems, overhaul board structures--whatever it takes to comply. But there are limits to how much good governance can be imposed from the outside. Boards know what they ought to be: seats of challenge and inquiry that add value without meddling and make CEOs more effective but not all-powerful. A board can reach that goal only if it functions as a high-performance team, one that is competent, coordinated, collegial, and focused on an unambiguous goal. Such entities don't just evolve; they must be constructed to an exacting blueprint--what the author calls board building. In this article, Nadler offers an agenda and a set of tools that boards can use to define and achieve their objectives. It's important for a board to conduct regular self-assessments and to pay attention to the results of those analyses. As a first step, the directors and the CEO should agree on which of the following common board models best fits the company: passive, certifying, engaged, intervening, or operating. The directors and the CEO should then analyze which business tasks are most important and allot sufficient time and resources to them. Next, the board should take inventory of each director's strengths to ensure that the group as a whole possesses the skills necessary to do its work. Directors must exert more influence over meeting agendas and make sure they have the right information at the right time and in the right format to perform their duties. Finally, the board needs to foster an engaged culture characterized by candor and a willingness to challenge. An ambitious board-building process, devised and endorsed both by directors and by management, can potentially turn a good board into a great one.",
"title": ""
},
{
"docid": "dde00778c4d9a3123317840eb001df54",
"text": "The ability to generate heat under an alternating magnetic field (AMF) makes magnetic iron oxide nanoparticles (MIONs) an ideal heat source for biomedical applications including cancer thermoablative therapy, tissue preservation, and remote control of cell function. However, there is a lack of quantitative understanding of the mechanisms governing heat generation of MIONs, and the optimal nanoparticle size for magnetic fluid heating (MFH) applications. Here, we show that MIONs with large sizes (>20 nm) have a specific absorption rate (SAR) significantly higher than that predicted by the widely used linear theory of MFH. The heating efficiency of MIONs in both the superparamagnetic and ferromagnetic regimes increased with size, which can be accurately characterized with a modified dynamic hysteresis model. In particular, the 40 nm ferromagnetic nanoparticles have an SAR value approaching the theoretical limit under a clinically relevant AMF. An in vivo study further demonstrated that the 40 nm MIONs could effectively heat tumor tissues at a minimal dose. Our experimental results and theoretical analysis on nanoparticle heating offer important insight into the rationale design of MION-based MFH for therapeutic applications.",
"title": ""
},
{
"docid": "b08027d8febf1d7f8393b9934739847d",
"text": "Sarcasm is generally characterized as a figure of speech that involves the substitution of a literal by a figurative meaning, which is usually the opposite of the original literal meaning. We re-frame the sarcasm detection task as a type of word sense disambiguation problem, where the sense of a word is either literal or sarcastic. We call this the Literal/Sarcastic Sense Disambiguation (LSSD) task. We address two issues: 1) how to collect a set of target words that can have either literal or sarcastic meanings depending on context; and 2) given an utterance and a target word, how to automatically detect whether the target word is used in the literal or the sarcastic sense. For the latter, we investigate several distributional semantics methods and show that a Support Vector Machines (SVM) classifier with a modified kernel using word embeddings achieves a 7-10% F1 improvement over a strong lexical baseline.",
"title": ""
},
{
"docid": "92e150f30ae9ef371ffdd7160c84719d",
"text": "Categorization is a vitally important skill that people use every day. Early theories of category learning assumed a single learning system, but recent evidence suggests that human category learning may depend on many of the major memory systems that have been hypothesized by memory researchers. As different memory systems flourish under different conditions, an understanding of how categorization uses available memory systems will improve our understanding of a basic human skill, lead to better insights into the cognitive changes that result from a variety of neurological disorders, and suggest improvements in training procedures for complex categorization tasks.",
"title": ""
},
{
"docid": "289694f2395a6a2afc7d86d475b9c02d",
"text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.",
"title": ""
},
{
"docid": "b32218abeff9a34c3e89eac76b8c6a45",
"text": "The reliability and availability of distributed services can be ensured using replication. We present an architecture and an algorithm for Byzantine fault-tolerant state machine replication. We explore the benefits of virtualization to reliably detect and tolerate faulty replicas, allowing the transformation of Byzantine faults into omission faults. Our approach reduces the total number of physical replicas from 3f+1 to 2f+1. It is based on the concept of twin virtual machines, which involves having two virtual machines in each physical host, each one acting as failure detector of the other.",
"title": ""
},
{
"docid": "a3f06bfcc2034483cac3ee200803878c",
"text": "This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.",
"title": ""
},
{
"docid": "8d9a55b7d730d9acbff50aef4f55808b",
"text": "Interactions between light and matter can be dramatically modified by concentrating light into a small volume for a long period of time. Gaining control over such interaction is critical for realizing many schemes for classical and quantum information processing, including optical and quantum computing, quantum cryptography, and metrology and sensing. Plasmonic structures are capable of confining light to nanometer scales far below the diffraction limit, thereby providing a promising route for strong coupling between light and matter, as well as miniaturization of photonic circuits. At the same time, however, the performance of plasmonic circuits is limited by losses and poor collection efficiency, presenting unique challenges that need to be overcome for quantum plasmonic circuits to become a reality. In this paper, we survey recent progress in controlling emission from quantum emitters using plasmonic structures, as well as efforts to engineer surface plasmon propagation and design plasmonic circuits using these elements.",
"title": ""
},
{
"docid": "670b35833f96a62bce9e2ddd58081fc4",
"text": "Although video summarization has achieved great success in recent years, few approaches have realized the influence of video structure on the summarization results. As we know, the video data follow a hierarchical structure, i.e., a video is composed of shots, and a shot is composed of several frames. Generally, shots provide the activity-level information for people to understand the video content. While few existing summarization approaches pay attention to the shot segmentation procedure. They generate shots by some trivial strategies, such as fixed length segmentation, which may destroy the underlying hierarchical structure of video data and further reduce the quality of generated summaries. To address this problem, we propose a structure-adaptive video summarization approach that integrates shot segmentation and video summarization into a Hierarchical Structure-Adaptive RNN, denoted as HSA-RNN. We evaluate the proposed approach on four popular datasets, i.e., SumMe, TVsum, CoSum and VTW. The experimental results have demonstrated the effectiveness of HSA-RNN in the video summarization task.",
"title": ""
},
{
"docid": "c70383b0a3adb6e697932ef4b02877ac",
"text": "Betweenness centrality (BC) is a crucial graph problem that measures the significance of a vertex by the number of shortest paths leading through it. We propose Maximal Frontier Betweenness Centrality (MFBC): a succinct BC algorithm based on novel sparse matrix multiplication routines that performs a factor of p1/3 less communication on p processors than the best known alternatives, for graphs with n vertices and average degree k = n/p2/3. We formulate, implement, and prove the correctness of MFBC for weighted graphs by leveraging monoids instead of semirings, which enables a surprisingly succinct formulation. MFBC scales well for both extremely sparse and relatively dense graphs. It automatically searches a space of distributed data decompositions and sparse matrix multiplication algorithms for the most advantageous configuration. The MFBC implementation outperforms the well-known CombBLAS library by up to 8x and shows more robust performance. Our design methodology is readily extensible to other graph problems.",
"title": ""
},
{
"docid": "1bb7c5d71db582329ad8e721fdddb0b3",
"text": "The sharing economy is spreading rapidly worldwide in a number of industries and markets. The disruptive nature of this phenomenon has drawn mixed responses ranging from active conflict to adoption and assimilation. Yet, in spite of the growing attention to the sharing economy, we still do not know much about it. With the abundant enthusiasm about the benefits that the sharing economy can unleash and the weekly reminders about its dark side, further examination is required to determine the potential of the sharing economy while mitigating its undesirable side effects. The panel will join the ongoing debate about the sharing economy and contribute to the discourse with insights about how digital technologies are critical in shaping this turbulent ecosystem. Furthermore, we will define an agenda for future research on the sharing economy as it becomes part of the mainstream society as well as part of the IS research",
"title": ""
},
{
"docid": "401ae8d7243fa09d3dd358237f0c64f9",
"text": "We introduce a novel information-theoretic approach for active model selection and demonstrate its effectiveness in a real-world application. Although our method can work with arbitrary models, we focus on actively learning the appropriate structure for Gaussian process (GP) models with arbitrary observation likelihoods. We then apply this framework to rapid screening for noise-induced hearing loss (NIHL), a widespread and preventible disability, if diagnosed early. We construct a GP model for pure-tone audiometric responses of patients with NIHL. Using this and a previously published model for healthy responses, the proposed method is shown to be capable of diagnosing the presence or absence of NIHL with drastically fewer samples than existing approaches. Further, the method is extremely fast and enables the diagnosis to be performed in real time.",
"title": ""
},
{
"docid": "32317e5403d75ccc5f2725991f281874",
"text": "Background: Knowing the cultural factors existing behind health behaviors is important for improving the acceptance of services and for elevating the quality of service. Objectives: This study was conducted for the purpose of evaluating the effect of cultural characteristics on use of health care services using the “Giger and Davidhizar’s Transcultural Assessment Model”. Methods: The research is qualitative. The study group was 31 individuals who volunteered to participate in the study and living in a rural area. The snowball method was used. Data were collected in 2005. Results: Limitations/obstacles to the use of health care services were the widespread gender, use of traditional treatment methods, a high level of environmental control, and a fatalistic attitude about health. Conclusion: According to the results the most important limitation/obstacle to using health care services was being a woman.",
"title": ""
}
] |
scidocsrr
|
ae036b2fdd01807e326000d60af3fb17
|
EGameFlow: A scale to measure learners' enjoyment of e-learning games
|
[
{
"docid": "ef8d88d57858706ba269a8f3aaa989f3",
"text": "The mid 20 century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.",
"title": ""
},
{
"docid": "ff1c33b797861cde34b8705c1136912b",
"text": "This workshop addresses current needs in the games developers' community and games industry to evaluate the overall user experience of games. New forms of interaction techniques, like gestures, eye-tracking or even bio-physiological input and feedback present the limits of current evaluation methods for user experience, and even standard usability evaluation used during game development. This workshop intends to bring together practitioners and researchers sharing their experiences using methods from HCI to explore and measure usability and user experience in games. To this workshop we also invite contributions from other disciplines (especially from the games industry) showing new concepts for user experience evaluation.",
"title": ""
}
] |
[
{
"docid": "08b2b3539a1b10f7423484946121ed50",
"text": "BACKGROUND\nCatheter ablation of persistent atrial fibrillation yields an unsatisfactorily high number of failures. The hybrid approach has recently emerged as a technique that overcomes the limitations of both surgical and catheter procedures alone.\n\n\nMETHODS AND RESULTS\nWe investigated the sequential (staged) hybrid method, which consists of a surgical thoracoscopic radiofrequency ablation procedure followed by radiofrequency catheter ablation 6 to 8 weeks later using the CARTO 3 mapping system. Fifty consecutive patients (mean age 62±7 years, 32 males) with long-standing persistent atrial fibrillation (41±34 months) and a dilated left atrium (>45 mm) were included and prospectively followed in an unblinded registry. During the electrophysiological part of the study, all 4 pulmonary veins were found to be isolated in 36 (72%) patients and a complete box-lesion was confirmed in 14 (28%) patients. All gaps were successfully re-ablated. Twelve months after the completed hybrid ablation, 47 patients (94%) were in normal sinus rhythm (4 patients with paroxysmal atrial fibrillation required propafenone and 1 patient underwent a redo catheter procedure). The majority of arrhythmias recurred during the first 3 months. Beyond 12 months, there were no arrhythmia recurrences detected. The surgical part of the procedure was complicated by 7 (13.7%) major complications, while no serious adverse events were recorded during the radiofrequency catheter part of the procedure.\n\n\nCONCLUSIONS\nThe staged hybrid epicardial-endocardial treatment of long-standing persistent atrial fibrillation seems to be extremely effective in maintenance of normal sinus rhythm compared to radiofrequency catheter or surgical ablation alone. Epicardial ablation alone cannot guarantee durable transmural lesions.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: www.ablace.cz Unique identifier: cz-060520121617.",
"title": ""
},
{
"docid": "65b9bef6e27683257a67e75a51a47ea0",
"text": "This paper describes a conceptual approach to individual and organizational competencies needed for Open Innovation (OI) using a new ambidexterity model. It starts from the assumption that the entire innovation process is rarely open by all means, as the OI concept may suggest. It rather takes into consideration that in practice especially for early phases of the innovation process the organization and their innovation actors are opening up for new ways of joint ideation, collaboration etc. to gain a maximum of explorative performance and effectiveness. Though, when it comes to committing considerable resources to development and implementation activities, the innovation process usually closes step by step as efficiency criteria gain ground for a maximum of knowledge exploitation. The ambidexterity model of competences for OI refers to these tensions and provides a new framework to understand the needs of industry and Higher Education Institutes (HEI) to develop appropriate exploration and exploitation competencies for OI.",
"title": ""
},
{
"docid": "fcccb84e3a26ed0acf53bac35ae466ea",
"text": "In this paper, we introduce a vision for Semantic Web services which combines the growing Web services architecture and the Semantic Web and we will propose DAML-S as a prototypical example of an ontology for describing Semantic Web services. Furthermore, we show that DAML-S is not just an abstract description, but it can be efficiently implemented to support capability matching and to manage interaction between Web services. Specifically, we will describe the implementation of the DAML-S/UDDI Matchmaker that expands on UDDI by providing semantic capability matching, and we will present the DAML-S Virtual Machine that uses the DAML-S Process Model to manage the interaction with Web service. We will also show that the use of DAML-S does not produce a performance penalty during the normal operation of Web services. © 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b6d853e456003da0978bafd8153511ec",
"text": "Bitcoin is gaining increasing adoption and popularity nowadays. In spite of its reliance on pseudonyms, Bitcoin raises a number of privacy concerns due to the fact that all of the transactions that take place in the system are publicly announced. The literature contains a number of proposals that aim at evaluating and enhancing user privacy in Bitcoin. To the best of our knowledge, ZeroCoin (ZC) is the first proposal which prevents the public tracing of coin expenditure in Bitcoin by leveraging zero-knowledge proofs of knowledge and one-way accumulators. While ZeroCoin hardens the traceability of coins, it does not hide the amount per transaction, nor does it prevent the leakage of the balances of Bitcoin addresses. In this paper, we propose, EZC, an extension of ZeroCoin which (i) enables the construction of multi-valued ZCs whose values are only known to the sender and recipient of the transaction and (ii) supports the expenditure of ZCs among users in the Bitcoin system, without the need to convert them back to Bitcoins. By doing so, EZC hides transaction values and address balances in Bitcoin, for those users who opt-out from exchanging their coins to BTCs. We performed a preliminary assessment of the performance of EZC; our findings suggest that EZC improves the communication overhead incurred in ZeroCoin.",
"title": ""
},
{
"docid": "6b467ec8262144150b17cedb3d96edcb",
"text": "We describe a new method of measuring surface currents using an interferometric synthetic aperture radar. An airborne implementation has been tested over the San Francisco Bay near the time of maximum tidal flow, resulting in a map of the east-west component of the current. Only the line-of-sight component of velocity is measured by this technique. Where the signal-to-noise ratio was strongest, statistical fluctuations of less than 4 cm s−1 were observed for ocean patches of 60×60 m.",
"title": ""
},
{
"docid": "1cc5ab9bd552e6399c6cf5a06e0ca235",
"text": "Fake identities and Sybil accounts are pervasive in today’s online communities. They are responsible for a growing number of threats, including fake product reviews, malware and spam on social networks, and astroturf political campaigns. Unfortunately, studies show that existing tools such as CAPTCHAs and graph-based Sybil detectors have not proven to be effective defenses. In this paper, we describe our work on building a practical system for detecting fake identities using server-side clickstream models. We develop a detection approach that groups “similar” user clickstreams into behavioral clusters, by partitioning a similarity graph that captures distances between clickstream sequences. We validate our clickstream models using ground-truth traces of 16,000 real and Sybil users from Renren, a large Chinese social network with 220M users. We propose a practical detection system based on these models, and show that it provides very high detection accuracy on our clickstream traces. Finally, we worked with collaborators at Renren and LinkedIn to test our prototype on their server-side data. Following positive results, both companies have expressed strong interest in further experimentation and possible internal deployment.",
"title": ""
},
{
"docid": "5c898e311680199f1f369d3c264b2b14",
"text": "Behaviour Driven Development (BDD) has gained increasing attention as an agile development approach in recent years. However, characteristics that constituite the BDD approach are not clearly defined. In this paper, we present a set of main BDD charactersitics identified through an analysis of relevant literature and current BDD toolkits. Our study can provide a basis for understanding BDD, as well as for extending the exisiting BDD toolkits or developing new ones.",
"title": ""
},
{
"docid": "d35ff18c7d7f8f02f803a0138530fbff",
"text": "This paper presents the design and development of a novel Natural Language Interface to Database (NLIDB). The developed prototype is called Aneesah the NLIDB, which is capable of allowing users to interactively/conversely access desired information stored in a relational database. This paper introduces the novel conversational agent enabled architecture of Aneesah NLIDB and describes the scripting techniques that has been adopted for its development. The proposed framework for Aneesah NLIDB is based on pattern matching techniques implemented to converse with users, handle complexities and ambiguities for building dynamic SQL queries from multiple dialogues in order to extract database information. The preliminary evaluation results gathered following a pilot study reveal promising results. Index Terms – Natural Language Interface to Databases (NLIDB), Conversational Agents (CA), Knowledge base, Artificial Intelligence (AI), Pattern Matching (PM).",
"title": ""
},
{
"docid": "0c01132904f2c580884af1391069addd",
"text": "BACKGROUND\nThe inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches.\n\n\nMETHODS\nA systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis.\n\n\nRESULTS\nThe processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity.\n\n\nCONCLUSION\nOn the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality.",
"title": ""
},
{
"docid": "f7c46115abe7cc204dd7dbd56f9e13c6",
"text": "Forecasting of future electricity demand is very important for decision making in power system operation and planning. In recent years, due to privatization and deregulation of the power industry, accurate electricity forecasting has become an important research area for efficient electricity production. This paper presents a time series approach for mid-term load forecasting (MTLF) in order to predict the daily peak load for the next month. The proposed method employs a computational intelligence scheme based on the self-organizing map (SOM) and support vector machine (SVM). According to the similarity degree of the time series load data, SOM is used as a clustering tool to cluster the training data into two subsets, using the Kohonen rule. As a novel machine learning technique, the support vector regression (SVR) is used to fit the testing data based on the clustered subsets, for predicting the daily peak load. Our proposed SOM-SVR load forecasting model is evaluated in MATLAB on the electricity load dataset provided by the Eastern Slovakian Electricity Corporation, which was used in the 2001 European Network on Intelligent Technologies (EUNITE) load forecasting competition. Power load data obtained from (i) Tenaga Nasional Berhad (TNB) for peninsular Malaysia and (ii) PJM for the eastern interconnection grid of the United States of America is used to benchmark the performance of our proposed model. Experimental results obtained indicate that our proposed SOM-SVR technique gives significantly good prediction accuracy for MTLF compared to previously researched findings using the EUNITE, Malaysian and PJM electricity load",
"title": ""
},
{
"docid": "82d3217331a70ead8ec3064b663de451",
"text": "The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer’s output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "350daaeb965ac6a1383ec96f4d34e0ba",
"text": "This paper proposes a new automatic approach for the detection of SQL Injection and XPath Injection vulnerabilities, two of the most common and most critical types of vulnerabilities in web services. Although there are tools that allow testing web applications against security vulnerabilities, previous research shows that the effectiveness of those tools in web services environments is very poor. In our approach a representative workload is used to exercise the web service and a large set of SQL/XPath Injection attacks are applied to disclose vulnerabilities. Vulnerabilities are detected by comparing the structure of the SQL/XPath commands issued in the presence of attacks to the ones previously learned when running the workload in the absence of attacks. Experimental evaluation shows that our approach performs much better than known tools (including commercial ones), achieving extremely high detection coverage while maintaining the false positives rate very low.",
"title": ""
},
{
"docid": "834a5cb9f2948443fbb48f274e02ca9c",
"text": "The Carnegie Mellon Communicator is a telephone-based dialog system that supports planning in a travel domain. The implementation of such a system requires two complimentary components, an architecture capable of managing interaction and the task, as well as a knowledge base that captures the speech, language and task characteristics specific to the domain. Given a suitable architecture, the principal effort in development in taken up in the acquisition and processing of a domain knowledge base. This paper describes a variety of techniques we have applied to modeling in acoustic, language, task, generation and synthesis components of the system.",
"title": ""
},
{
"docid": "6d61da17db5c16611409356bd79006c4",
"text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.",
"title": ""
},
{
"docid": "3fa0be0d8075e68b5344fe85d37c7dee",
"text": "We develop a structural model for the co-evolution of individuals’ friendship tie formations and their concurrent online activities (product adoptions and production of user-generated content) within a social network. Explicitly modeling the endogenous formation of the network and accounting for the interdependence between decisions in these two areas (friendship formations and concurrent online activities) provides a clean identification of peer effects and of important drivers of individuals’ friendship decisions. We estimate our model using a novel data set capturing the continuous development of a network and users’ entire action histories within the network. Our results reveal that, compared to a potential friend’s product adoptions and content generation activities, the total number of friends and the number of common friends this potential friend has with the focal individual are the most important drivers of friendship formation. Further, while having more friends does not make a person more active, having more active friends does increase a user’s activity levels in terms of both product adoptions and content generation through peer effects. Via counterfactuals we assess the effectiveness of various seeding and stimulation strategies in increasing website traffic while taking the endogenous network formation into account. We find that seeding to users with the most friends is not always the best strategy to increase users’ activity levels on the website.",
"title": ""
},
{
"docid": "8aae828a75eb83192e7ac9850f70e7ff",
"text": "Over the past decade, goal models have been used in Computer Science in order to represent software requirements, business objectives and design qualities. Such models extend traditional AI planning techniques for representing goals by allowing for partially defined and possibly inconsistent goals. This paper presents a formal framework for reasoning with such goal models. In particular, the paper proposes a qualitative and a numerical axiomatization for goal modeling primitives and introduces label propagation algorithms that are shown to be sound and complete with respect to their respective axiomatizations. In addition, the paper reports on experimental results on the propagation algorithms applied to a goal model for a US car manufacturer.",
"title": ""
},
{
"docid": "27ba6cfdebdedc58ab44b75a15bbca05",
"text": "OBJECTIVES\nTo assess the influence of material/technique selection (direct vs. CAD/CAM inlays) for large MOD composite adhesive restorations and its effect on the crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 32 extracted maxillary molars (5mm depth and 5mm bucco-palatal width) including immediately sealed dentin for the inlay group. Fifteen teeth were restored with direct composite resin restoration (Miris2) and 17 teeth received milled inlays using Paradigm MZ100 block in the CEREC machine. All inlays were adhesively luted with a light curing composite resin (Filtek Z100). Enamel shrinkage-induced cracks were tracked with photography and transillumination. Cyclic isometric chewing (5 Hz) was simulated, starting with a load of 200 N (5000 cycles), followed by stages of 400, 600, 800, 1000, 1200 and 1400 N at a maximum of 30,000 cycles each. Samples were loaded until fracture or to a maximum of 185,000 cycles.\n\n\nRESULTS\nTeeth restored with the direct technique fractured at an average load of 1213 N and two of them withstood all loading cycles (survival=13%); with inlays, the survival rate was 100%. Most failures with Miris2 occurred above the CEJ and were re-restorable (67%), but generated more shrinkage-induced cracks (47% of the specimen vs. 7% for inlays).\n\n\nSIGNIFICANCE\nCAD/CAM MZ100 inlays increased the accelerated fatigue resistance and decreased the crack propensity of large MOD restorations when compared to direct restorations. While both restorative techniques yielded excellent fatigue results at physiological masticatory loads, CAD/CAM inlays seem more indicated for high-load patients.",
"title": ""
},
{
"docid": "aa2af8bd2ef74a0b5fa463a373a4c049",
"text": "What modern game theorists describe as “fictitious play” is not the learning process George W. Brown defined in his 1951 paper. Brown’s original version differs in a subtle detail, namely the order of belief updating. In this note we revive Brown’s original fictitious play process and demonstrate that this seemingly innocent detail allows for an extremely simple and intuitive proof of convergence in an interesting and large class of games: nondegenerate ordinal potential games. © 2006 Elsevier Inc. All rights reserved. JEL classification: C72",
"title": ""
}
] |
scidocsrr
|
f172512c8d31844ec68149e88c094982
|
Cellulose chemical markers in transformer oil insulation Part 1: Temperature correction factors
|
[
{
"docid": "7a4f42c389dbca2f3c13469204a22edd",
"text": "This article attempts to capture and summarize the known technical information and recommendations for analysis of furan test results. It will also provide the technical basis for continued gathering and evaluation of furan data for liquid power transformers, and provide a recommended structure for collecting that data.",
"title": ""
}
] |
[
{
"docid": "94f39416ba9918e664fb1cd48732e3ae",
"text": "In this paper, a nanostructured biosensor is developed to detect glucose in tear by using fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, including the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), an enzyme with specific affinity to glucose. In the presence of glucose, the quenched emission of QDs through the FRET mechanism is restored by displacing the dextran from Con A. To have a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on the synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be converted to fluorescence spectra with high signal-to-noise ratio and calibrated image pixel value. The photoluminescence intensity of the patterned FRET sensor increases linearly with increasing concentration of glucose from 0.03mmol/L to 3mmol/L, which covers the range of tear glucose levels for both diabetics and healthy subjects. Meanwhile, the calibrated values of pixel intensities of the fluorescence images captured by a handhold fluorescence microscope increases with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were utilized to demonstrate the quick response of the patterned FRET sensor to 2µL of tear samples.",
"title": ""
},
{
"docid": "1274656b97db1f736944c174a174925d",
"text": "In full-duplex systems, due to the strong self-interference signal, system nonlinearities become a significant limiting factor that bounds the possible cancellable self-interference power. In this paper, a self-interference cancellation scheme for full-duplex orthogonal frequency division multiplexing systems is proposed. The proposed scheme increases the amount of cancellable self-interference power by suppressing the distortion caused by the transmitter and receiver nonlinearities. An iterative technique is used to jointly estimate the self-interference channel and the nonlinearity coefficients required to suppress the distortion signal. The performance is numerically investigated showing that the proposed scheme achieves a performance that is less than 0.5dB off the performance of a linear full-duplex system.",
"title": ""
},
{
"docid": "fac92316ce84b0c10b0bef2827d78b03",
"text": "Background: High rates of teacher turnover likely mean greater school instability, disruption of curricular cohesiveness, and a continual need to hire inexperienced teachers, who typically are less effective, as replacements for teachers who leave. Unfortunately, research consistently finds that teachers who work in schools with large numbers of poor students and students of color feel less satisfied and are more likely to turn over, meaning that turnover is concentrated in the very schools that would benefit most from a stable staff of experienced teachers. Despite the potential challenge that this turnover disparity poses for equity of educational opportunity and student performance gaps across schools, little research has examined the reasons for elevated teacher turnover in schools with large numbers of traditionally disadvantaged students. Purpose: This study hypothesizes that school working conditions help explain both teacher satisfaction and turnover. In particular, it focuses on the role effective principals in retaining teachers, particularly in disadvantaged schools with the greatest staffing challenges. Research Design: The study conducts quantitative analysis of national data from the 2003-04 Schools and Staffing Survey and 2004-05 Teacher Follow-up Survey. Regression analyses combat the potential for bias from omitted variables by utilizing an extensive set of control variables and employing a school district fixed effects approach that implicitly makes comparisons among principals and teachers within the same local context. Conclusions: Descriptive analyses confirm that observable measures of teachers‘ work environments, including ratings of the effectiveness of the principal, generally are lower in schools with large numbers of disadvantaged students. Regression results show that principal effectiveness is associated with greater teacher satisfaction and a lower probability that the teacher leaves the school within a year. Moreover, the positive impacts of principal effectiveness on these teacher outcomes are even greater in disadvantaged schools. These findings suggest that policies focused on getting the best principals into the most challenging school environments may be effective strategies for lowering perpetually high teacher turnover rates in those schools.",
"title": ""
},
{
"docid": "9955e99d9eba166458f5551551ab05e3",
"text": "Every day, millions of tons of temperature sensitive goods are produced, transported, stored or distributed worldwide. For all these products the control of temperature is essential. The term “cold chain” describes the series of interdependent equipment and processes employed to ensure the temperature preservation of perishables and other temperaturecontrolled products from the production to the consumption end in a safe, wholesome, and good quality state (Zhang, 2007). In other words, it is a supply chain of temperature sensitive products. So temperature-control is the key point in cold chain operation and the most important factor when prolonging the practical shelf life of produce. Thus, the major challenge is to ensure a continuous ‘cold chain’ from producer to consumer in order to guaranty prime condition of goods (Ruiz-Garcia et al., 2007).These products can be perishable items like fruit, vegetables, flowers, fish, meat and dairy products or medical products like drugs, blood, vaccines, organs, plasma and tissues. All of them can have their properties affected by temperature changes. Also some chemicals and electronic components like microchips are temperature sensitive.",
"title": ""
},
{
"docid": "e948583ef067952fa8c968de5e5ae643",
"text": "A key problem in learning representations of multiple objects from unlabeled images is that it is a priori impossible to tell which part of the image corresponds to each individual object, and which part is irrelevant clutter. Distinguishing individual objects in a scene would allow unsupervised learning of multiple objects from unlabeled images. There is psychophysical and neurophysiological evidence that the brain employs visual attention to select relevant parts of the image and to serialize the perception of individual objects. We propose a method for the selection of salient regions likely to contain objects, based on bottom-up visual attention. By comparing the performance of David Lowe s recognition algorithm with and without attention, we demonstrate in our experiments that the proposed approach can enable one-shot learning of multiple objects from complex scenes, and that it can strongly improve learning and recognition performance in the presence of large amounts of clutter. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a55422a96369797c7d42cb77dc99c6dc",
"text": "In order to store massive image data in real-time system, a high performance Serial Advanced Technology Attachment[1] (SATA) controller is proposed in this paper. RocketIO GTX transceiver[2] realizes physical layer of SATA protocol. Link layer and transport layers are implemented in VHDL with programmable logic resources. Application layer is developed on POWERPC440 embedded in Xilinx Virtex-5 FPGA. The whole SATA protocol implement in a platform FPGA has better features in expansibility, scalability, improvability and in-system programmability comparing with realizing it using Application Specific Integrated Circuit (ASIC). The experiment results shown that the controller works accurately and stably and the maximal sustained orderly data transfer rate up to 110 MB/s when connect to SATA hard disk. The high performance of the host SATA controller makes it possible that cheap SATA hard disk instead expensive Small Computer System Interface (SCSI) hard disk in some application. The controller is very suited for high speed mass data storage in embedded system.",
"title": ""
},
{
"docid": "df63ca9286b2fc520d6be36edb7afaef",
"text": "To analyse the accuracy of dual-energy contrast-enhanced spectral mammography in dense breasts in comparison with contrast-enhanced subtracted mammography (CESM) and conventional mammography (Mx). CESM cases of dense breasts with histological proof were evaluated in the present study. Four radiologists with varying experience in mammography interpretation blindly read Mx first, followed by CESM. The diagnostic profiles, consistency and learning curve were analysed statistically. One hundred lesions (28 benign and 72 breast malignancies) in 89 females were analysed. Use of CESM improved the cancer diagnosis by 21.2 % in sensitivity (71.5 % to 92.7 %), by 16.1 % in specificity (51.8 % to 67.9 %) and by 19.8 % in accuracy (65.9 % to 85.8 %) compared with Mx. The interobserver diagnostic consistency was markedly higher using CESM than using Mx alone (0.6235 vs. 0.3869 using the kappa ratio). The probability of a correct prediction was elevated from 80 % to 90 % after 75 consecutive case readings. CESM provided additional information with consistent improvement of the cancer diagnosis in dense breasts compared to Mx alone. The prediction of the diagnosis could be improved by the interpretation of a significant number of cases in the presence of 6 % benign contrast enhancement in this study. • DE-CESM improves the cancer diagnosis in dense breasts compared with mammography. • DE-CESM shows greater consistency than mammography alone by interobserver blind reading. • Diagnostic improvement of DE-CESM is independent of the mammographic reading experience.",
"title": ""
},
{
"docid": "0169f6c2eee1710d2ccd1403116da68f",
"text": "A resonant snubber is described for voltage-source inverters, current-source inverters, and self-commutated frequency changers. The main self-turn-off devices have shunt capacitors directly across them. The lossless resonant snubber described avoids trapping energy in a converter circuit where high dynamic stresses at both turn-on and turn-off are normally encountered. This is achieved by providing a temporary parallel path through a small ordinary thyristor (or other device operating in a similar node) to take over the high-stress turn-on duty from the main gate turn-off (GTO) or power transistor, in a manner that leaves no energy trapped after switching.<<ETX>>",
"title": ""
},
{
"docid": "dc323eabca83c4e9381539832dbb7f63",
"text": "We present the main freight transportation planning and management issues, briefly review the associated literature, describe a number of major developments, and identify trends and challenges. In order to keep the length of the paper within reasonable limits, we focus on long-haul, intercity, freight transportation. Optimization-based operations research methodologies are privileged. The paper starts with an overview of freight transportation systems and planning issues and continues with models which attempt to analyze multimodal, multicommodity transportation systems at the regional, national or global level. We then review location and network design formulations which are often associated with the long-term evolution of transportation systems and also appear prominently when service design issues are considered as described later on. Operational models and methods, particularly those aimed at the allocation and repositioning of resources such as empty vehicles, are then described. To conclude, we identify a number of interesting problems and challenges.",
"title": ""
},
{
"docid": "7ac2f63821256491f45e2a9666333853",
"text": "Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier’s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracybased performance assessment, many researchers have taken to report PrecisionRecall (PR) curves and associated areas as performance metric. We demonstrate in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions – e.g., the area under a PR curve takes the arithmetic mean of precision values whereas the Fβ score applies the harmonic mean. We show how to fix this by plotting PR curves in a different coordinate system, and demonstrate that the new Precision-Recall-Gain curves inherit all key advantages of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a PrecisionRecall-Gain curve allows us to calibrate the classifier’s scores so as to determine, for each operating point on the convex hull, the interval of β values for which the point optimises Fβ . We demonstrate experimentally that the area under traditional PR curves can easily favour models with lower expected F1 score than others, and so the use of Precision-Recall-Gain curves will result in better model selection.",
"title": ""
},
{
"docid": "fdd14b086d77b95b7ca00ab744f39458",
"text": "1567-4223/$34.00 Crown Copyright 2008 Publishe doi:10.1016/j.elerap.2008.11.001 * Corresponding author. Tel.: +886 7 5254713; fax: E-mail address: [email protected] (C.-C. H While eWOM advertising has recently emerged as an effective marketing strategy among marketing practitioners, comparatively few studies have been conducted to examine the eWOM from the perspective of pass-along emails. Based on social capital theory and social cognitive theory, this paper develops a model involving social enablers and personal cognition factors to explore the eWOM behavior and its efficacy. Data collected from 347 email users have lent credit to the model proposed. Tested by LISREL 8.70, the results indicate that the factors such as message involvement, social interaction tie, affection outcome expectations and message passing self-efficacy exert significant influences on pass-along email intentions (PAEIs). The study result may well be useful to marketing practitioners who are considering email marketing, especially to those who are in the process of selecting key email users and/or designing product advertisements to heighten the eWOM effect. Crown Copyright 2008 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a120d11f432017c3080bb4107dd7ea71",
"text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.",
"title": ""
},
{
"docid": "f0c8b45d2648de6825975cba4dd9d429",
"text": "This work presents a safe navigation approach for a carlike robot. The approach relies on a global motion planning based on Velocity Vector Fields along with a Dynamic Window Approach for avoiding unmodeled obstacles. Basically, the vector field is associated with a kinematic, feedback-linearization controller whose outputs are validated, and eventually modified, by the Dynamic Window Approach. Experiments with a full-size autonomous car equipped with a stereo camera show that the vehicle was able to track the vector field and avoid obstacles in its way.",
"title": ""
},
{
"docid": "6922a913c6ede96d5062f055b55377e7",
"text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented. In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.",
"title": ""
},
{
"docid": "22654d2ed4c921c7bceb22ce9f9dc892",
"text": "xv",
"title": ""
},
{
"docid": "ddeb70a9abd07b113c8c7bfcf2f535b6",
"text": "Implementation of authentic leadership can affect not only the nursing workforce and the profession but the healthcare delivery system and society as a whole. Creating a healthy work environment for nursing practice is crucial to maintain an adequate nursing workforce; the stressful nature of the profession often leads to burnout, disability, and high absenteeism and ultimately contributes to the escalating shortage of nurses. Leaders play a pivotal role in retention of nurses by shaping the healthcare practice environment to produce quality outcomes for staff nurses and patients. Few guidelines are available, however, for creating and sustaining the critical elements of a healthy work environment. In 2005, the American Association of Critical-Care Nurses released a landmark publication specifying 6 standards (skilled communication, true collaboration, effective decision making, appropriate staffing, meaningful recognition, and authentic leadership) necessary to establish and sustain healthy work environments in healthcare. Authentic leadership was described as the \"glue\" needed to hold together a healthy work environment. Now, the roles and relationships of authentic leaders in the healthy work environment are clarified as follows: An expanded definition of authentic leadership and its attributes (eg, genuineness, trustworthiness, reliability, compassion, and believability) is presented. Mechanisms by which authentic leaders can create healthy work environments for practice (eg, engaging employees in the work environment to promote positive behaviors) are described. A practical guide on how to become an authentic leader is advanced. A research agenda to advance the study of authentic leadership in nursing practice through collaboration between nursing and business is proposed.",
"title": ""
},
{
"docid": "e72cfaa1d2781e7dda66625ce45bdebb",
"text": "Providing appropriate methods to facilitate the analysis of time-oriented data is a key issue in many application domains. In this paper, we focus on the unique role of the parameter time in the context of visually driven data analysis. We will discuss three major aspects - visualization, analysis, and the user. It will be illustrated that it is necessary to consider the characteristics of time when generating visual representations. For that purpose, we take a look at different types of time and present visual examples. Integrating visual and analytical methods has become an increasingly important issue. Therefore, we present our experiences in temporal data abstraction, principal component analysis, and clustering of larger volumes of time-oriented data. The third main aspect we discuss is supporting user-centered visual analysis. We describe event-based visualization as a promising means to adapt the visualization pipeline to needs and tasks of users.",
"title": ""
},
{
"docid": "7ebd355d65c8de8607da0363e8c86151",
"text": "In this letter, we compare the scanning beams of two leaky-wave antennas (LWAs), respectively, loaded with capacitive and inductive radiation elements, which have not been fully discussed in previous publications. It is pointed out that an LWA with only one type of radiation element suffers from a significant gain fluctuation over its beam-scanning band. To remedy this problem, we propose an LWA alternately loaded with inductive and capacitive elements along the host transmission line. The proposed LWA is able to steer its beam continuously from backward to forward with constant gain. A microstrip-based LWA is designed on the basis of the proposed method, and the measurement of its fabricated prototype demonstrates and confirms the desired results. This design method can widely be used to obtain LWAs with constant gain based on a variety of TLs.",
"title": ""
},
{
"docid": "32025802178ce122c288a558ba6572e4",
"text": "Based on this literature review, early orthodontic treatment of unilateral posterior crossbites with mandibular shifts is recommended. Treatment success is high if it is started early. Evidence that crossbites are not self-correcting, have some association with temporomandibular disorders and cause skeletal, dental and muscle adaptation provides further rationale for early treatment. It can be difficult to treat unilateral crossbites in adults without a combination of orthodontics and surgery. The most appropriate timing of treatment occurs when the patient is in the late deciduous or early mixed dentition stage as expansion modalities are very successful in this age group and permanent incisors are given more space as a result of the expansion. Treatment of unilateral posterior crossbites generally involves symmetric expansion of the maxillary arch, removal of selective occlusal interferences and elimination of the mandibular functional shift. The general practitioner and pediatric dentist must be able to diagnose unilateral posterior crossbites successfully and provide treatment or referral to take advantage of the benefits of early treatment.",
"title": ""
},
{
"docid": "dfcb51bd990cce7fb7abfe8802dc0c4e",
"text": "In this paper, we describe the machine learning approach we used in the context of the Automatic Cephalometric X-Ray Landmark Detection Challenge. Our solution is based on the use of ensembles of Extremely Randomized Trees combined with simple pixel-based multi-resolution features. By carefully tuning method parameters with cross-validation, our approach could reach detection rates ≥ 90% at an accuracy of 2.5mm for 8 landmarks. Our experiments show however a high variability between the different landmarks, with some landmarks detected at a much lower rate than others.",
"title": ""
}
] |
scidocsrr
|
6b9ce507f12ba3036f9c580491e845e3
|
TLTD: A Testing Framework for Learning-Based IoT Traffic Detection Systems
|
[
{
"docid": "67e85e8b59ec7dc8b0019afa8270e861",
"text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.",
"title": ""
},
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "580d83a0e627daedb45fe55e3f9b6883",
"text": "With near exponential growth predicted in the number of Internet of Things (IoT) based devices within networked systems there is need of a means of providing their flexible and secure integration. Software Defined Networking (SDN) is a concept that allows for the centralised control and configuration of network devices, and also provides opportunities for the dynamic control of network traffic. This paper proposes the use of an SDN gateway as a distributed means of monitoring the traffic originating from and directed to IoT based devices. This gateway can then both detect anomalous behaviour and perform an appropriate response (blocking, forwarding, or applying Quality of Service). Initial results demonstrate that, while the addition of the attack detection functionality has an impact on the number of flow installations possible per second, it can successfully detect and block TCP and ICMP flood based attacks.",
"title": ""
},
{
"docid": "11a69c06f21e505b3e05384536108325",
"text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"title": ""
}
] |
[
{
"docid": "f400ca4fe8fc5c684edf1ae60e026632",
"text": "Driverless vehicles will be common on the road in a short time. They will have many impacts on the global transport market trends. One of the remarkable driverless vehicles impacts will be the laying aside of rail systems, because of several reasons, that is to say traffic congestions will be no more a justification for rail, rail will not be the best answer for disableds, air pollution of cars are more or less equal to air pollution of trains and the last but not least reason is that driverless cars are safer than trains.",
"title": ""
},
{
"docid": "6171a708ea6470b837439ad23af90dff",
"text": "Cardiovascular diseases represent a worldwide relevant socioeconomical problem. Cardiovascular disease prevention relies also on lifestyle changes, including dietary habits. The cardioprotective effects of several foods and dietary supplements in both animal models and in humans have been explored. It was found that beneficial effects are mainly dependent on antioxidant and anti-inflammatory properties, also involving modulation of mitochondrial function. Resveratrol is one of the most studied phytochemical compounds and it is provided with several benefits in cardiovascular diseases as well as in other pathological conditions (such as cancer). Other relevant compounds are Brassica oleracea, curcumin, and berberine, and they all exert beneficial effects in several diseases. In the attempt to provide a comprehensive reference tool for both researchers and clinicians, we summarized in the present paper the existing literature on both preclinical and clinical cardioprotective effects of each mentioned phytochemical. We structured the discussion of each compound by analyzing, first, its cellular molecular targets of action, subsequently focusing on results from applications in both ex vivo and in vivo models, finally discussing the relevance of the compound in the context of human diseases.",
"title": ""
},
{
"docid": "94316059aba51baedd5662e7246e23c1",
"text": "The increased need of content based image retrieval technique can be found in a number of different domains such as Data Mining, Education, Medical Imaging, Crime Prevention, Weather forecasting, Remote Sensing and Management of Earth Resources. This paper presents the content based image retrieval, using features like texture and color, called WBCHIR (Wavelet Based Color Histogram Image Retrieval).The texture and color features are extracted through wavelet transformation and color histogram and the combination of these features is robust to scaling and translation of objects in an image. The proposed system has demonstrated a promising and faster retrieval method on a WANG image database containing 1000 general-purpose color images. The performance has been evaluated by comparing with the existing systems in the literature.",
"title": ""
},
{
"docid": "6560a704d5f8022193b60dd3ad213d5a",
"text": "Despite web access on mobile devices becoming commonplace, users continue to experience poor web performance on these devices. Traditional approaches for improving web performance (e.g., compression, SPDY, faster browsers) face an uphill battle due to the fundamentally conflicting trends in user expectations of lower load times and richer web content. Embracing the reality that page load times will continue to be higher than user tolerance limits for the foreseeable future, we ask: How can we deliver the best possible user experience? To this end, we present KLOTSKI, a system that prioritizes the content most relevant to a user’s preferences. In designing KLOTSKI, we address several challenges in: (1) accounting for inter-resource dependencies on a page; (2) enabling fast selection and load time estimation for the subset of resources to be prioritized; and (3) developing a practical implementation that requires no changes to websites. Across a range of user preference criteria, KLOTSKI can significantly improve the user experience relative to native websites.",
"title": ""
},
{
"docid": "7e40c7145f4613f12e7fc13646f3927c",
"text": "One strategy for intelligent agents in order to reach their goals is to plan their actions in advance. This can be done by simulating how the agent’s actions affect the environment and how it evolves independently of the agent. For this simulation, a model of the environment is needed. However, the creation of this model might be labor-intensive and it might be computational complex to evaluate during simulation. That is why, we suggest to equip an intelligent agent with a learned intuition about the dynamics of its environment by utilizing the concept of intuitive physics. To demonstrate our approach, we used an agent that can freely move in a two dimensional floor plan. It has to collect moving targets while avoiding the collision with static and dynamic obstacles. In order to do so, the agent plans its actions up to a defined planning horizon. The performance of our agent, which intuitively estimates the dynamics of its surrounding objects based on artificial neural networks, is compared to an agent which has a physically exact model of the world and one that acts randomly. The evaluation shows comparatively good results for the intuition based agent considering it uses only a quarter of the computation time in comparison to the agent with a physically exact model.",
"title": ""
},
{
"docid": "091d9afe87fa944548b9f11386112d6e",
"text": "In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90% detection probability is 14.2 ms. This optimal sensing time decreases when distributed spectrum sensing is applied.",
"title": ""
},
{
"docid": "f58a1a0d8cc0e2c826c911be4451e0df",
"text": "From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output. To examine the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) and to understand how users with disabilities are making use of these devices, we conducted two exploratory studies. The first, broader study is a content analysis of 346 Amazon Echo reviews that include users with disabilities, while the second study more specifically focuses on users with visual impairments, through interviews with 16 current users of home-based IPAs. Findings show that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.",
"title": ""
},
{
"docid": "374674cc8a087d31ee2c801f7e49aa8d",
"text": "Two biological control agents, Bacillus subtilis AP-01 (Larminar(™)) and Trichoderma harzianum AP-001 (Trisan(™)) alone or/in combination were investigated in controlling three tobacco diseases, including bacterial wilt (Ralstonia solanacearum), damping-off (Pythium aphanidermatum), and frogeye leaf spot (Cercospora nicotiana). Tests were performed in greenhouse by soil sterilization prior to inoculation of the pathogens. Bacterial-wilt and damping off pathogens were drenched first and followed with the biological control agents and for comparison purposes, two chemical fungicides. But for frogeye leaf spot, which is an airborne fungus, a spraying procedure for every treatment including a chemical fungicide was applied instead of drenching. Results showed that neither B. subtilis AP-01 nor T harzianum AP-001 alone could control the bacterial wilt, but when combined, their controlling capabilities were as effective as a chemical treatment. These results were also similar for damping-off disease when used in combination. In addition, the combined B. subtilis AP-01 and T. harzianum AP-001 resulted in a good frogeye leaf spot control, which was not significantly different from the chemical treatment.",
"title": ""
},
{
"docid": "32b860121b49bd3a61673b3745b7b1fd",
"text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.",
"title": ""
},
{
"docid": "f70ce9d95ac15fc0800b8e6ac60247cb",
"text": "Many systems for the parallel processing of big data are available today. Yet, few users can tell by intuition which system, or combination of systems, is \"best\" for a given workflow. Porting workflows between systems is tedious. Hence, users become \"locked in\", despite faster or more efficient systems being available. This is a direct consequence of the tight coupling between user-facing front-ends that express workflows (e.g., Hive, SparkSQL, Lindi, GraphLINQ) and the back-end execution engines that run them (e.g., MapReduce, Spark, PowerGraph, Naiad).\n We argue that the ways that workflows are defined should be decoupled from the manner in which they are executed. To explore this idea, we have built Musketeer, a workflow manager which can dynamically map front-end workflow descriptions to a broad range of back-end execution engines.\n Our prototype maps workflows expressed in four high-level query languages to seven different popular data processing systems. Musketeer speeds up realistic workflows by up to 9x by targeting different execution engines, without requiring any manual effort. Its automatically generated back-end code comes within 5%--30% of the performance of hand-optimized implementations.",
"title": ""
},
{
"docid": "11a2882124e64bd6b2def197d9dc811a",
"text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.",
"title": ""
},
{
"docid": "b44ebb850ce2349dddc35bbf9a01fb8a",
"text": "Automatically assessing emotional valence in human speech has historically been a difficult task for machine learning algorithms. The subtle changes in the voice of the speaker that are indicative of positive or negative emotional states are often “overshadowed” by voice characteristics relating to emotional intensity or emotional activation. In this work we explore a representation learning approach that automatically derives discriminative representations of emotional speech. In particular, we investigate two machine learning strategies to improve classifier performance: (1) utilization of unlabeled data using a deep convolutional generative adversarial network (DCGAN), and (2) multitask learning. Within our extensive experiments we leverage a multitask annotated emotional corpus as well as a large unlabeled meeting corpus (around 100 hours). Our speaker-independent classification experiments show that in particular the use of unlabeled data in our investigations improves performance of the classifiers and both fully supervised baseline approaches are outperformed considerably. We improve the classification of emotional valence on a discrete 5-point scale to 43.88% and on a 3-point scale to 49.80%, which is competitive to state-of-the-art performance.",
"title": ""
},
{
"docid": "ccaba0b30fc1a0c7d55d00003b07725a",
"text": "We collect a corpus of 1554 online news articles from 23 RSS feeds and analyze it in terms of controversy and sentiment. We use several existing sentiment lexicons and lists of controversial terms to perform a number of statistical analyses that explore how sentiment and controversy are related. We conclude that the negative sentiment and controversy are not necessarily positively correlated as has been claimed in the past. In addition, we apply an information theoretic approach and suggest that entropy might be a good predictor of controversy.",
"title": ""
},
{
"docid": "6a2e5831f2a2e1625be2bfb7941b9d1b",
"text": "Benefited from cloud storage services, users can save their cost of buying expensive storage and application servers, as well as deploying and maintaining applications. Meanwhile they lost the physical control of their data. So effective methods are needed to verify the correctness of the data stored at cloud servers, which are the research issues the Provable Data Possession (PDP) faced. The most important features in PDP are: 1) supporting for public, unlimited numbers of times of verification; 2) supporting for dynamic data update; 3) efficiency of storage space and computing. In mobile cloud computing, mobile end-users also need the PDP service. However, the computing workloads and storage burden of client in existing PDP schemes are too heavy to be directly used by the resource-constrained mobile devices. To solve this problem, with the integration of the trusted computing technology, this paper proposes a novel public PDP scheme, in which the trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. By using bilinear signature and Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce communication and storage burden. MHT is also helpful to support dynamic data update. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform model (TPM) chips, and the needed computing workload and storage space is fit for mobile devices. Our scheme realizes provable secure storage service for resource-constrained mobile devices in mobile cloud computing.",
"title": ""
},
{
"docid": "53e668839e9d7e065dc7864830623790",
"text": "Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided.",
"title": ""
},
{
"docid": "9381ba0001262dd29d7ca74a98a56fc7",
"text": "Despite several advances in information retrieval systems and user interfaces, the specification of queries over text-based document collections remains a challenging problem. Query specification with keywords is a popular solution. However, given the widespread adoption of gesture-driven interfaces such as multitouch technologies in smartphones and tablets, the lack of a physical keyboard makes query specification with keywords inconvenient. We present BinGO, a novel gestural approach to querying text databases that allows users to refine their queries using a swipe gesture to either \"like\" or \"dislike\" candidate documents as well as express the reasons they like or dislike a document by swiping through automatically generated \"reason bins\". Such reasons refine a user's query with additional keywords. We present an online and efficient bin generation algorithm that presents reason bins at gesture articulation. We motivate and describe BinGo's unique interface design choices. Based on our analysis and user studies, we demonstrate that query specification by swiping through reason bins is easy and expressive.",
"title": ""
},
{
"docid": "8d4c66f9e12c1225df1e79628d666702",
"text": "Recently, wavelet transforms have gained very high attention in many fields and applications such as physics, engineering, signal processing, applied mathematics and statistics. In this paper, we present the advantage of wavelet transforms in forecasting financial time series data. Amman stock market (Jordan) was selected as a tool to show the ability of wavelet transform in forecasting financial time series, experimentally. This article suggests a novel technique for forecasting the financial time series data, based on Wavelet transforms and ARIMA model. Daily return data from 1993 until 2009 is used for this study. 316 S. Al Wadi et al",
"title": ""
},
{
"docid": "1977e7813b15ffb3a4238f3ed40f0e1f",
"text": "Despite the existence of standard protocol, many stabilization centers (SCs) continue to experience high mortality of children receiving treatment for severe acute malnutrition. Assessing treatment outcomes and identifying predictors may help to overcome this problem. Therefore, a 30-month retrospective cohort study was conducted among 545 randomly selected medical records of children <5 years of age admitted to SCs in Gedeo Zone. Data was entered by Epi Info version 7 and analyzed by STATA version 11. Cox proportional hazards model was built by forward stepwise procedure and compared by the likelihood ratio test and Harrell's concordance, and fitness was checked by Cox-Snell residual plot. During follow-up, 51 (9.3%) children had died, and 414 (76%) and 26 (4.8%) children had recovered and defaulted (missed follow-up for 2 consecutive days), respectively. The survival rates at the end of the first, second and third weeks were 95.3%, 90% and 85%, respectively, and the overall mean survival time was 79.6 days. Age <24 months (adjusted hazard ratio [AHR] =2.841, 95% confidence interval [CI] =1.101-7.329), altered pulse rate (AHR =3.926, 95% CI =1.579-9.763), altered temperature (AHR =7.173, 95% CI =3.05-16.867), shock (AHR =3.805, 95% CI =1.829-7.919), anemia (AHR =2.618, 95% CI =1.148-5.97), nasogastric tube feeding (AHR =3.181, 95% CI =1.18-8.575), hypoglycemia (AHR =2.74, 95% CI =1.279-5.87) and treatment at hospital stabilization center (AHR =4.772, 95% CI =1.638-13.9) were independent predictors of mortality. The treatment outcomes and incidence of death were in the acceptable ranges of national and international standards. Intervention to further reduce deaths has to focus on young children with comorbidities and altered general conditions.",
"title": ""
},
{
"docid": "1839d9e6ef4bad29381105f0a604b731",
"text": "Our focus is on the effects that dated ideas about the nature of science (NOS) have on curriculum, instruction and assessments. First we examine historical developments in teaching about NOS, beginning with the seminal ideas of James Conant. Next we provide an overview of recent developments in philosophy and cognitive sciences that have shifted NOS characterizations away from general heuristic principles toward cognitive and social elements. Next, we analyze two alternative views regarding ‘explicitly teaching’ NOS in pre-college programs. Version 1 is grounded in teachers presenting ‘Consensus-based Heuristic Principles’ in science lessons and activities. Version 2 is grounded in learners experience of ‘Building and Refining Model-Based Scientific Practices’ in critique and communication enactments that occur in longer immersion units and learning progressions. We argue that Version 2 is to be preferred over Version 1 because it develops the critical epistemic cognitive and social practices that scientists and science learners use when (1) developing and evaluating scientific evidence, explanations and knowledge and (2) critiquing and communicating scientific ideas and information; thereby promoting science literacy. 1 NOS and Science Education When and how did knowledge about science, as opposed to scientific content knowledge, become a targeted outcome of science education? From a US perspective, the decades of interest are the 1940s and 1950s when two major post-war developments in science education policy initiatives occurred. The first, in post secondary education, was the GI Bill An earlier version of this paper was presented as a plenary session by the first author at the ‘How Science Works—And How to Teach It’ workshop, Aarhus University, 23–25 June, 2011, Denmark. R. A. Duschl (&) The Pennsylvania State University, University Park, PA, USA e-mail: [email protected] R. Grandy Rice University, Houston, TX, USA 123 Sci & Educ DOI 10.1007/s11191-012-9539-4",
"title": ""
},
{
"docid": "e289f0f11ee99c57ede48988cc2dbd5c",
"text": "Generative Adversarial Networks (GANs) are becoming popular choices for unsupervised learning. At the same time there is a concerted effort in the machine learning community to expand the range of tasks in which learning can be applied as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN both for interpolation and extrapolation. The two cases need to be treated differently. For the case of interpolation, the incorporation of constraints is built into the training of the GAN. The incorporation of the constraints respects the primary gametheoretic setup of a GAN so it can be combined with existing algorithms. However, it can exacerbate the problem of instability during training that is well-known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. First, we employ a modified interpolation training process that uses noisy data but does not necessarily enforce the constraints during training. Second, the resulting modified interpolator is used for extrapolation where the constraints are enforced after each step through projection on the space of constraints.",
"title": ""
}
] |
scidocsrr
|
7c6c21ed8607b644f148470ffad804aa
|
Strangers on Your Phone: Why People Use Anonymous Communication Applications
|
[
{
"docid": "1ee74e505f5efc99331d5b63565882cf",
"text": "Consumers shopping in \"brick-and-mortar\" (non-virtual) stores often use their mobile phones to consult with others about potential purchases. Via a survey (n = 200), we detail current practices in seeking remote shopping advice. We then consider how emerging social platforms, such as social networking sites and crowd labor markets, could offer rich next-generation remote shopping advice experiences. We conducted a field experiment in which shoppers shared photographs of potential purchases via MMS, Facebook, and Mechanical Turk. Paid crowdsourcing, in particular, proved surprisingly useful and influential as a means of augmenting in-store shopping. Based on our findings, we offer design suggestions for next-generation remote shopping advice systems.",
"title": ""
},
{
"docid": "2d43992a8eb6e97be676c04fc9ebd8dd",
"text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.",
"title": ""
},
{
"docid": "0bb5bbdf7043eed23cafdd54df68c709",
"text": "We present two studies of online ephemerality and anonymity based on the popular discussion board /b/ at 4chan.org: a website with over 7 million users that plays an influential role in Internet culture. Although researchers and practitioners often assume that user identity and data permanence are central tools in the design of online communities, we explore how /b/ succeeds despite being almost entirely anonymous and extremely ephemeral. We begin by describing /b/ and performing a content analysis that suggests the community is dominated by playful exchanges of images and links. Our first study uses a large dataset of more than five million posts to quantify ephemerality in /b/. We find that most threads spend just five seconds on the first page and less than five minutes on the site before expiring. Our second study is an analysis of identity signals on 4chan, finding that over 90% of posts are made by fully anonymous users, with other identity signals adopted and discarded at will. We describe alternative mechanisms that /b/ participants use to establish status and frame their interactions.",
"title": ""
}
] |
[
{
"docid": "14fb6228827657ba6f8d35d169ad3c63",
"text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"title": ""
},
{
"docid": "d6d55f2f3c29605835305d3cc72a34ad",
"text": "Most classification problems associate a single class to each example or instance. However, there are many classification tasks where each instance can be associated with one or more classes. This group of problems represents an area known as multi-label classification. One typical example of multi-label classification problems is the classification of documents, where each document can be assigned to more than one class. This tutorial presents the most frequently used techniques to deal with these problems in a pedagogical manner, with examples illustrating the main techniques and proposing a taxonomy of multi-label techniques that highlights the similarities and differences between these techniques.",
"title": ""
},
{
"docid": "d2b7e61ecedf80f613d25c4f509ddaf6",
"text": "We present a new image editing method, particularly effective for sharpening major edges by increasing the steepness of transition while eliminating a manageable degree of low-amplitude structures. The seemingly contradictive effect is achieved in an optimization framework making use of L0 gradient minimization, which can globally control how many non-zero gradients are resulted in to approximate prominent structure in a sparsity-control manner. Unlike other edge-preserving smoothing approaches, our method does not depend on local features, but instead globally locates important edges. It, as a fundamental tool, finds many applications and is particularly beneficial to edge extraction, clip-art JPEG artifact removal, and non-photorealistic effect generation.",
"title": ""
},
{
"docid": "0ff1837d40bbd6bbfe4f5ec69f83de90",
"text": "Nowadays, Telemarketing is an interactive technique of direct marketing that many banks apply to present a long term deposit to bank customers via the phone. Although the offering like this manner is powerful, it may make the customers annoyed. The data prediction is a popular task in data mining because it can be applied to solve this problem. However, the predictive performance may be decreased in case of the input data have many features like the bank customer information. In this paper, we focus on how to reduce the feature of input data and balance the training set for the predictive model to help the bank to increase the prediction rate. In the system performance evaluation, all accuracy rates of each predictive model based on the proposed approach compared with the original predictive model based on the truth positive and receiver operating characteristic measurement show the high performance in which the smaller number of features.",
"title": ""
},
{
"docid": "346d2ead797b07d9df0bccfb5bb07c9e",
"text": "There is no doubt that, chitosan is considered as one of the most important biopolymers that can easily extracted from nature resources or synthesized in the chemical laboratories. Chitosan also display a suitable number of important properties in different fields of applications. Recently, chitosan has been reported as a perfect candidate as a trestle macromolecule for variable biological fields of study. This include, tissue engineering. cell culture and gene delivery, etc. Furthermore, chitosan has widely used in different types of industries which include: food, agriculture, fragrance, and even cosmetic industries. Besides that, chitosan derivatives is treated as excellent tool in waste water treatment. Therefore, the present work gives a simple selective overview for different modifications of Chitosan macromolecule with a special attention to its biological interest. Prior that, a closer look to its resources, chemical structure as well as general properties has been also determined which include its solubility character and its molecular weight. Furthermore, the chemistry of chitosan has been also mentioned with selected examples of each type of interaction. Finally a brief for sulfone based modified chitosan has been reported including classical methods of synthesis and its experimental variants.",
"title": ""
},
{
"docid": "fd2bf3cc2097037d5141d2fe7cbead55",
"text": "We present a novel approach to video segmentation which won the 4th place in DAVIS challenge 2017. The method has two main components: in the first part we extract video object proposals from each frame. We develop a new algorithm based on one-shot video segmentation (OSVOS) algorithm to generate sequence-specific proposals that match to the human-annotated proposals in the first frame. This set is populated by the proposals from fully convolutional instance-aware image segmentation algorithm (FCIS). Then, we use the segment proposal tracking (SPT) algorithm to track object proposals in time and generate the spatio-temporal video object proposals. This approach learns video segments by bootstrapping them from temporally consistent object proposals, which can start from any frame. We extend this approach with a semi-Markov motion model to provide appearance motion multi-target inference, backtracking a segment started from frame T to the 1st frame, and a ”re-tracking” capability that learns a better object appearance model after inference has been done. With a dense CRF refinement method, this model achieved 61.5% overall accuracy in DAVIS challenge 2017.",
"title": ""
},
{
"docid": "a7addb99b27233e3b855af50d1f345d8",
"text": "Analog/mixed-signal machine learning (ML) accelerators exploit the unique computing capability of analog/mixed-signal circuits and inherent error tolerance of ML algorithms to obtain higher energy efficiencies than digital ML accelerators. Unfortunately, these analog/mixed-signal ML accelerators lack programmability, and even instruction set interfaces, to support diverse ML algorithms or to enable essential software control over the energy-vs-accuracy tradeoffs. We propose PROMISE, the first end-to-end design of a PROgrammable MIxed-Signal accElerator from Instruction Set Architecture (ISA) to high-level language compiler for acceleration of diverse ML algorithms. We first identify prevalent operations in widely-used ML algorithms and key constraints in supporting these operations for a programmable mixed-signal accelerator. Second, based on that analysis, we propose an ISA with a PROMISE architecture built with silicon-validated components for mixed-signal operations. Third, we develop a compiler that can take a ML algorithm described in a high-level programming language (Julia) and generate PROMISE code, with an IR design that is both language-neutral and abstracts away unnecessary hardware details. Fourth, we show how the compiler can map an application-level error tolerance specification for neural network applications down to low-level hardware parameters (swing voltages for each application Task) to minimize energy consumption. Our experiments show that PROMISE can accelerate diverse ML algorithms with energy efficiency competitive even with fixed-function digital ASICs for specific ML algorithms, and the compiler optimization achieves significant additional energy savings even for only 1% extra errors.",
"title": ""
},
{
"docid": "aa4e3c2db7f1a1ac749d5d34014e26a0",
"text": "In this paper, a novel text clustering technique is proposed to summarize text documents. The clustering method, so called ‘Ensemble Clustering Method’, combines both genetic algorithms (GA) and particle swarm optimization (PSO) efficiently and automatically to get the best clustering results. The summarization with this clustering method is to effectively avoid the redundancy in the summarized document and to show the good summarizing results, extracting the most significant and non-redundant sentence from clustering sentences of a document. We tested this technique with various text documents in the open benchmark datasets, DUC01 and DUC02. To evaluate the performances, we used F-measure and ROUGE. The experimental results show that the performance capability of our method is about 11% to 24% better than other summarization algorithms. Key-Words: Text Summarization; Extractive Summarization; Ensemble Clustering; Genetic Algorithms; Particle Swarm Optimization",
"title": ""
},
{
"docid": "933312292c64c916e69357c5aec42189",
"text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.",
"title": ""
},
{
"docid": "640824047e480ef5582d140b6595dbd9",
"text": "A wideband transition from coplanar waveguide (CPW) to substrate integrated waveguide (SIW) is proposed and presented in the 50 GHz frequency range. Electrically thick alumina was used in this case, representative for other high-permittivity substrates such as semiconductors. Simulations predict less than -15 dB return loss within a 35 % bandwidth. CPW probe measurements were carried out and 40 % bandwidth were achieved at -0.5 dB insertion loss for a single transition. Modified SIW via configurations being suitable for simplified fabrication on electrically thick substrates in the upper millimeter-wave spectrum are discussed in the second part.",
"title": ""
},
{
"docid": "332bcd9b49f3551d8f07e4f21a881804",
"text": "Attention plays a critical role in effective learning. By means of attention assessment, it helps learners improve and review their learning processes, and even discover Attention Deficit Hyperactivity Disorder (ADHD). Hence, this work employs modified smart glasses which have an inward facing camera for eye tracking, and an inertial measurement unit for head pose estimation. The proposed attention estimation system consists of eye movement detection, head pose estimation, and machine learning. In eye movement detection, the central point of the iris is found by the locally maximum curve via the Hough transform where the region of interest is derived by the identified left and right eye corners. The head pose estimation is based on the captured inertial data to generate physical features for machine learning. Here, the machine learning adopts Genetic Algorithm (GA)-Support Vector Machine (SVM) where the feature selection of Sequential Floating Forward Selection (SFFS) is employed to determine adequate features, and GA is to optimize the parameters of SVM. Our experiments reveal that the proposed attention estimation system can achieve the accuracy of 93.1% which is fairly good as compared to the conventional systems. Therefore, the proposed system embedded in smart glasses brings users mobile, convenient, and comfortable to assess their attention on learning or medical symptom checker.",
"title": ""
},
{
"docid": "a3b18ade3e983d91b7a8fc8d4cb6a75d",
"text": "The IC stripline method is one of those suggested in IEC-62132 to evaluate the susceptibility of ICs to radiated electromagnetic interference. In practice, it allows the multiple injection of the interference through the capacitive and inductive coupling of the IC package with the guiding structure (the stripline) in which the device under test is inserted. The pros and cons of this method are discussed and a variant of it is proposed with the aim to address the main problems that arise when evaluating the susceptibility of ICs encapsulated in small packages.",
"title": ""
},
{
"docid": "dde075f427d729d028d6d382670f8346",
"text": "Using social media Web sites is among the most common activity of today's children and adolescents. Any Web site that allows social interaction is considered a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. Such sites offer today's youth a portal for entertainment and communication and have grown exponentially in recent years. For this reason, it is important that parents become aware of the nature of social media sites, given that not all of them are healthy environments for children and adolescents. Pediatricians are in a unique position to help families understand these sites and to encourage healthy use and urge parents to monitor for potential problems with cyberbullying, \"Facebook depression,\" sexting, and exposure to inappropriate content.",
"title": ""
},
{
"docid": "dccec6a01de3b68d1e2a7ff8b0da7b9a",
"text": "Using social media for political analysis is becoming a common practice, especially during election time. Many researchers and media are trying to use social media to understand the public opinion and trend. In this paper, we investigate how we could use Twitter to predict public opinion and thus predict American republican presidential election results. We analyzed millions of tweets from September 2011 leading up to the republican primary elections. First we examine the previous methods regarding predicting election results with social media and then we integrate our understanding of social media and propose a prediction model to predict the public opinions towards Republican Presidential Elections. Our results highlight the feasibility of using social media to predict public opinions and thus replace traditional polling.",
"title": ""
},
{
"docid": "63baa6371fc07d3ef8186f421ddf1070",
"text": "With the first few words of Neural Networks and Intellect: Using Model-Based Concepts, Leonid Perlovsky embarks on the daring task of creating a mathematical concept of “the mind.” The content of the book actually exceeds even the most daring of expectations. A wide variety of concepts are linked together intertwining the development of artificial intelligence, evolutionary computation, and even the philosophical observations ranging from Aristotle and Plato to Kant and Gvdel. Perlovsky discusses fundamental questions with a number of engineering applications to filter them through philosophical categories (both ontological and epistemological). In such a fashion, the inner workings of the human mind, consciousness, language-mind relationships, learning, and emotions are explored mathematically in amazing details. Perlovsky even manages to discuss the concept of beauty perception in mathematical terms. Beginners will appreciate that Perlovsky starts with the basics. The first chapter contains an introduction to probability, statistics, and pattern recognition, along with the intuitive explanation of the complicated mathematical concepts. The second chapter reviews numerous mathematical approaches, algorithms, neural networks, and the fundamental mathematical ideas underlying each method. It analyzes fundamental limitations of the nearest neighbor methods and the simple neural network. Vapnik’s statistical learning theory, support vector machines, and Grossberg’s neural field theories are clearly explained. Roles of hierarchical organization and evolutionary computation are analyzed. Even experts in the field might find interesting the relationships among various algorithms and approaches. Fundamental mathematical issues include origins of combinatorial complexity (CC) of many algorithms and neural networks (operations or training) and its relationship to di-",
"title": ""
},
{
"docid": "d1c4e334e61c0d0d596311f8998996d7",
"text": "We consider the problem of scheduling small cloud functions on serverless computing platforms. Fast deployment and execution of these functions is critical, for example, for microservices architectures. However, functions that require large packages or libraries are bloated and start slowly. A solution is to cache packages at the worker nodes instead of bundling them with the functions. However, existing FaaS schedulers are vanilla load balancers, agnostic of any packages that may have been cached in response to prior function executions, and cannot reap the benefits of package caching (other than by chance). To address this problem, we propose a package-aware scheduling algorithm that tries to assign functions that require the same package to the same worker node. Our algorithm increases the hit rate of the package cache and, as a result, reduces the latency of the cloud functions. At the same time, we consider the load sustained by the workers and actively seek to avoid imbalance beyond a configurable threshold. Our preliminary evaluation shows that, even with our limited exploration of the configuration space so-far, we can achieve 66% performance improvement at the cost of a (manageable) higher node imbalance.",
"title": ""
},
{
"docid": "f83be6d305aed2929130ec6bab038820",
"text": "A design of single-feed dual-frequency patch antennas with different polarizations and radiation patterns is proposed. The antenna structure is composed of two stacked patches, in which the top is a square patch and the bottom is a corner-truncated square-ring patch, and the two patches are connected together with four conducting strips. Two operating frequencies can be found in the antenna structure. The radiations at the lower and higher frequencies are a broadside pattern with circular polarization and a conical pattern with linear polarization, respectively. A prototype operating at 1575 and 2400 MHz bands is constructed. Both experimental and simulated results show that the prototype has good performances and is suitable for GPS and WLAN applications.",
"title": ""
},
{
"docid": "c3ef6598f869e40fc399c89baf0dffd8",
"text": "In this article, a novel hybrid genetic algorithm is proposed. The selection operator, crossover operator and mutation operator of the genetic algorithm have effectively been improved according to features of Sudoku puzzles. The improved selection operator has impaired the similarity of the selected chromosome and optimal chromosome in the current population such that the chromosome with more abundant genes is more likely to participate in crossover; such a designed crossover operator has possessed dual effects of self-experience and population experience based on the concept of tactfully combining PSO, thereby making the whole iterative process highly directional; crossover probability is a random number and mutation probability changes along with the fitness value of the optimal solution in the current population such that more possibilities of crossover and mutation could then be considered during the algorithm iteration. The simulation results show that the convergence rate and stability of the novel algorithm has significantly been improved.",
"title": ""
},
{
"docid": "292ecbaf8275819635830f318efe07fc",
"text": "θ: model parameter p: momentum ξ: thermostat Euler Integrator θt+1 = θt + pth pt+1 = pt−∇θŨt(θt+1)h− diag(ξt)pth + √ 2Dζt+1 ξt+1 = ξt + (pt+1 pt+1− 1)h Symmetric Spltting Integrator A : θt+1/2 = θt + pth/2, ξt+1/2 = ξt + (pt pt− 1)h/2→ B : pt+1/3 = exp(−ξt+1/2h/2) pt→ O : pt+2/3 = pt+1/3−∇θŨt(θt+1/2)h + √ 2Dζt+1→ B : pt+1 = exp(−ξt+1/2h/2) pt+2/3→ A : θt+1 = θt+1/2 + pt+1h/2, ξt+1 = ξt+1/2 + (pt+1 pt+1− 1)h/2 where h is stepsize, D is diffusion factor, and ζ is Gaussian noise",
"title": ""
},
{
"docid": "3c7807921865fff76b8c65d510f5e32e",
"text": "We provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, called the conjugate exponential (CE) family. Our primary observation is that when models are in the CE family, we can privatise the variational posterior distributions simply by perturbing the expected sufficient statistics of the completedata likelihood. For widely used non-CE models with binomial likelihoods, we exploit the Pólya-Gamma data augmentation scheme to bring such models into the CE family, such that inferences in the modified model resemble the private variational Bayes algorithm as closely as possible. The iterative nature of variational Bayes presents a further challenge since iterations increase the amount of noise needed. We overcome this by combining: (1) a relaxed notion of differential privacy, called concentrated differential privacy, which provides a tight bound on the privacy cost of multiple VB iterations and thus significantly decreases the amount of additive noise; and (2) the privacy amplification effect of subsampling mini-batches from large-scale data in stochastic learning. We empirically demonstrate the effectiveness of our method in CE and non-CE models including latent Dirichlet allocation, Bayesian logistic regression, and sigmoid belief networks, evaluated on real-world datasets.",
"title": ""
}
] |
scidocsrr
|
5bbed6c30b7cef1945c29e36e8777be3
|
Intelligent irrigation system — An IOT based approach
|
[
{
"docid": "0ef58b9966c7d3b4e905e8306aad3359",
"text": "Agriculture is the back bone of India. To make the sustainable agriculture, this system is proposed. In this system ARM 9 processor is used to control and monitor the irrigation system. Different kinds of sensors are used. This paper presents a fully automated drip irrigation system which is controlled and monitored by using ARM9 processor. PH content and the nitrogen content of the soil are frequently monitored. For the purpose of monitoring and controlling, GSM module is implemented. The system informs user about any abnormal conditions like less moisture content and temperature rise, even concentration of CO2 via SMS through the GSM module.",
"title": ""
},
{
"docid": "a50f168329c1b44ed881e99d66fe7c13",
"text": "Indian agriculture is diverse; ranging from impoverished farm villages to developed farms utilizing modern agricultural technologies. Facility agriculture area in China is expanding, and is leading the world. However, its ecosystem control technology and system is still immature, with low level of intelligence. Promoting application of modern information technology in agriculture will solve a series of problems facing by farmers. Lack of exact information and communication leadsto the loss in production. Our paper is designed to over come these problems. This regulator provides an intelligent monitoring platform framework and system structure for facility agriculture ecosystem based on IOT[3]. This will be a catalyst for the transition from traditional farming to modern farming. This also provides opportunity for creating new technology and service development in IOT (internet of things) farming application. The Internet Of Things makes everything connected. Over 50 years since independence, India has made immense progress towards food productivity. The Indian population has tripled, but food grain production more than quadrupled[1]: there has thus been a substantial increase in available food grain per ca-pita. Modern agriculture practices have a great promise for the economic development of a nation. So we have brought-in an innovative project for the welfare of farmers and also for the farms. There are no day or night restrictions. This is helpful at any time.",
"title": ""
}
] |
[
{
"docid": "5251605df4db79f6a0fc2779a51938e2",
"text": "Drug bioavailability to the developing brain is a major concern in the treatment of neonates and infants as well as pregnant and breast-feeding women. Central adverse drug reactions can have dramatic consequences for brain development, leading to major neurological impairment. Factors setting the cerebral bioavailability of drugs include protein-unbound drug concentration in plasma, local cerebral blood flow, permeability across blood-brain interfaces, binding to neural cells, volume of cerebral fluid compartments, and cerebrospinal fluid secretion rate. Most of these factors change during development, which will affect cerebral drug concentrations. Regarding the impact of blood-brain interfaces, the blood-brain barrier located at the cerebral endothelium and the blood-cerebrospinal fluid barrier located at the choroid plexus epithelium both display a tight phenotype early on in embryos. However, the developmental regulation of some multispecific efflux transporters that also limit the entry of numerous drugs into the brain through barrier cells is expected to favor drug penetration in the neonatal brain. Finally, drug cerebral bioavailability is likely to be affected following perinatal injuries that alter blood-brain interface properties. A thorough investigation of these mechanisms is mandatory for a better risk assessment of drug treatments in pregnant or breast-feeding women, and in neonate and pediatric patients.",
"title": ""
},
{
"docid": "0b5ca91480dfff52de5c1d65c3b32f3d",
"text": "Spotting anomalies in large multi-dimensional databases is a crucial task with many applications in finance, health care, security, etc. We introduce COMPREX, a new approach for identifying anomalies using pattern-based compression. Informally, our method finds a collection of dictionaries that describe the norm of a database succinctly, and subsequently flags those points dissimilar to the norm---with high compression cost---as anomalies.\n Our approach exhibits four key features: 1) it is parameter-free; it builds dictionaries directly from data, and requires no user-specified parameters such as distance functions or density and similarity thresholds, 2) it is general; we show it works for a broad range of complex databases, including graph, image and relational databases that may contain both categorical and numerical features, 3) it is scalable; its running time grows linearly with respect to both database size as well as number of dimensions, and 4) it is effective; experiments on a broad range of datasets show large improvements in both compression, as well as precision in anomaly detection, outperforming its state-of-the-art competitors.",
"title": ""
},
{
"docid": "c9b9ac230838ffaff404784b66862013",
"text": "On the Mathematical Foundations of Theoretical Statistics. Author(s): R. A. Fisher. Source: Philosophical Transactions of the Royal Society of London. Series A Solutions to Exercises. 325. Bibliography. 347. Index Discrete mathematics is an essential part of the foundations of (theoretical) computer science, statistics . 2) Statistical Methods by S.P.Gupta. 3) Mathematical Statistics by Saxena & Kapoor. 4) Statistics by Sancheti & Kapoor. 5) Introduction to Mathematical Statistics Fundamentals of Mathematical statistics by Guptha, S.C &Kapoor, V.K (Sulthan chand. &sons). 2. Introduction to Mathematical statistics by Hogg.R.V and and .",
"title": ""
},
{
"docid": "bf65f2c68808755cfcd13e6cc7d0ccab",
"text": "Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.",
"title": ""
},
{
"docid": "3fcb9ab92334e3e214a7db08a93d5acd",
"text": "BACKGROUND\nA growing body of literature indicates that physical activity can have beneficial effects on mental health. However, previous research has mainly focussed on clinical populations, and little is known about the psychological effects of physical activity in those without clinically defined disorders.\n\n\nAIMS\nThe present study investigates the association between physical activity and mental health in an undergraduate university population based in the United Kingdom.\n\n\nMETHOD\nOne hundred students completed questionnaires measuring their levels of anxiety and depression using the Hospital Anxiety and Depression Scale (HADS) and their physical activity regime using the Physical Activity Questionnaire (PAQ).\n\n\nRESULTS\nSignificant differences were observed between the low, medium and high exercise groups on the mental health scales, indicating better mental health for those who engage in more exercise.\n\n\nCONCLUSIONS\nEngagement in physical activity can be an important contributory factor in the mental health of undergraduate students.",
"title": ""
},
{
"docid": "64d45fa63ac1ea987cec76bf69c4cc30",
"text": "Recently, community psychologists have re-vamped a set of 18 competencies considered important for how we practice community psychology. Three competencies are: (1) ethical, reflexive practice, (2) community inclusion and partnership, and (3) community education, information dissemination, and building public awareness. This paper will outline lessons I-a white working class woman academic-learned about my competency development through my research collaborations, using the lens of affective politics. I describe three lessons, from school-based research sites (elementary schools serving working class students of color and one elite liberal arts school serving wealthy white students). The first lesson, from an elementary school, concerns ethical, reflective practice. I discuss understanding my affect as a barometer of my ability to conduct research from a place of solidarity. The second lesson, which centers community inclusion and partnership, illustrates how I learned about the importance of \"before the beginning\" conversations concerning social justice and conflict when working in elementary schools. The third lesson concerns community education, information dissemination, and building public awareness. This lesson, from a college, taught me that I could stand up and speak out against classism in the face of my career trajectory being threatened. With these lessons, I flesh out key aspects of community practice competencies.",
"title": ""
},
{
"docid": "9d700ef057eb090336d761ebe7f6acb0",
"text": "This article presents initial results on a supervised machine learning approach to determine the semantics of noun compounds in Dutch and Afrikaans. After a discussion of previous research on the topic, we present our annotation methods used to provide a training set of compounds with the appropriate semantic class. The support vector machine method used for this classification experiment utilizes a distributional lexical semantics representation of the compound’s constituents to make its classification decision. The collection of words that occur in the near context of the constituent are considered an implicit representation of the semantics of this constituent. Fscores were reached of 47.8% for Dutch and 51.1% for Afrikaans. Keywords—compound semantics; Afrikaans; Dutch; machine learning; distributional methods",
"title": ""
},
{
"docid": "504377fd7a3b7c17d702d81d01a71bb6",
"text": "We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speakerindependent models, importance of the modalities and generalizability. The paper thus serve as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks.",
"title": ""
},
{
"docid": "c953895c57d8906736352698a55c24a9",
"text": "Data scientists and physicians are starting to use artificial intelligence (AI) even in the medical field in order to better understand the relationships among the huge amount of data coming from the great number of sources today available. Through the data interpretation methods made available by the recent AI tools, researchers and AI companies have focused on the development of models allowing to predict the risk of suffering from a specific disease, to make a diagnosis, and to recommend a treatment that is based on the best and most updated scientific evidence. Even if AI is used to perform unimaginable tasks until a few years ago, the awareness about the ongoing revolution has not yet spread through the medical community for several reasons including the lack of evidence about safety, reliability and effectiveness of these tools, the lack of regulation accompanying hospitals in the use of AI by health care providers, the difficult attribution of liability in case of errors and malfunctions of these systems, and the ethical and privacy questions that they raise and that, as of today, are still unanswered.",
"title": ""
},
{
"docid": "44cf5669d05a759ab21b3ebc1f6c340d",
"text": "Linear variable differential transformer (LVDT) sensors are widely used in hydraulic and pneumatic mechatronic systems for measuring physical quantities like displacement, force or pressure. The LVDT sensor consists of two magnetic coupled coils with a common core and this sensor converts the displacement of core into reluctance variation of magnetic circuit. LVDT sensors combines good accuracy (0.1 % error) with low cost, but they require relative complex electronics. Standard electronics for LVDT sensor conditioning is analog $the coupled coils constitute an inductive half-bridge supplied with 5 kHz sinus excitation from a quadrate oscillator. The output phase span is amplified and synchronous demodulated. This analog technology works well but has its drawbacks - hard to adjust, many components and packages, no connection to computer systems. To eliminate all these disadvantages, our team from \"Politehnica\" University of Bucharest has developed a LVDT signal conditioner using system on chip microcontroller MSP430F149 from Texas Instruments. This device integrates all peripherals required for LVDT signal conditioning (pulse width modulation modules, analog to digital converter, timers, enough memory resources and processing power) and offers also excellent low-power options. Resulting electronic module is a one-chip solution made entirely in SMD technology and its small dimensions allow its integration into sensor's body. Present paper focuses on specific issues of this digital solution for LVDT conditioning and compares it with classic analog solution from different points of view: error curve, power consumption, communication options, dimensions and production cost. Microcontroller software (firmware) and digital signal conditioning techniques for LVDT are also analyzed. Use of system on chip devices for signal conditioning allows realization of low cost compact transducers with same or better performances than their analog counterparts, but with extra options like serial communication channels, self-calibration, local storage of measured values and fault detection",
"title": ""
},
{
"docid": "8b5ad6c53d58feefe975e481e2352c52",
"text": "Virtual machine (VM) live migration is a critical feature for managing virtualized environments, enabling dynamic load balancing, consolidation for power management, preparation for planned maintenance, and other management features. However, not all virtual machine live migration is created equal. Variants include memory migration, which relies on shared backend storage between the source and destination of the migration, and storage migration, which migrates storage state as well as memory state. We have developed an automated testing framework that measures important performance characteristics of live migration, including total migration time, the time a VM is unresponsive during migration, and the amount of data transferred over the network during migration. We apply this testing framework and present the results of studying live migration, both memory migration and storage migration, in various virtualization systems including KVM, XenServer, VMware, and Hyper-V. The results provide important data to guide the migration decisions of both system administrators and autonomic cloud management systems.",
"title": ""
},
{
"docid": "8791b422ebeb347294db174168bab439",
"text": "Sleep is superior to waking for promoting performance improvements between sessions of visual perceptual and motor learning tasks. Few studies have investigated possible effects of sleep on auditory learning. A key issue is whether sleep specifically promotes learning, or whether restful waking yields similar benefits. According to the \"interference hypothesis,\" sleep facilitates learning because it prevents interference from ongoing sensory input, learning and other cognitive activities that normally occur during waking. We tested this hypothesis by comparing effects of sleep, busy waking (watching a film) and restful waking (lying in the dark) on auditory tone sequence learning. Consistent with recent findings for human language learning, we found that compared with busy waking, sleep between sessions of auditory tone sequence learning enhanced performance improvements. Restful waking provided similar benefits, as predicted based on the interference hypothesis. These findings indicate that physiological, behavioral and environmental conditions that accompany restful waking are sufficient to facilitate learning and may contribute to the facilitation of learning that occurs during sleep.",
"title": ""
},
{
"docid": "a583bbf2deac0bf99e2790c47598cddd",
"text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.",
"title": ""
},
{
"docid": "54ef290e7c8fbc5c1bcd459df9bc4a06",
"text": "Augmenter of Liver Regeneration (ALR) is a sulfhydryl oxidase carrying out fundamental functions facilitating protein disulfide bond formation. In mammals, it also functions as a hepatotrophic growth factor that specifically stimulates hepatocyte proliferation and promotes liver regeneration after liver damage or partial hepatectomy. Whether ALR also plays a role during vertebrate hepatogenesis is unknown. In this work, we investigated the function of alr in liver organogenesis in zebrafish model. We showed that alr is expressed in liver throughout hepatogenesis. Knockdown of alr through morpholino antisense oligonucleotide (MO) leads to suppression of liver outgrowth while overexpression of alr promotes liver growth. The small-liver phenotype in alr morphants results from a reduction of hepatocyte proliferation without affecting apoptosis. When expressed in cultured cells, zebrafish Alr exists as dimer and is localized in mitochondria as well as cytosol but not in nucleus or secreted outside of the cell. Similar to mammalian ALR, zebrafish Alr is a flavin-linked sulfhydryl oxidase and mutation of the conserved cysteine in the CxxC motif abolishes its enzymatic activity. Interestingly, overexpression of either wild type Alr or enzyme-inactive Alr(C131S) mutant promoted liver growth and rescued the liver growth defect of alr morphants. Nevertheless, alr(C131S) is less efficacious in both functions. Meantime, high doses of alr MOs lead to widespread developmental defects and early embryonic death in an alr sequence-dependent manner. These results suggest that alr promotes zebrafish liver outgrowth using mechanisms that are dependent as well as independent of its sulfhydryl oxidase activity. This is the first demonstration of a developmental role of alr in vertebrate. It exemplifies that a low-level sulfhydryl oxidase activity of Alr is essential for embryonic development and cellular survival. The dose-dependent and partial suppression of alr expression through MO-mediated knockdown allows the identification of its late developmental role in vertebrate liver organogenesis.",
"title": ""
},
{
"docid": "8fa721c98dac13157bcc891c06561ec7",
"text": "Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is „on the cards‟. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.",
"title": ""
},
{
"docid": "b5f2b13b5266c30ba02ff6d743e4b114",
"text": "The increasing scale, technology advances and services of modern networks have dramatically complicated their management such that in the near future it will be almost impossible for human administrators to monitor them. To control this complexity, IBM has introduced a promising approach aiming to create self-managed systems. This approach, called Autonomic Computing, aims to design computing equipment able to self-adapt its configuration and to self-optimize its performance depending on its situation in order to fulfill high-level objectives defined by the human operator. In this paper, we present our autonomic network management architecture (ANEMA) that implements several policy forms to achieve autonomic behaviors in the network equipments. In ANEMA, the high-level objectives of the human administrators and the users are captured and expressed in terms of ‘Utility Function’ policies. The ‘Goal’ policies describe the high-level management directives needed to guide the network to achieve the previous utility functions. Finally, the ‘behavioral’ policies describe the behaviors that should be followed by network equipments to react to changes in their context and to achieve the given ‘Goal’ policies. In order to highlight the benefits of ANEMA architecture and the continuum of policies to introduce autonomic management in a multiservice IP network, a testbed has been implemented and several scenarios have been executed. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f9b110890c90d48b6d2f84aa419c1598",
"text": "Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.",
"title": ""
},
{
"docid": "2871d80088d7cabd0cd5bdd5101e6018",
"text": "Owing to superior physical properties such as high electron saturation velocity and high electric breakdown field, GaN-based high electron mobility transistors (HEMTs) are capable of delivering superior performance in microwave amplifiers, high power switches, and high temperature integrated circuits (ICs). Compared to the conventional D-mode HEMTs with negative threshold voltages, enhancement-mode (E-mode) or normally-off HEMTs are desirable in these applications, for reduced circuit design complexity and fail-safe operation. Fluorine plasma treatment has been used to fabricate E-mode HEMTs [1], and is a robust process for the channel threshold voltage modulation. However, there is no standard equipment for this process and various groups have reported a wide range of process parameters [1–4]. In this work, we demonstrate the self-aligned enhancement-mode AlGaN/GaN HEMTs fabricated with a standard fluorine ion implantation. Ion implantation is widely used in semiconductor industry with well-controlled dose and precise implantation profile.",
"title": ""
},
{
"docid": "c62cc1b0a9c1c4cadede943b4cbd8050",
"text": "The problem of parsing has been studied extensively for various formal grammars. Given an input string and a grammar, the parsing problem is to check if the input string belongs to the language generated by the grammar. A closely related problem of great importance is one where the input are a string I and a grammar G and the task is to produce a string I ′ that belongs to the language generated by G and the ‘distance’ between I and I ′ is the smallest (from among all the strings in the language). Specifically, if I is in the language generated by G, then the output should be I. Any parser that solves this version of the problem is called an error correcting parser. In 1972 Aho and Peterson presented a cubic time error correcting parser for context free grammars. Since then this asymptotic time bound has not been improved under the (standard) assumption that the grammar size is a constant. In this paper we present an error correcting parser for context free grammars that runs in O(T (n)) time, where n is the length of the input string and T (n) is the time needed to compute the tropical product of two n× n matrices. In this paper we also present an n M -approximation algorithm for the language edit distance problem that has a run time of O(Mnω), where O(nω) is the time taken to multiply two n× n matrices. To the best of our knowledge, no approximation algorithms have been proposed for error correcting parsing for general context free grammars.",
"title": ""
},
{
"docid": "d64d589068d68ef19d7ac77ab55c8318",
"text": "Cloud computing is a revolutionary paradigm to deliver computing resources, ranging from data storage/processing to software, as a service over the network, with the benefits of efficient resource utilization and improved manageability. The current popular cloud computing models encompass a cluster of expensive and dedicated machines to provide cloud computing services, incurring significant investment in capital outlay and ongoing costs. A more cost effective solution would be to exploit the capabilities of an ad hoc cloud which consists of a cloud of distributed and dynamically untapped local resources. The ad hoc cloud can be further classified into static and mobile clouds: an ad hoc static cloud harnesses the underutilized computing resources of general purpose machines, whereas an ad hoc mobile cloud harnesses the idle computing resources of mobile devices. However, the dynamic and distributed characteristics of ad hoc cloud introduce challenges in system management. In this article, we propose a generic em autonomic mobile cloud (AMCloud) management framework for automatic and efficient service/resource management of ad hoc cloud in both static and mobile modes. We then discuss in detail the possible security and privacy issues in ad hoc cloud computing. A general security architecture is developed to facilitate the study of prevention and defense approaches toward a secure autonomic cloud system. This article is expected to be useful for exploring future research activities to achieve an autonomic and secure ad hoc cloud computing system.",
"title": ""
}
] |
scidocsrr
|
128fb3f7c2349f4e0d863b0a971a2752
|
A survey on information visualization: recent advances and challenges
|
[
{
"docid": "564675e793834758bd66e440b65be206",
"text": "While it is still most common for information visualization researchers to develop new visualizations from a data-or taskdriven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design.",
"title": ""
},
{
"docid": "0a6a3e82b701bfbdbb73a9e8573fc94a",
"text": "Providing effective feedback on resource consumption in the home is a key challenge of environmental conservation efforts. One promising approach for providing feedback about residential energy consumption is the use of ambient and artistic visualizations. Pervasive computing technologies enable the integration of such feedback into the home in the form of distributed point-of-consumption feedback devices to support decision-making in everyday activities. However, introducing these devices into the home requires sensitivity to the domestic context. In this paper we describe three abstract visualizations and suggest four design requirements that this type of device must meet to be effective: pragmatic, aesthetic, ambient, and ecological. We report on the findings from a mixed methods user study that explores the viability of using ambient and artistic feedback in the home based on these requirements. Our findings suggest that this approach is a viable way to provide resource use feedback and that both the aesthetics of the representation and the context of use are important elements that must be considered in this design space.",
"title": ""
}
] |
[
{
"docid": "07ef9eece7de49ee714d4a2adf9bb078",
"text": "Vegetable oil has been proven to be advantageous as a non-toxic, cost-effective and biodegradable solvent to extract polycyclic aromatic hydrocarbons (PAHs) from contaminated soils for remediation purposes. The resulting vegetable oil contained PAHs and therefore required a method for subsequent removal of extracted PAHs and reuse of the oil in remediation processes. In this paper, activated carbon adsorption of PAHs from vegetable oil used in soil remediation was assessed to ascertain PAH contaminated oil regeneration. Vegetable oils, originating from lab scale remediation, with different PAH concentrations were examined to study the adsorption of PAHs on activated carbon. Batch adsorption tests were performed by shaking oil-activated carbon mixtures in flasks. Equilibrium data were fitted with the Langmuir and Freundlich isothermal models. Studies were also carried out using columns packed with activated carbon. In addition, the effects of initial PAH concentration and activated carbon dosage on sorption capacities were investigated. Results clearly revealed the effectiveness of using activated carbon as an adsorbent to remove PAHs from the vegetable oil. Adsorption equilibrium of PAHs on activated carbon from the vegetable oil was successfully evaluated by the Langmuir and Freundlich isotherms. The initial PAH concentrations and carbon dosage affected adsorption significantly. The results indicate that the reuse of vegetable oil was feasible.",
"title": ""
},
{
"docid": "20c3bfb61bae83494d7451b083bc2202",
"text": "Peripheral nerve hyperexcitability (PNH) syndromes can be subclassified as primary and secondary. The main primary PNH syndromes are neuromyotonia, cramp-fasciculation syndrome (CFS), and Morvan's syndrome, which cause widespread symptoms and signs without the association of an evident peripheral nerve disease. Their major symptoms are muscle twitching and stiffness, which differ only in severity between neuromyotonia and CFS. Cramps, pseudomyotonia, hyperhidrosis, and some other autonomic abnormalities, as well as mild positive sensory phenomena, can be seen in several patients. Symptoms reflecting the involvement of the central nervous system occur in Morvan's syndrome. Secondary PNH syndromes are generally seen in patients with focal or diffuse diseases affecting the peripheral nervous system. The PNH-related symptoms and signs are generally found incidentally during clinical or electrodiagnostic examinations. The electrophysiological findings that are very useful in the diagnosis of PNH are myokymic and neuromyotonic discharges in needle electromyography along with some additional indicators of increased nerve fiber excitability. Based on clinicopathological and etiological associations, PNH syndromes can also be classified as immune mediated, genetic, and those caused by other miscellaneous factors. There has been an increasing awareness on the role of voltage-gated potassium channel complex autoimmunity in primary PNH pathogenesis. Then again, a long list of toxic compounds and genetic factors has also been implicated in development of PNH. The management of primary PNH syndromes comprises symptomatic treatment with anticonvulsant drugs, immune modulation if necessary, and treatment of possible associated dysimmune and/or malignant conditions.",
"title": ""
},
{
"docid": "a0db56f55e2d291cb7cf871c064cf693",
"text": "It's being very important to listen to social media streams whether it's Twitter, Facebook, Messenger, LinkedIn, email or even company own application. As many customers may be using this streams to reach out to company because they need help. The company have setup social marketing team to monitor this stream. But due to huge volumes of users it's very difficult to analyses each and every social message and take a relevant action to solve users grievances, which lead to many unsatisfied customers or may even lose a customer. This papers proposes a system architecture which will try to overcome the above shortcoming by analyzing messages of each ejabberd users to check whether it's actionable or not. If it's actionable then an automated Chatbot will initiates conversation with that user and help the user to resolve the issue by providing a human way interactions using LUIS and cognitive services. To provide a highly robust, scalable and extensible architecture, this system is implemented on AWS public cloud.",
"title": ""
},
{
"docid": "19d4662287a5c3ce1cef85fa601b74ba",
"text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.",
"title": ""
},
{
"docid": "ca6e39436be1b44ab0e20e0024cd0bbe",
"text": "This paper introduces a new approach, named micro-crowdfunding, for motivating people to participate in achieving a sustainable society. Increasing people's awareness of how they participate in maintaining the sustainability of common resources, such as public sinks, toilets, shelves, and office areas, is central to achieving a sustainable society. Micro-crowdfunding, as proposed in the paper, is a new type of community-based crowdsourcing architecture that is based on the crowdfunding concept and uses the local currency idea as a tool for encouraging people who live in urban environments to increase their awareness of how important it is to sustain small, common resources through their minimum efforts. Because our approach is lightweight and uses a mobile phone, people can participate in micro-crowdfunding activities with little effort anytime and anywhere.\n We present the basic concept of micro-crowdfunding and a prototype system. We also describe our experimental results, which show how economic and social factors are effective in facilitating micro-crowdfunding. Our results show that micro-crowdfunding increases the awareness about social sustainability, and we believe that micro-crowdfunding makes it possible to motivate people for achieving a sustainable society.",
"title": ""
},
{
"docid": "38935c773fb3163a1841fcec62b3e15a",
"text": "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task: the network learns to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ‘diagnostic classifiers’ to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.",
"title": ""
},
{
"docid": "6064bdefac3e861bcd46fa303b0756be",
"text": "Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the \"bag-of-words\" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful.",
"title": ""
},
{
"docid": "aa4e3c2db7f1a1ac749d5d34014e26a0",
"text": "In this paper, a novel text clustering technique is proposed to summarize text documents. The clustering method, so called ‘Ensemble Clustering Method’, combines both genetic algorithms (GA) and particle swarm optimization (PSO) efficiently and automatically to get the best clustering results. The summarization with this clustering method is to effectively avoid the redundancy in the summarized document and to show the good summarizing results, extracting the most significant and non-redundant sentence from clustering sentences of a document. We tested this technique with various text documents in the open benchmark datasets, DUC01 and DUC02. To evaluate the performances, we used F-measure and ROUGE. The experimental results show that the performance capability of our method is about 11% to 24% better than other summarization algorithms. Key-Words: Text Summarization; Extractive Summarization; Ensemble Clustering; Genetic Algorithms; Particle Swarm Optimization",
"title": ""
},
{
"docid": "5e7b935a73180c9ccad3bc0e82311503",
"text": "What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 10,335 images in which a variety of external forces are applied to different types of objects resulting in more than 65,000 object movements represented in 3D. Our experimental evaluations show that the challenging task of predicting longterm movements of objects as their reaction to external forces is possible from a single image.",
"title": ""
},
{
"docid": "39cb45c62b83a40f8ea42cb872a7aa59",
"text": "Levy flights are employed in a lattice model of contaminant migration by bioturbation, the reworking of sediment by benthic organisms. The model couples burrowing, foraging, and conveyor-belt feeding with molecular diffusion. The model correctly predicts a square-root dependence on bioturbation rates over a wide range of biomass densities. The model is used to predict the effect of bioturbation on the redistribution of contaminants in laboratory microcosms containing pyrene-inoculated sediments and the tubificid oligochaete Limnodrilus hoffmeisteri. The model predicts the dynamic flux from the sediment and in-bed concentration profiles that are consistent with observations. The sensitivity of flux and concentration profiles to the specific mechanisms of bioturbation are explored with the model. The flux of pyrene to the overlying water was largely controlled by the simulated foraging activities.",
"title": ""
},
{
"docid": "716f8cadac94110c4a00bc81480a4b66",
"text": "The last decade has witnessed the prevalence of sensor and GPS technologies that produce a sheer volume of trajectory data representing the motion history of moving objects. Measuring similarity between trajectories is undoubtedly one of the most important tasks in trajectory data management since it serves as the foundation of many advanced analyses such as similarity search, clustering, and classification. In this light, tremendous efforts have been spent on this topic, which results in a large number of trajectory similarity measures. Generally, each individual work introducing a new distance measure has made specific claims on the superiority of their proposal. However, for most works, the experimental study was focused on demonstrating the efficiency of the search algorithms, leaving the effectiveness aspect unverified empirically. In this paper, we conduct a comparative experimental study on the effectiveness of six widely used trajectory similarity measures based on a real taxi trajectory dataset. By applying a variety of transformations we designed for each original trajectory, our experimental observations demonstrate the advantages and drawbacks of these similarity measures in different circumstances.",
"title": ""
},
{
"docid": "7a2aef39046fe0704061195cc37a010a",
"text": "Conventional design of ferrite-cored inductor employs air gaps to store magnetic energy. In this work, the gap length is allowed to be smaller than the conventional value so that the nonlinear ferrite material is biased in the region with low permeability and, hence, significant energy density. A peak in the inductance-gap relationship has thus been uncovered where the total energy stored in the gaps and the core is maximized. A reluctance model is formulated to explain the peaking behavior, and is verified experimentally. Curves of inductance versus gap length are generated to aid the design of swinging inductance and reduce the core size.",
"title": ""
},
{
"docid": "8b41f536667fda5bfaf25d7ac8d71ab0",
"text": "Video question answering (VideoQA) always involves visual reasoning. When answering questions composing of multiple logic correlations, models need to perform multi-step reasoning. In this paper, we formulate multi-step reasoning in VideoQA as a new task to answer compositional and logical structured questions based on video content. Existing VideoQA datasets are inadequate as benchmarks for the multi-step reasoning due to limitations such as lacking logical structure and having language biases. Thus we design a system to automatically generate a large-scale dataset, namely SVQA (Synthetic Video Question Answering). Compared with other VideoQA datasets, SVQA contains exclusively long and structured questions with various spatial and temporal relations between objects. More importantly, questions in SVQA can be decomposed into human readable logical tree or chain layouts, each node of which represents a sub-task requiring a reasoning operation such as comparison or arithmetic. Towards automatic question answering in SVQA, we develop a new VideoQA model. Particularly, we construct a new attention module, which contains spatial attention mechanism to address crucial and multiple logical sub-tasks embedded in questions, as well as a refined GRU called ta-GRU (temporal-attention GRU) to capture the long-term temporal dependency and gather complete visual cues. Experimental results show the capability of multi-step reasoning of SVQA and the effectiveness of our model when compared with other existing models.",
"title": ""
},
{
"docid": "15881d5448e348c6e1a63e195daa68eb",
"text": "Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, we observed that bottleneck autoencoders produce subjectively low quality reconstructed images. In this work, we explore the ability of sparse coding to improve reconstructed image quality for the same degree of compression. We observe that sparse image compression produces visually superior reconstructed images and yields higher values of pixel-wise measures of reconstruction quality (PSNR and SSIM) compared to bottleneck autoencoders. In addition, we find that using alternative metrics that correlate better with human perception, such as feature perceptual loss and the classification accuracy, sparse image compression scores up to 18.06% and 2.7% higher, respectively, compared to bottleneck autoencoders. Although computationally much more intensive, we find that sparse coding is otherwise superior to bottleneck autoencoders for the same degree of compression.",
"title": ""
},
{
"docid": "f8f58b75c754f1ed41cdf223a59521b0",
"text": "Domain-invariant (view-invariant and modality-invariant) feature representation is essential for human action recognition. Moreover, given a discriminative visual representation, it is critical to discover the latent correlations among multiple actions in order to facilitate action modeling. To address these problems, we propose a multi-domain and multi-task learning (MDMTL) method to: 1) extract domain-invariant information for multi-view and multi-modal action representation and 2) explore the relatedness among multiple action categories. Specifically, we present a sparse transfer learning-based method to co-embed multi-domain (multi-view and multi-modality) data into a single common space for discriminative feature learning. Additionally, visual feature learning is incorporated into the multi-task learning framework, with the Frobenius-norm regularization term and the sparse constraint term, for joint task modeling and task relatedness-induced feature learning. To the best of our knowledge, MDMTL is the first supervised framework to jointly realize domain-invariant feature learning and task modeling for multi-domain action recognition. Experiments conducted on the INRIA Xmas Motion Acquisition Sequences data set, the MSR Daily Activity 3D (DailyActivity3D) data set, and the Multi-modal & Multi-view & Interactive data set, which is the most recent and largest multi-view and multi-model action recognition data set, demonstrate the superiority of MDMTL over the state-of-the-art approaches.",
"title": ""
},
{
"docid": "6c9c06604d5ef370b803bb54b4fe1e0c",
"text": "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of minibatch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.",
"title": ""
},
{
"docid": "d647470f1fd0ba1898ca766001d20de6",
"text": "Despite the fact that many people suffer from it, an unequivocal definition of dry nose (DN) is not available. Symptoms range from the purely subjective sensation of a rather dry nose to visible crusting of the (inner) nose (nasal mucosa), and a wide range of combinations are met with. Relevant diseases are termed rhinitis sicca anterior, primary and secondary rhinitis atrophicans, rhinitis atrophicans with foetor (ozena), and empty nose syndrome. The diagnosis is based mainly on the patient’s history, inspection of the external and inner nose, endoscopy of the nasal cavity (and paranasal sinuses) and the nasopharynx, with CT, allergy testing and microbiological swabs being performed where indicated. Treatment consists in the elimination of predisposing factors, moistening, removal of crusts, avoidance of injurious factors, care of the mucosa, treatment of infections and where applicable, correction of an over-large air space. Since the uncritical resection of the nasal turbinates is a significant and frequent factor in the genesis of dry nose, secondary RA and ENS, the inferior and middle turbinate should not be resected without adequate justification, and the simultaneous removal of both should not be done other than for a malignant condition. In this paper, we review both the aetiology and clinical presentation of the conditions associated with the symptom dry nose, and its conservative and surgical management.",
"title": ""
},
{
"docid": "6db5de1bb37513c3c251624947ee4e8f",
"text": "The proliferation of Ambient Intelligence (AmI) devices and services and their integration in smart environments creates the need for a simple yet effective way of controlling and communicating with them. Towards that direction, the application of the Trigger -- Action model has attracted a lot of research with many systems and applications having been developed following that approach. This work introduces ParlAmI, a multimodal conversational interface aiming to give its users the ability to determine the behavior of AmI environments, by creating rules using natural language as well as a GUI. The paper describes ParlAmI, its requirements and functionality, and presents the findings of a user-based evaluation which was conducted.",
"title": ""
},
{
"docid": "4bdcc552853c8b658762c0c5d509f362",
"text": "In this work, we study the problem of partof-speech tagging for Tweets. In contrast to newswire articles, Tweets are usually informal and contain numerous out-ofvocabulary words. Moreover, there is a lack of large scale labeled datasets for this domain. To tackle these challenges, we propose a novel neural network to make use of out-of-domain labeled data, unlabeled in-domain data, and labeled indomain data. Inspired by adversarial neural networks, the proposed method tries to learn common features through adversarial discriminator. In addition, we hypothesize that domain-specific features of target domain should be preserved in some degree. Hence, the proposed method adopts a sequence-to-sequence autoencoder to perform this task. Experimental results on three different datasets show that our method achieves better performance than state-of-the-art methods.",
"title": ""
},
{
"docid": "7e5b18a0356a89a0285f80a2224d8b12",
"text": "Machine recognition of a handwritten mathematical expression (HME) is challenging due to the ambiguities of handwritten symbols and the two-dimensional structure of mathematical expressions. Inspired by recent work in deep learning, we present Watch, Attend and Parse (WAP), a novel end-to-end approach based on neural network that learns to recognize HMEs in a two-dimensional layout and outputs them as one-dimensional character sequences in LaTeX format. Inherently unlike traditional methods, our proposed model avoids problems that stem from symbol segmentation, and it does not require a predefined expression grammar. Meanwhile, the problems of symbol recognition and structural analysis are handled, respectively, using a watcher and a parser. We employ a convolutional neural network encoder that takes HME images as input as the watcher and employ a recurrent neural network decoder equipped with an attention mechanism as the parser to generate LaTeX sequences. Moreover, the correspondence between the input expressions and the output LaTeX sequences is learned automatically by the attention mechanism. We validate the proposed approach on a benchmark published by the CROHME international competition. Using the official training dataset, WAP significantly outperformed the state-of-the-art method with an expression recognition accuracy of 46.55% on CROHME 2014 and 44.55% on CROHME 2016. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
d93f93049619c519e11f4b4601712615
|
Gamifying Information Systems - a synthesis of Gamification mechanics and Dynamics
|
[
{
"docid": "4f6a6f633e512a33fc0b396765adcdf0",
"text": "Interactive systems often require calibration to ensure that input and output are optimally configured. Without calibration, user performance can degrade (e.g., if an input device is not adjusted for the user's abilities), errors can increase (e.g., if color spaces are not matched), and some interactions may not be possible (e.g., use of an eye tracker). The value of calibration is often lost, however, because many calibration processes are tedious and unenjoyable, and many users avoid them altogether. To address this problem, we propose calibration games that gather calibration data in an engaging and entertaining manner. To facilitate the creation of calibration games, we present design guidelines that map common types of calibration to core tasks, and then to well-known game mechanics. To evaluate the approach, we developed three calibration games and compared them to standard procedures. Users found the game versions significantly more enjoyable than regular calibration procedures, without compromising the quality of the data. Calibration games are a novel way to motivate users to carry out calibrations, thereby improving the performance and accuracy of many human-computer systems.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "bf5f08174c55ed69e454a87ff7fbe6e2",
"text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "f9cddbf2b0df51aeaf240240bd324b33",
"text": "Grammatical agreement means that features associated with one linguistic unit (for example number or gender) become associated with another unit and then possibly overtly expressed, typically with morphological markers. It is one of the key mechanisms used in many languages to show that certain linguistic units within an utterance grammatically depend on each other. Agreement systems are puzzling because they can be highly complex in terms of what features they use and how they are expressed. Moreover, agreement systems have undergone considerable change in the historical evolution of languages. This article presents language game models with populations of agents in order to find out for what reasons and by what cultural processes and cognitive strategies agreement systems arise. It demonstrates that agreement systems are motivated by the need to minimize combinatorial search and semantic ambiguity, and it shows, for the first time, that once a population of agents adopts a strategy to invent, acquire and coordinate meaningful markers through social learning, linguistic self-organization leads to the spontaneous emergence and cultural transmission of an agreement system. The article also demonstrates how attested grammaticalization phenomena, such as phonetic reduction and conventionalized use of agreement markers, happens as a side effect of additional economizing principles, in particular minimization of articulatory effort and reduction of the marker inventory. More generally, the article illustrates a novel approach for studying how key features of human languages might emerge.",
"title": ""
},
{
"docid": "143da39941ecc8fb69e87d611503b9c0",
"text": "A dual-core 64b Xeonreg MP processor is implemented in a 65nm 8M process. The 435mm2 die has 1.328B transistors. Each core has two threads and a unified 1MB L2 cache. The 16MB unified, 16-way set-associative L3 cache implements both sleep and shut-off leakage reduction modes",
"title": ""
},
{
"docid": "e5f38cb3857c5101111c69d7318ebcbc",
"text": "Rotator cuff tendinitis is one of the main causes of shoulder pain. The objective of this study was to evaluate the possible additive effects of low-power laser treatment in combination with conventional physiotherapy endeavors in these patients. A total of 50 patients who were referred to the Physical Medicine and Rehabilitation Clinic with shoulder pain and rotator cuff disorders were selected. Pain severity measured with visual analogue scale (VAS), abduction, and external rotation range of motion in shoulder joint was measured by goniometry, and evaluation of daily functional abilities of patients was measured by shoulder disability questionnaire. Twenty-five of the above patients were randomly assigned into the control group and received only routine physiotherapy. The other 25 patients were assigned into the experimental group and received conventional therapy plus low-level laser therapy (4 J/cm2 at each point over a maximum of ten painful points of shoulder region for total 5 min duration). The above measurements were assessed at the end of the third week of therapy in each group and the results were analyzed statistically. In both groups, statistically significant improvement was detected in all outcome measures compared to baseline (p < 0.05). Comparison between two different groups revealed better results for control of pain (reduction in VAS average) and shoulder disability problems in the experimental group versus the control (3.1 ± 2.2 vs. 5 ± 2.6, p = 0.029 and 4.4 ± 3.1 vs. 8.5 ± 5.1, p = 0.031, respectively ) after intervention. Positive objective signs also had better results in the experimental group, but the mean range of active abduction (144.92 ± 31.6 vs. 132.80 ± 31.3) and external rotation (78.0 ± 19.5 vs. 76.3 ± 19.1) had no significant difference between the two groups (p = 0.20 and 0.77, respectively). As one of physical modalities, gallium-arsenide low-power laser combined with conventional physiotherapy has superiority over routine physiotherapy from the view of decreasing pain and improving the patient’s function, but no additional advantages were detected in increasing shoulder joint range of motion in comparison to other physical agents.",
"title": ""
},
{
"docid": "e1bd202db576085b70f0494d29791a5b",
"text": "Object class labelling is the task of annotating images with labels on the presence or absence of objects from a given class vocabulary. Simply asking one yes-no question per class, however, has a cost that is linear in the vocabulary size and is thus inefficient for large vocabularies. Modern approaches rely on a hierarchical organization of the vocabulary to reduce annotation time, but remain expensive (several minutes per image for the 200 classes in ILSVRC). Instead, we propose a new interface where classes are annotated via speech. Speaking is fast and allows for direct access to the class name, without searching through a list or hierarchy. As additional advantages, annotators can simultaneously speak and scan the image for objects, the interface can be kept extremely simple, and using it requires less mouse movement. However, a key challenge is to train annotators to only say words from the given class vocabulary. We present a way to tackle this challenge and show that our method yields high-quality annotations at significant speed gains (2.3− 14.9× faster than existing methods).",
"title": ""
},
{
"docid": "0485beab9d781e99046042a15ea913c5",
"text": "Systems for processing continuous monitoring queries over data streams must be adaptive because data streams are often bursty and data characteristics may vary over time. We focus on one particular type of adaptivity: the ability to gracefully degrade performance via \"load shedding\" (dropping unprocessed tuples to reduce system load) when the demands placed on the system cannot be met in full given available resources. Focusing on aggregation queries, we present algorithms that determine at what points in a query plan should load shedding be performed and what amount of load should be shed at each point in order to minimize the degree of inaccuracy introduced into query answers. We report the results of experiments that validate our analytical conclusions.",
"title": ""
},
{
"docid": "9e208e6beed62575a92f32031b7af8ad",
"text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.",
"title": ""
},
{
"docid": "84646992c6de3b655f8ccd2bda3e6d4c",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A doi:10.1016/j.eswa.2012.02.064 ⇑ Corresponding author. E-mail addresses: [email protected] (R. C bo.it (M. Ferrara). This paper proposes a novel fingerprint retrieval system that combines level-1 (local orientation and frequencies) and level-2 (minutiae) features. Various scoreand rank-level fusion strategies and a novel hybrid fusion approach are evaluated. Extensive experiments are carried out on six public databases and a systematic comparison is made with eighteen retrieval methods and seventeen exclusive classification techniques published in the literature. The novel approach achieves impressive results: its retrieval accuracy is definitely higher than competing state-of-the-art methods, with error rates that in some cases are even one or two orders of magnitude smaller. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4de971edc8e677d554ae77f6976fc5d3",
"text": "With the widespread use of encrypted data transport network traffic encryption is becoming a standard nowadays. This presents a challenge for traffic measurement, especially for analysis and anomaly detection methods which are dependent on the type of network traffic. In this paper, we survey existing approaches for classification and analysis of encrypted traffic. First, we describe the most widespread encryption protocols used throughout the Internet. We show that the initiation of an encrypted connection and the protocol structure give away a lot of information for encrypted traffic classification and analysis. Then, we survey payload and feature-based classification methods for encrypted traffic and categorize them using an established taxonomy. The advantage of some of described classification methods is the ability to recognize the encrypted application protocol in addition to the encryption protocol. Finally, we make a comprehensive comparison of the surveyed feature-based classification methods and present their weaknesses and strengths. Copyright c © 2014 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "83f067159913e65410a054681461ab4d",
"text": "Cloud computing has revolutionized the way computing and software services are delivered to the clients on demand. It offers users the ability to connect to computing resources and access IT managed services with a previously unknown level of ease. Due to this greater level of flexibility, the cloud has become the breeding ground of a new generation of products and services. However, the flexibility of cloud-based services comes with the risk of the security and privacy of users' data. Thus, security concerns among users of the cloud have become a major barrier to the widespread growth of cloud computing. One of the security concerns of cloud is data mining based privacy attacks that involve analyzing data over a long period to extract valuable information. In particular, in current cloud architecture a client entrusts a single cloud provider with his data. It gives the provider and outside attackers having unauthorized access to cloud, an opportunity of analyzing client data over a long period to extract sensitive information that causes privacy violation of clients. This is a big concern for many clients of cloud. In this paper, we first identify the data mining based privacy risks on cloud data and propose a distributed architecture to eliminate the risks.",
"title": ""
},
{
"docid": "804ddcaf56ef34b0b578cc53d7cca304",
"text": "This review article describes two protocols adapted from lung ultrasound: the bedside lung ultrasound in emergency (BLUE)-protocol for the immediate diagnosis of acute respiratory failure and the fluid administration limited by lung sonography (FALLS)-protocol for the management of acute circulatory failure. These applications require the mastery of 10 signs indicating normal lung surface (bat sign, lung sliding, A-lines), pleural effusions (quad and sinusoid sign), lung consolidations (fractal and tissue-like sign), interstitial syndrome (lung rockets), and pneumothorax (stratosphere sign and the lung point). These signs have been assessed in adults, with diagnostic accuracies ranging from 90% to 100%, allowing consideration of ultrasound as a reasonable bedside gold standard. In the BLUE-protocol, profiles have been designed for the main diseases (pneumonia, congestive heart failure, COPD, asthma, pulmonary embolism, pneumothorax), with an accuracy > 90%. In the FALLS-protocol, the change from A-lines to lung rockets appears at a threshold of 18 mm Hg of pulmonary artery occlusion pressure, providing a direct biomarker of clinical volemia. The FALLS-protocol sequentially rules out obstructive, then cardiogenic, then hypovolemic shock for expediting the diagnosis of distributive (usually septic) shock. These applications can be done using simple grayscale machines and one microconvex probe suitable for the whole body. Lung ultrasound is a multifaceted tool also useful for decreasing radiation doses (of interest in neonates where the lung signatures are similar to those in adults), from ARDS to trauma management, and from ICUs to points of care. If done in suitable centers, training is the least of the limitations for making use of this kind of visual medicine.",
"title": ""
},
{
"docid": "733ddc5a642327364c2bccb6b1258fac",
"text": "Human memory is unquestionably a vital cognitive ability but one that can often be unreliable. External memory aids such as diaries, photos, alarms and calendars are often employed to assist in remembering important events in our past and future. The recent trend for lifelogging, continuously documenting ones life through wearable sensors and cameras, presents a clear opportunity to augment human memory beyond simple reminders and actually improve its capacity to remember. This article surveys work from the fields of computer science and psychology to understand the potential for such augmentation, the technologies necessary for realising this opportunity and to investigate what the possible benefits and ethical pitfalls of using such technology might be.",
"title": ""
},
{
"docid": "f9bd86958566868d2da17aad9c5029df",
"text": "A Multi-Agent System (MAS) is an organization of coordinated autonomous agents that interact in order to achieve common goals. Considering real world organizations as an analogy, this paper proposes architectural styles for MAS which adopt concepts from organization theory and strategic alliances literature. The styles are intended to represent a macro-level architecture of a MAS, and they are modeled using the i* framework which offers the notions of actor, goal and actor dependency for modeling multi-agent settings. The styles are also specified as metaconcepts in the Telos modeling language. Moreover, each style is evaluated with respect to a set of software quality attributes, such as predictability and adaptability. The paper also explores the adoption of micro-level patterns proposed elsewhere in order to give a finer-grain description of a MAS architecture. These patterns define how goals assigned to actors participating in an organizational architecture will be fulfilled by agents. An e-business example illustrates both the styles and patterns proposed in this work. The research is being conducted within the context of Tropos, a comprehensive software development methodology for agent-oriented software.",
"title": ""
},
{
"docid": "b181559966c55d90741f62e645b7d2f7",
"text": "BACKGROUND AND AIMS\nPsychological stress is associated with inflammatory bowel disease [IBD], but the nature of this relationship is complex. At present, there is no simple tool to screen for stress in IBD clinical practice or assess stress repeatedly in longitudinal studies. Our aim was to design a single-question 'stressometer' to rapidly measure stress and validate this in IBD patients.\n\n\nMETHODS\nIn all, 304 IBD patients completed a single-question 'stressometer'. This was correlated with stress as measured by the Depression Anxiety Stress Scales [DASS-21], quality of life, and disease activity. Test-retest reliability was assessed in 31 patients who completed the stressometer and the DASS-21 on two occasions 4 weeks apart.\n\n\nRESULTS\nStressometer levels correlated with the DASS-21 stress dimension in both Crohn's disease [CD] (Spearman's rank correlation coefficient [rs] 0.54; p < 0.001) and ulcerative colitis [UC] [rs 0.59; p < 0.001]. Stressometer levels were less closely associated with depression and anxiety [rs range 0.36 to 0.49; all p-values < 0.001]. Stressometer scores correlated with all four Short Health Scale quality of life dimensions in both CD and UC [rs range 0.35 to 0.48; all p-values < 0.001] and with disease activity in Crohn's disease [rs 0.46; p < 0.001] and ulcerative colitis [rs 0.20; p = 0.02]. Responsiveness was confirmed with a test-retest correlation of 0.43 [p = 0.02].\n\n\nCONCLUSIONS\nThe stressometer is a simple, valid, and responsive measure of psychological stress in IBD patients and may be a useful patient-reported outcome measure in future IBD clinical and research assessments.",
"title": ""
},
{
"docid": "f3b9269e3d6e6098384eda277129864c",
"text": "Action planning using learned and differentiable forward models of the world is a general approach which has a number of desirable properties, including improved sample complexity over modelfree RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces. However, this approach does not apply straightforwardly when the action space is discrete. In this work, we show that it is in fact possible to effectively perform planning via backprop in discrete action spaces, using a simple paramaterization of the actions vectors on the simplex combined with input noise when training the forward model. Our experiments show that this approach can match or outperform model-free RL and discrete planning methods on gridworld navigation tasks in terms of performance and/or planning time while using limited environment interactions, and can additionally be used to perform model-based control in a challenging new task where the action space combines discrete and continuous actions. We furthermore propose a policy distillation approach which yields a fast policy network which can be used at inference time, removing the need for an iterative planning procedure.",
"title": ""
},
{
"docid": "46e8609b7cf5cfc970aa75fa54d3551d",
"text": "BACKGROUND\nAims were to assess the efficacy of metacognitive training (MCT) in people with a recent onset of psychosis in terms of symptoms as a primary outcome and metacognitive variables as a secondary outcome.\n\n\nMETHOD\nA multicenter, randomized, controlled clinical trial was performed. A total of 126 patients were randomized to an MCT or a psycho-educational intervention with cognitive-behavioral elements. The sample was composed of people with a recent onset of psychosis, recruited from nine public centers in Spain. The treatment consisted of eight weekly sessions for both groups. Patients were assessed at three time-points: baseline, post-treatment, and at 6 months follow-up. The evaluator was blinded to the condition of the patient. Symptoms were assessed with the PANSS and metacognition was assessed with a battery of questionnaires of cognitive biases and social cognition.\n\n\nRESULTS\nBoth MCT and psycho-educational groups had improved symptoms post-treatment and at follow-up, with greater improvements in the MCT group. The MCT group was superior to the psycho-educational group on the Beck Cognitive Insight Scale (BCIS) total (p = 0.026) and self-certainty (p = 0.035) and dependence self-subscale of irrational beliefs, comparing baseline and post-treatment. Moreover, comparing baseline and follow-up, the MCT group was better than the psycho-educational group in self-reflectiveness on the BCIS (p = 0.047), total BCIS (p = 0.045), and intolerance to frustration (p = 0.014). Jumping to Conclusions (JTC) improved more in the MCT group than the psycho-educational group (p = 0.021). Regarding the comparison within each group, Theory of Mind (ToM), Personalizing Bias, and other subscales of irrational beliefs improved in the MCT group but not the psycho-educational group (p < 0.001-0.032).\n\n\nCONCLUSIONS\nMCT could be an effective psychological intervention for people with recent onset of psychosis in order to improve cognitive insight, JTC, and tolerance to frustration. It seems that MCT could be useful to improve symptoms, ToM, and personalizing bias.",
"title": ""
},
{
"docid": "31c2dc8045f43c7bf1aa045e0eb3b9ad",
"text": "This paper addresses the task of functional annotation of genes from biomedical literature. We view this task as a hierarchical text categorization problem with Gene Ontology as a class hierarchy. We present a novel global hierarchical learning approach that takes into account the semantics of a class hierarchy. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not consider any hierarchical information. In addition, we propose a novel hierarchical evaluation measure that gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy.",
"title": ""
},
{
"docid": "d6d8ef59feb54c76fdcc43b31b9bf5f8",
"text": "We consider the classical TD(0) algorithm implemented on a network of agents wherein the agents also incorporate updates received from neighboring agents using a gossip-like mechanism. The combined scheme is shown to converge for both discounted and average cost problems.",
"title": ""
},
{
"docid": "38a18bfce2cb33b390dd7c7cf5a4afd1",
"text": "Automatic photo assessment is a high emerging research field with wide useful ‘real-world’ applications. Due to the recent advances in deep learning, one can observe very promising approaches in the last years. However, the proposed solutions are adapted and optimized for ‘isolated’ datasets making it hard to understand the relationship between them and to benefit from the complementary information. Following a unifying approach, we propose in this paper a learning model that integrates the knowledge from different datasets. We conduct a study based on three representative benchmark datasets for photo assessment. Instead of developing for each dataset a specific model, we design and adapt sequentially a unique model which we nominate UNNA. UNNA consists of a deep convolutional neural network, that predicts for a given image three kinds of aesthetic information: technical quality, high-level semantical quality, and a detailed description of photographic rules. Due to the sequential adaptation that exploits the common features between the chosen datasets, UNNA has comparable performances with the state-of-the-art solutions with effectively less parameter. The final architecture of UNNA gives us some interesting indication of the kind of shared features as well as individual aspects of the considered datasets.",
"title": ""
},
{
"docid": "91ed0637e0533801be8b03d5ad21d586",
"text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.",
"title": ""
}
] |
scidocsrr
|
6dd5e9a9d8a29e0d8b3251309efd4f97
|
OrthoNoC: A Broadcast-Oriented Dual-Plane Wireless Network-on-Chip Architecture
|
[
{
"docid": "6e848928859248e0597124cee0560e43",
"text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.",
"title": ""
},
{
"docid": "268891c1f135f6674b73304d22a3932c",
"text": "© CACTI 6.0: A Tool to Model Large Caches Naveen Muralimanohar, Rajeev Balasubramonian, Norman P. Jouppi HP Laboratories HPL-2009-85 No keywords available. Future processors will likely have large on-chip caches with a possibility of dedicating an entire die for on-chip storage in a 3D stacked design. With the ever growing disparity between transistor and wire delay, the properties of such large caches will primarily depend on the characteristics of the interconnection networks that connect various sub-modules of a cache. CACTI 6.0 is a significantly enhanced version of the tool that primarily focuses on interconnect design for large caches. In addition to strengthening the existing analytical model of the tool for dominant cache components, CACTI 6.0 includes two major extensions over earlier versions: first, the ability to model Non-Uniform Cache Access (NUCA), and second, the ability to model different types of wires, such as RC based wires with different power, delay, and area characteristics and differential low-swing buses. This report details the analytical model assumed for the newly added modules along with their validation analysis. External Posting Date: April 21, 2009 [Fulltext] Approved for External Publication Internal Posting Date: April 21, 2009 [Fulltext] Published in International Symposium on Microarchitecture, Chicago, Dec 2007. Copyright International Symposium on Microarchitecture, 2007. CACTI 6.0: A Tool to Model Large Caches Naveen Muralimanohar , Rajeev Balasubramonian , Norman P. Jouppi ‡ † School of Computing, University of Utah ‡ Hewlett-Packard Laboratories Abstract Future processors will likely have large on-chip caches wit h a possibility of dedicating an entire die for on-chip storage in a 3D stacked design. With the ever growing d sparity between transistor and wire delay, the properties of such large caches will primarily depend on the c aracteristics of the interconnection networks that connect various sub-modules of a cache. CACTI 6.0 is a signifi ca tly enhanced version of the tool that primarily focuses on interconnect design for large caches. In additio n to strengthening the existing analytical model of the tool for dominant cache components, CACTI 6.0 includes t wo major extensions over earlier versions: first, the ability to model Non-Uniform Cache Access (NUCA), and se cond, the ability to model different types of wires, such as RC based wires with different power, delay, an d area characteristics and differential low-swing buses. This report details the analytical model assumed for the newly added modules along with their validation analysis.Future processors will likely have large on-chip caches wit h a possibility of dedicating an entire die for on-chip storage in a 3D stacked design. With the ever growing d sparity between transistor and wire delay, the properties of such large caches will primarily depend on the c aracteristics of the interconnection networks that connect various sub-modules of a cache. CACTI 6.0 is a signifi ca tly enhanced version of the tool that primarily focuses on interconnect design for large caches. In additio n to strengthening the existing analytical model of the tool for dominant cache components, CACTI 6.0 includes t wo major extensions over earlier versions: first, the ability to model Non-Uniform Cache Access (NUCA), and se cond, the ability to model different types of wires, such as RC based wires with different power, delay, an d area characteristics and differential low-swing buses. 
This report details the analytical model assumed for the newly added modules along with their validation analysis.",
"title": ""
}
] |
[
{
"docid": "a978e501345f02005ca4129b42c7db28",
"text": "Recently, some recommendation methods try to improve the prediction results by integrating information from user’s multiple types of behaviors. How to model the dependence and independence between different behaviors is critical for them. In this paper, we propose a novel recommendation model, the Group-Sparse Matrix Factorization (GSMF), which factorizes the rating matrices for multiple behaviors into the user and item latent factor space with group sparsity regularization. It can (1) select out the different subsets of latent factors for different behaviors, addressing that users’ decisions on different behaviors are determined by different sets of factors; (2) model the dependence and independence between behaviors by learning the shared and private factors for multiple behaviors automatically ; (3) allow the shared factors between different behaviors to be different, instead of all the behaviors sharing the same set of factors. Experiments on the real-world dataset demonstrate that our model can integrate users’ multiple types of behaviors into recommendation better, compared with other state-of-the-arts.",
"title": ""
},
{
"docid": "23a21e2d967c8fb8ccc5d282c597ff06",
"text": "Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to interobserver variations. Computer-aided methods, which can significantly improve the objectivity and reproducibility, have attracted a great deal of interest in recent literature. Among the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role to describe the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation.",
"title": ""
},
{
"docid": "38666c5299ee67e336dc65f23f528a56",
"text": "Different modalities of magnetic resonance imaging (MRI) can indicate tumor-induced tissue changes from different perspectives, thus benefit brain tumor segmentation when they are considered together. Meanwhile, it is always interesting to examine the diagnosis potential from single modality, considering the cost of acquiring multi-modality images. Clinically, T1-weighted MRI is the most commonly used MR imaging modality, although it may not be the best option for contouring brain tumor. In this paper, we investigate whether synthesizing FLAIR images from T1 could help improve brain tumor segmentation from the single modality of T1. This is achieved by designing a 3D conditional Generative Adversarial Network (cGAN) for FLAIR image synthesis and a local adaptive fusion method to better depict the details of the synthesized FLAIR images. The proposed method can effectively handle the segmentation task of brain tumors that vary in appearance, size and location across samples.",
"title": ""
},
{
"docid": "51e0caf419babd61615e1537545e40e8",
"text": "Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of temporal dynamics of facial expressions for interpretation of the observed facial behavior has been acknowledged for over 20 years. For instance, it has been shown that the temporal dynamics of spontaneous and volitional smiles are fundamentally different from each other. In this work, we argue that the same holds for the temporal dynamics of brow actions and show that velocity, duration, and order of occurrence of brow actions are highly relevant parameters for distinguishing posed from spontaneous brow actions. The proposed system for discrimination between volitional and spontaneous brow actions is based on automatic detection of Action Units (AUs) and their temporal segments (onset, apex, offset) produced by movements of the eyebrows. For each temporal segment of an activated AU, we compute a number of mid-level feature parameters including the maximal intensity, duration, and order of occurrence. We use Gentle Boost to select the most important of these parameters. The selected parameters are used further to train Relevance Vector Machines to determine per temporal segment of an activated AU whether the action was displayed spontaneously or volitionally. Finally, a probabilistic decision function determines the class (spontaneous or posed) for the entire brow action. When tested on 189 samples taken from three different sets of spontaneous and volitional facial data, we attain a 90.7% correct recognition rate.",
"title": ""
},
{
"docid": "300fc7db67ef42d8e370e0d13989fd45",
"text": "We define certain higher-dimensional Dedekind sums that generalize the classical Dedekind-Rademacher sums, and show how to compute them effectively using a generalization of the continued-fraction algorithm. We present two applications. First, we show how to express special values of partial zeta functions associated to totally real number fields in terms of these sums via the Eisenstein cocycle introduced by the second author. Hence we obtain a polynomial-time algorithm for computing these special values. Second, we show how to use our techniques to compute certain special values of the Witten zetafunction, and compute some explicit examples.",
"title": ""
},
{
"docid": "2eab5efc609ad0039f1e68ad6305103d",
"text": "Women (n 160) aged 50 to 65 years were asked to weigh their food for 4 d on four occasions over the period of 1 year, using the PETRA (Portable Electronic Tape Recorded Automatic) scales. Throughout the year, they were asked to complete seven other dietary assessment methods: a simple 24 h recall, a structured 24 h recall with portion size assessments using photographs, two food-frequency questionnaires, a 7 d estimated record or open-ended food diary, a structured food-frequency (menu) record, and a structured food-frequency (menu) record with portion sizes assessed using photographs. Comparisons between the average of the 16 d weighed records and the first presentation of each method indicated that food-frequency questionnaires were not appreciably better at placing individuals in the distribution of habitual diet than 24 h recalls, due partly to inaccuracies in the estimation of frequency of food consumption. With a 7 d estimated record or open-ended food diary, however, individual values of nutrients were most closely associated with those obtained from 16 d weighed records, and there were no significant differences in average food or nutrient intakes.",
"title": ""
},
{
"docid": "30a4239a93234d2c07e6618f4da730fa",
"text": "BACKGROUND\nAortic stiffness is a marker of cardiovascular disease and an independent predictor of cardiovascular risk. Although an association between inflammatory markers and increased arterial stiffness has been suggested, the causative relationship between inflammation and arterial stiffness has not been investigated.\n\n\nMETHODS AND RESULTS\nOne hundred healthy individuals were studied according to a randomized, double-blind, sham procedure-controlled design. Each substudy consisted of 2 treatment arms, 1 with Salmonella typhi vaccination and 1 with sham vaccination. Vaccination produced a significant (P<0.01) increase in pulse wave velocity (at 8 hours by 0.43 m/s), denoting an increase in aortic stiffness. Wave reflections were reduced significantly (P<0.01) by vaccination (decrease in augmentation index of 5.0% at 8 hours and 2.5% at 32 hours) as a result of peripheral vasodilatation. These effects were associated with significant increases in inflammatory markers such as high-sensitivity C-reactive protein (P<0.001), high-sensitivity interleukin-6 (P<0.001), and matrix metalloproteinase-9 (P<0.01). With aspirin pretreatment (1200 mg PO), neither pulse wave velocity nor augmentation index changed significantly after vaccination (increase of 0.11 m/s and 0.4%, respectively; P=NS for both).\n\n\nCONCLUSIONS\nThis is the first study to show through a cause-and-effect relationship that acute systemic inflammation leads to deterioration of large-artery stiffness and to a decrease in wave reflections. These findings have important implications, given the importance of aortic stiffness for cardiovascular function and risk and the potential of therapeutic interventions with antiinflammatory properties.",
"title": ""
},
{
"docid": "f33147619ba2d24efcea9e32f70c7695",
"text": "The wide use of micro bloggers such as Twitter offers a valuable and reliable source of information during natural disasters. The big volume of Twitter data calls for a scalable data management system whereas the semi-structured data analysis requires full-text searching function. As a result, it becomes challenging yet essential for disaster response agencies to take full advantage of social media data for decision making in a near-real-time fashion. In this work, we use Lucene to empower HBase with full-text searching ability to build a scalable social media data analytics system for observing and analyzing human behaviors during the Hurricane Sandy disaster. Experiments show the scalability and efficiency of the system. Furthermore, the discovery of communities has the benefit of identifying influential users and tracking the topical changes as the disaster unfolds. We develop a novel approach to discover communities in Twitter by applying spectral clustering algorithm to retweet graph. The topics and influential users of each community are also analyzed and demonstrated using Latent Semantic Indexing (LSI).",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
},
{
"docid": "340291da58133d9df02f6f566ebbffa7",
"text": "This paper describes an implementation strategy of nonlinear model predictive controller for FPGA systems. A high-level synthesis of a real-time MPC algorithm by means of the MATLAB HDL Coder as well as the Vivado HLS tool is discussed. In order to exploit the parallel processing of FPGAs, the included integration schemes are parallelized using a fixed-point iteration approach. The synthesis results are demonstrated for two different example systems.",
"title": ""
},
{
"docid": "01534202e7db5d9059651290e1720bf0",
"text": "The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across variou s CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin.",
"title": ""
},
{
"docid": "5473962c6c270df695b965cbcc567369",
"text": "Medical professionals need a reliable prediction methodology to diagnose cancer and distinguish between the different stages in cancer. Classification is a data mining function that assigns items in a collection to target groups or classes. C4.5 classification algorithm has been applied to SEER breast cancer dataset to classify patients into either “Carcinoma in situ” (beginning or pre-cancer stage) or “Malignant potential” group. Pre-processing techniques have been applied to prepare the raw dataset and identify the relevant attributes for classification. Random test samples have been selected from the pre-processed data to obtain classification rules. The rule set obtained was tested with the remaining data. The results are presented and discussed. Keywords— Breast Cancer Diagnosis, Classification, Clinical Data, SEER Dataset, C4.5 Algorithm",
"title": ""
},
{
"docid": "e31901738e78728a7376457f7d1acd26",
"text": "Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.",
"title": ""
},
{
"docid": "5ef958d4033ef9e6b2834aa3667252c3",
"text": "Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins together an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions given a single image, with a recombination function, a learned shading model used to recompose the original input based off of intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate representations. This allows large-scale unlabeled data to be useful during training, and also enables transferring learned knowledge to images of unseen object categories, lighting conditions, and shapes. Extensive experiments demonstrate that our method performs well on both intrinsic image decomposition and knowledge transfer.",
"title": ""
},
{
"docid": "baad68c1adef7b72d78745fe03db0c57",
"text": "0020-0255/$ see front matter 2012 Elsevier Inc http://dx.doi.org/10.1016/j.ins.2012.10.039 ⇑ Corresponding author. E-mail addresses: [email protected] (P. Cor In this paper, we propose a new visualization approach based on a Sensitivity Analysis (SA) to extract human understandable knowledge from supervised learning black box data mining models, such as Neural Networks (NNs), Support Vector Machines (SVMs) and ensembles, including Random Forests (RFs). Five SA methods (three of which are purely new) and four measures of input importance (one novel) are presented. Also, the SA approach is adapted to handle discrete variables and to aggregate multiple sensitivity responses. Moreover, several visualizations for the SA results are introduced, such as input pair importance color matrix and variable effect characteristic surface. A wide range of experiments was performed in order to test the SA methods and measures by fitting four well-known models (NN, SVM, RF and decision trees) to synthetic datasets (five regression and five classification tasks). In addition, the visualization capabilities of the SA are demonstrated using four real-world datasets (e.g., bank direct marketing and white wine quality). 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "35e671088cb28f44d729fd21f0ccd7db",
"text": "Sound event detection (SED) in environmental recordings is a key topic of research in machine listening, with applications in noise monitoring for smart cities, self-driving cars, surveillance, bioa-coustic monitoring, and indexing of large multimedia collections. Developing new solutions for SED often relies on the availability of strongly labeled audio recordings, where the annotation includes the onset, offset and source of every event. Generating such precise annotations manually is very time consuming, and as a result existing datasets for SED with strong labels are scarce and limited in size. To address this issue, we present Scaper, an open-source library for soundscape synthesis and augmentation. Given a collection of iso-lated sound events, Scaper acts as a high-level sequencer that can generate multiple soundscapes from a single, probabilistically defined, “specification”. To increase the variability of the output, Scaper supports the application of audio transformations such as pitch shifting and time stretching individually to every event. To illustrate the potential of the library, we generate a dataset of 10,000 sound-scapes and use it to compare the performance of two state-of-the-art algorithms, including a breakdown by soundscape characteristics. We also describe how Scaper was used to generate audio stimuli for an audio labeling crowdsourcing experiment, and conclude with a discussion of Scaper's limitations and potential applications.",
"title": ""
},
{
"docid": "6b0860d2547ab7c4498cfb94a0ca7df4",
"text": "Photo sharing is one of the most popular Web services. Photo sharing sites provide functions to add tags and geo-tags to photos to make photo organization easy. Considering that people take photos to record something that attracts them, geo-tagged photos are a rich data source that reflects people's memorable events associated with locations. In this paper, we focus on geo-tagged photos and propose a method to detect people's frequent trip patterns, i.e., typical sequences of visited cities and durations of stay as well as descriptive tags that characterize the trip patterns. Our method first segments photo collections into trips and categorizes them based on their trip themes, such as visiting landmarks or communing with nature. Our method mines frequent trip patterns for each trip theme category. We crawled 5.7 million geo-tagged photos and performed photo trip pattern mining. The experimental result shows that our method outperforms other baseline methods and can correctly segment photo collections into photo trips with an accuracy of 78%. For trip categorization, our method can categorize about 80% of trips using tags and titles of photos and visited cities as features. Finally, we illustrate interesting examples of trip patterns detected from our dataset and show an application with which users can search frequent trip patterns by querying a destination, visit duration, and trip theme on the trip.",
"title": ""
},
{
"docid": "00d14c0c07d04c9bd6995ff0ee065ab9",
"text": "The pathways for olfactory learning in the fruitfly Drosophila have been extensively investigated, with mounting evidence that that the mushroom body is the site of the olfactory associative memory trace (Heisenberg, Nature 4:266–275, 2003; Gerber et al., Curr Opin Neurobiol 14:737–744, 2004). Heisenberg’s description of the mushroom body as an associative learning device is a testable hypothesis that relates the mushroom body’s function to its neural structure and input and output pathways. Here, we formalise a relatively complete computational model of the network interactions in the neural circuitry of the insect antennal lobe and mushroom body, to investigate their role in olfactory learning, and specifically, how this might support learning of complex (non-elemental; Giurfa, Curr Opin Neuroethol 13:726–735, 2003) discriminations involving compound stimuli. We find that the circuit is able to learn all tested non-elemental paradigms. This does not crucially depend on the number of Kenyon cells but rather on the connection strength of projection neurons to Kenyon cells, such that the Kenyon cells require a certain number of coincident inputs to fire. As a consequence, the encoding in the mushroom body resembles a unique cue or configural representation of compound stimuli (Pearce, Psychol Rev 101:587–607, 1994). Learning of some conditions, particularly negative patterning, is strongly affected by the assumption of normalisation effects occurring at the level of the antennal lobe. Surprisingly, the learning capacity of this circuit, which is a simplification of the actual circuitry in the fly, seems to be greater than the capacity expressed by the fly in shock-odour association experiments (Young et al. 2010).",
"title": ""
},
{
"docid": "832a208d5f0e0c9d965bf6037d002bb3",
"text": "Littering constitutes a major societal problem, and any simple intervention that reduces its prevalence would be widely beneficial. In previous research, we have found that displaying images of watching eyes in the environment makes people less likely to litter. Here, we investigate whether the watching eyes images can be transferred onto the potential items of litter themselves. In two field experiments on a university campus, we created an opportunity to litter by attaching leaflets that either did or did not feature an image of watching eyes to parked bicycles. In both experiments, the watching eyes leaflets were substantially less likely to be littered than control leaflets (odds ratios 0.22-0.32). We also found that people were less likely to litter when there other people in the immediate vicinity than when there were not (odds ratios 0.04-0.25) and, in one experiment but not the other, that eye leaflets only reduced littering when there no other people in the immediate vicinity. We suggest that designing cues of observation into packaging could be a simple but fruitful strategy for reducing littering.",
"title": ""
},
{
"docid": "dd82e1c54a2b73e98788eb7400600be3",
"text": "Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics has become an important issue in cosmology and astronomy. Existing methods relied on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, it inevitably requires a lot of observations and complex luminance measurements. In this work, we present a novel method for detecting SNeIa simply from single-shot observation images without any complex measurements, by effectively integrating the state-of-the-art computer vision methodology into the standard photometric approach. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with many observations.",
"title": ""
}
] |
scidocsrr
|
508518916728355dfc8cf4473600339e
|
Classification and Comparison of Range-Based Localization Techniques in Wireless Sensor Networks
|
[
{
"docid": "ef39b902bb50be657b3b9626298da567",
"text": "We consider the problem of node positioning in ad hoc networks. We propose a distributed, infrastructure-free positioning algorithm that does not rely on GPS (Global Positioning System). Instead, the algorithm uses the distances between the nodes to build a relative coordinate system in which the node positions are computed in two dimensions. Despite the distance measurement errors and the motion of the nodes, the algorithm provides sufficient location information and accuracy to support basic network functions. Examples of applications where this algorithm can be used include Location Aided Routing [10] and Geodesic Packet Forwarding [2]. Another example are sensor networks, where mobility is less of a problem. The main contribution of this work is to define and compute relative positions of the nodes in an ad hoc network without using GPS. We further explain how the proposed approach can be applied to wide area ad hoc networks.",
"title": ""
}
] |
[
{
"docid": "40b69a316255b26c77cfb37dee10c719",
"text": "Lake and Baroni (2018) recently introduced the SCAN data set, which consists of simple commands paired with action sequences and is intended to test the strong generalization abilities of recurrent sequence-to-sequence models. Their initial experiments suggested that such models may fail because they lack the ability to extract systematic rules. Here, we take a closer look at SCAN and show that it does not always capture the kind of generalization that it was designed for. To mitigate this we propose a complementary dataset, which requires mapping actions back to the original commands, called NACS. We show that models that do well on SCAN do not necessarily do well on NACS, and that NACS exhibits properties more closely aligned with realistic usecases for sequence-to-sequence models.",
"title": ""
},
{
"docid": "c73af0945ac35847c7a86a7f212b4d90",
"text": "We report a case of planned complex suicide (PCS) by a young man who had previously tried to commit suicide twice. He was found dead hanging by his neck, with a shot in his head. The investigation of the scene, the method employed, and previous attempts at suicide altogether pointed toward a suicidal etiology. The main difference between PCS and those cases defined in the medicolegal literature as combined suicides lies in the complex mechanism used by the victim as a protection against a failure in one of the mechanisms.",
"title": ""
},
{
"docid": "6f45bc16969ed9deb5da46ff8529bb8a",
"text": "In the future, mobile systems will increasingly feature more advanced organic light-emitting diode (OLED) displays. The power consumption of these displays is highly dependent on the image content. However, existing OLED power-saving techniques either change the visual experience of users or degrade the visual quality of images in exchange for a reduction in the power consumption. Some techniques attempt to enhance the image quality by employing a compound objective function. In this article, we present a win-win scheme that always enhances the image quality while simultaneously reducing the power consumption. We define metrics to assess the benefits and cost for potential image enhancement and power reduction. We then introduce algorithms that ensure the transformation of images into their quality-enhanced power-saving versions. Next, the win-win scheme is extended to process videos at a justifiable computational cost. All the proposed algorithms are shown to possess the win-win property without assuming accurate OLED power models. Finally, the proposed scheme is realized through a practical camera application and a video camcorder on mobile devices. The results of experiments conducted on a commercial tablet with a popular image database and on a smartphone with real-world videos are very encouraging and provide valuable insights for future research and practices.",
"title": ""
},
{
"docid": "9ad8a5b73430e4fe6b86d5fb8e2412b0",
"text": "We apply coset codes to adaptive modulation in fading channels. Adaptive modulation is a powerful technique to improve the energy efficiency and increase the data rate over a fading channel. Coset codes are a natural choice to use with adaptive modulation since the channel coding and modulation designs are separable. Therefore, trellis and lattice codes designed for additive white Gaussian noise (AWGN) channels can be superimposed on adaptive modulation for fading channels, with the same approximate coding gains. We first describe the methodology for combining coset codes with a general class of adaptive modulation techniques. We then apply this methodology to a spectrally efficient adaptive M -ary quadrature amplitude modulation (MQAM) to obtain trellis-coded adaptive MQAM. We present analytical and simulation results for this design which show an effective coding gain of 3 dB relative to uncoded adaptive MQAM for a simple four-state trellis code, and an effective 3.6-dB coding gain for an eight-state trellis code. More complex trellis codes are shown to achieve higher gains. We also compare the performance of trellis-coded adaptive MQAM to that of coded modulation with built-in time diversity and fixed-rate modulation. The adaptive method exhibits a power savings of up to 20 dB.",
"title": ""
},
{
"docid": "26f76aa41a64622ee8f0eaaed2aac529",
"text": "OBJECTIVE\nIn this study, we explored the impact of an occupational therapy wellness program on daily habits and routines through the perspectives of youth and their parents.\n\n\nMETHOD\nData were collected through semistructured interviews with children and their parents, the Pizzi Healthy Weight Management Assessment(©), and program activities.\n\n\nRESULTS\nThree themes emerged from the interviews: Program Impact, Lessons Learned, and Time as a Barrier to Health. The most common areas that both youth and parents wanted to change were time spent watching television and play, fun, and leisure time. Analysis of activity pie charts indicated that the youth considerably increased active time in their daily routines from Week 1 to Week 6 of the program.\n\n\nCONCLUSION\nAn occupational therapy program focused on health and wellness may help youth and their parents be more mindful of their daily activities and make health behavior changes.",
"title": ""
},
{
"docid": "efa066fc7ed815cc43a40c9c327b2cb3",
"text": "Induction surface hardening of parts with non-uniform cylindrical shape requires a multi-frequency process in order to obtain a uniform surface hardened depth. This paper presents an induction heating high power supply constituted of an only inverter circuit and a specially designed output resonant circuit. The whole circuit supplies both medium and high frequency power signals to the heating inductor simultaneously",
"title": ""
},
{
"docid": "d90a66cf63abdc1d0caed64812de7043",
"text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.",
"title": ""
},
{
"docid": "e6c0aa517c857ed217fc96aad58d7158",
"text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.",
"title": ""
},
{
"docid": "c0d7cd54a947d9764209e905a6779d45",
"text": "The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for a LBS given each user's service quality constraints against an adversary implementing the optimal inference algorithm. Such LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In such setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated in the users' mobile devices they use to access LBSs. We validate the efficacy of our game theoretic method against real location traces. Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.",
"title": ""
},
{
"docid": "4e75d06e1e23cf8efdcafd2f59a0313f",
"text": "The International Solid-State Circuits Conference (ISSCC) is the flagship conference of the IEEE Solid-State Circuits Society. This year, for the 65th ISSCC, the theme is \"Silicon Engineering a Social World.\" Continued advances in solid-state circuits and systems have brought ever-more powerful communication and computational capabilities into mobile form factors. Such ubiquitous smart devices lie at the heart of a revolution shaping how we connect, collaborate, build relationships, and share information. These social technologies allow people to maintain connections and support networks not otherwise possible; they provide the ability to access information instantaneously and from any location, thereby helping to shape world events and culture, empowering citizens of all nations, and creating social networks that allow worldwide communities to develop and form bonds based on common interests.",
"title": ""
},
{
"docid": "107b95c3bb00c918c73d82dd678e46c0",
"text": "Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).",
"title": ""
},
{
"docid": "8bd78fe6aa4aab1476f0599acba64181",
"text": "The denoising step for Computed Tomography (CT) images is an important challenge in the medical image processing. These images are degraded by low resolution and noise. In this paper, we propose a new method for 3D CT denoising based on Coherence Enhancing Diffusion model. Quantitative measures such as PSNR, SSIM and RMSE are computed to a phantom CT image in order to improve the efficiently of our proposed model, compared to a number of denoising algorithms. Furthermore, experimental results on a real 3D CT data show that this approach is effective and promising in removing noise and preserving details.",
"title": ""
},
{
"docid": "9c8daaa2770a109604988700e4eaca27",
"text": "In this paper, the neural-network-based robust optimal control design for a class of uncertain nonlinear systems via adaptive dynamic programming approach is investigated. First, the robust controller of the original uncertain system is derived by adding a feedback gain to the optimal controller of the nominal system. It is also shown that this robust controller can achieve optimality under a specified cost function, which serves as the basic idea of the robust optimal control design. Then, a critic network is constructed to solve the Hamilton– Jacobi–Bellman equation corresponding to the nominal system, where an additional stabilizing term is introduced to verify the stability. The uniform ultimate boundedness of the closed-loop system is also proved by using the Lyapunov approach. Moreover, the obtained results are extended to solve decentralized optimal control problem of continuous-time nonlinear interconnected large-scale systems. Finally, two simulation examples are presented to illustrate the effectiveness of the established control scheme. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "046bcb0a39184bdf5a97dba120d8ba0f",
"text": "Finishing 90-epoch ImageNet-1k training with ResNet-50 on a NVIDIA M40 GPU takes 14 days. This training requires 10 single precision operations in total. On the other hand, the world’s current fastest supercomputer can finish 2× 10 single precision operations per second (Dongarra et al. 2017). If we can make full use of the supercomputer for DNN training, we should be able to finish the 90-epoch ResNet-50 training in five seconds. However, the current bottleneck for fast DNN training is in the algorithm level. Specifically, the current batch size (e.g. 512) is too small to make efficient use of many processors For large-scale DNN training, we focus on using large-batch data-parallelism synchronous SGD without losing accuracy in the fixed epochs. The LARS algorithm (You, Gitman, and Ginsburg 2017) enables us to scale the batch size to extremely large case (e.g. 32K). We finish the 100-epoch ImageNet training with AlexNet in 24 minutes. Same as Facebook’s result (Goyal et al. 2017), we finish the 90-epoch ImageNet training with ResNet-50 in one hour by 512 Intel KNLs.",
"title": ""
},
{
"docid": "9157266c7dea945bf5a68f058836e681",
"text": "For the task of implicit discourse relation recognition, traditional models utilizing manual features can suffer from data sparsity problem. Neural models provide a solution with distributed representations, which could encode the latent semantic information, and are suitable for recognizing semantic relations between argument pairs. However, conventional vector representations usually adopt embeddings at the word level and cannot well handle the rare word problem without carefully considering morphological information at character level. Moreover, embeddings are assigned to individual words independently, which lacks of the crucial contextual information. This paper proposes a neural model utilizing context-aware character-enhanced embeddings to alleviate the drawbacks of the current word level representation. Our experiments show that the enhanced embeddings work well and the proposed model obtains state-of-the-art results.",
"title": ""
},
{
"docid": "239e3f68b790bed3f8e9ec28f99f91d4",
"text": "This study evaluated the structure and validity of the Problem Behavior Frequency Scale-Teacher Report Form (PBFS-TR) for assessing students' frequency of specific forms of aggression and victimization, and positive behavior. Analyses were conducted on two waves of data from 727 students from two urban middle schools (Sample 1) who were rated by their teachers on the PBFS-TR and the Social Skills Improvement System (SSIS), and on data collected from 1,740 students from three urban middle schools (Sample 2) for whom data on both the teacher and student report version of the PBFS were obtained. Confirmatory factor analyses supported first-order factors representing 3 forms of aggression (physical, verbal, and relational), 3 forms of victimization (physical, verbal and relational), and 2 forms of positive behavior (prosocial behavior and effective nonviolent behavior), and higher-order factors representing aggression, victimization, and positive behavior. Strong measurement invariance was established over gender, grade, intervention condition, and time. Support for convergent validity was found based on correlations between corresponding scales on the PBFS-TR and teacher ratings on the SSIS in Sample 1. Significant correlations were also found between teacher ratings on the PBFS-TR and student ratings of their behavior on the Problem Behavior Frequency Scale-Adolescent Report (PBFS-AR) and a measure of nonviolent behavioral intentions in Sample 2. Overall the findings provided support for the PBFS-TR and suggested that teachers can provide useful data on students' aggressive and prosocial behavior and victimization experiences within the school setting. (PsycINFO Database Record (c) 2018 APA, all rights reserved).",
"title": ""
},
{
"docid": "74beaea9eccab976dc1ee7b2ddf3e4ca",
"text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.",
"title": ""
},
{
"docid": "f519d349d928e7006955943043ab0eae",
"text": "A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handing, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues. Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia.",
"title": ""
},
{
"docid": "4d902e421b6371fc40b6d7178d69426e",
"text": "Recently, Social media has arisen not only as a personal communication media, but also, as a media to communicate opinions about products and services or even political and general events among its users. Due to its widespread and popularity, there is a massive amount of user reviews or opinions produced and shared daily. Twitter is one of the most widely used social media micro blogging sites. Mining user opinions from social media data is not a straight forward task; it can be accomplished in different ways. In this work, an open source approach is presented, throughout which, twitter Microblogs data has been collected, pre-processed, analyzed and visualized using open source tools to perform text mining and sentiment analysis for analyzing user contributed online reviews about two giant retail stores in the UK namely Tesco and Asda stores over Christmas period 2014. Collecting customer opinions can be expensive and time consuming task using conventional methods such as surveys. The sentiment analysis of the customer opinions makes it easier for businesses to understand their competitive value in a changing market and to understand their customer views about their products and services, which also provide an insight into future marketing strategies and decision making policies.",
"title": ""
},
{
"docid": "96d5a0fb4bb0666934819d162f1b060c",
"text": "Human gait is an important indicator of health, with applications ranging from diagnosis, monitoring, and rehabilitation. In practice, the use of gait analysis has been limited. Existing gait analysis systems are either expensive, intrusive, or require well-controlled environments such as a clinic or a laboratory. We present an accurate gait analysis system that is economical and non-intrusive. Our system is based on the Kinect sensor and thus can extract comprehensive gait information from all parts of the body. Beyond standard stride information, we also measure arm kinematics, demonstrating the wide range of parameters that can be extracted. We further improve over existing work by using information from the entire body to more accurately measure stride intervals. Our system requires no markers or battery-powered sensors, and instead relies on a single, inexpensive commodity 3D sensor with a large preexisting install base. We suggest that the proposed technique can be used for continuous gait tracking at home.",
"title": ""
}
] |
scidocsrr
|
a92324172cfd09afa05ef9065dc06edc
|
The Utility of Hello Messages for Determining Link Connectivity
|
[
{
"docid": "ef5f1aa863cc1df76b5dc057f407c473",
"text": "GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node.\nExperiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.",
"title": ""
}
] |
[
{
"docid": "30b1b4df0901ab61ab7e4cfb094589d1",
"text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for backto-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.",
"title": ""
},
{
"docid": "701fb71923bb8a2fc90df725074f576b",
"text": "Quantum computing poses challenges to public key signatures as we know them today. LMS and XMSS are two hash based signature schemes that have been proposed in the IETF as quantum secure. Both schemes are based on well-studied hash trees, but their similarities and differences have not yet been discussed. In this work, we attempt to compare the two standards. We compare their security assumptions and quantify their signature and public key sizes. We also address the computation overhead they introduce. Our goal is to provide a clear understanding of the schemes’ similarities and differences for implementers and protocol designers to be able to make a decision as to which standard to chose.",
"title": ""
},
{
"docid": "56b42c551ad57c82ad15e6fc2e98f528",
"text": "Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). PGRD has few parameters, improves the reward",
"title": ""
},
{
"docid": "09132f8695e6f8d32d95a37a2bac46ee",
"text": "Social media has become one of the main channels for people to access and consume news, due to the rapidness and low cost of news dissemination on it. However, such properties of social media also make it a hotbed of fake news dissemination, bringing negative impacts on both individuals and society. Therefore, detecting fake news has become a crucial problem attracting tremendous research effort. Most existing methods of fake news detection are supervised, which require an extensive amount of time and labor to build a reliably annotated dataset. In search of an alternative, in this paper, we investigate if we could detect fake news in an unsupervised manner. We treat truths of news and users’ credibility as latent random variables, and exploit users’ engagements on social media to identify their opinions towards the authenticity of news. We leverage a Bayesian network model to capture the conditional dependencies among the truths of news, the users’ opinions, and the users’ credibility. To solve the inference problem, we propose an efficient collapsed Gibbs sampling approach to infer the truths of news and the users’ credibility without any labelled data. Experiment results on two datasets show that the proposed method significantly outperforms the compared unsupervised methods.",
"title": ""
},
{
"docid": "e729d7b399b3a4d524297ae79b28f45d",
"text": "The aim of this paper is to solve optimal design problems for industrial applications when the objective function value requires the evaluation of expensive simulation codes and its first derivatives are not available. In order to achieve this goal we propose two new algorithms that draw inspiration from two existing approaches: a filled function based algorithm and a Particle Swarm Optimization method. In order to test the efficiency of the two proposed algorithms, we perform a numerical comparison both with the methods we drew inspiration from, and with some standard Global Optimization algorithms that are currently adopted in industrial design optimization. Finally, a realistic ship design problem, namely the reduction of the amplitude of the heave motion of a ship advancing in head seas (a problem connected to both safety and comfort), is solved using the new codes and other global and local derivativeThis work has been partially supported by the Ministero delle Infrastrutture e dei Trasporti in the framework of the research plan “Programma di Ricerca sulla Sicurezza”, Decreto 17/04/2003 G.U. n. 123 del 29/05/2003, by MIUR, FIRB 2001 Research Program Large-Scale Nonlinear Optimization and by the U.S. Office of Naval Research (NICOP grant N. 000140510617). E.F. Campana ( ) · D. Peri · A. Pinto INSEAN—Istituto Nazionale per Studi ed Esperienze di Architettura Navale, Via di Vallerano 139, 00128 Roma, Italy e-mail: [email protected] G. Liuzzi Consiglio Nazionale delle Ricerche, Istituto di Analisi dei Sistemi ed Informatica “A. Ruberti”, Viale Manzoni 30, 00185 Roma, Italy S. Lucidi Dipartimento di Informatica e Sistemistica “A. Ruberti”, Università degli Studi di Roma “Sapienza”, Via Ariosto 25, 00185 Roma, Italy V. Piccialli Dipartimento di Ingegneria dell’Impresa, Università degli Studi di Roma “Tor Vergata”, Via del Policlinico 1, 00133 Roma, Italy 534 E.F. Campana et al. free optimization methods. All the numerical results show the effectiveness of the two new algorithms.",
"title": ""
},
{
"docid": "e95649b06c70682ba4229cff11fefeaf",
"text": "In this paper, we present Black SDN, a Software Defined Networking (SDN) architecture for secure Internet of Things (IoT) networking and communications. SDN architectures were developed to provide improved routing and networking performance for broadband networks by separating the control plain from the data plain. This basic SDN concept is amenable to IoT networks, however, the common SDN implementations designed for wired networks are not directly amenable to the distributed, ad hoc, low-power, mesh networks commonly found in IoT systems. SDN promises to improve the overall lifespan and performance of IoT networks. However, the SDN architecture changes the IoT network's communication patterns, allowing new types of attacks, and necessitating a new approach to securing the IoT network. Black SDN is a novel SDN-based secure networking architecture that secures both the meta-data and the payload within each layer of an IoT communication packet while utilizing the SDN centralized controller as a trusted third party for secure routing and optimized system performance management. We demonstrate through simulation the feasibility of Black SDN in networks where nodes are asleep most of their lives, and specifically examine a Black SDN IoT network based upon the IEEE 802.15.4 LR WPAN (Low Rate - Wireless Personal Area Network) protocol.",
"title": ""
},
{
"docid": "01d74a3a50d1121646ddab3ea46b5681",
"text": "Sleep quality is important, especially given the considerable number of sleep-related pathologies. The distribution of sleep stages is a highly effective and objective way of quantifying sleep quality. As a standard multi-channel recording used in the study of sleep, polysomnography (PSG) is a widely used diagnostic scheme in sleep medicine. However, the standard process of sleep clinical test, including PSG recording and manual scoring, is complex, uncomfortable, and time-consuming. This process is difficult to implement when taking the whole PSG measurements at home for general healthcare purposes. This work presents a novel sleep stage classification system, based on features from the two forehead EEG channels FP1 and FP2. By recording EEG from forehead, where there is no hair, the proposed system can monitor physiological changes during sleep in a more practical way than previous systems. Through a headband or self-adhesive technology, the necessary sensors can be applied easily by users at home. Analysis results demonstrate that classification performance of the proposed system overcomes the individual differences between different participants in terms of automatically classifying sleep stages. Additionally, the proposed sleep stage classification system can identify kernel sleep features extracted from forehead EEG, which are closely related with sleep clinician's expert knowledge. Moreover, forehead EEG features are classified into five sleep stages by using the relevance vector machine. In a leave-one-subject-out cross validation analysis, we found our system to correctly classify five sleep stages at an average accuracy of 76.7 ± 4.0 (SD) % [average kappa 0.68 ± 0.06 (SD)]. Importantly, the proposed sleep stage classification system using forehead EEG features is a viable alternative for measuring EEG signals at home easily and conveniently to evaluate sleep quality reliably, ultimately improving public healthcare.",
"title": ""
},
{
"docid": "6b1dc94c4c70e1c78ea32a760b634387",
"text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.",
"title": ""
},
{
"docid": "a341bcf8efb975c078cc452e0eecc183",
"text": "We show that, during inference with Convolutional Neural Networks (CNNs), more than 2× to 8× ineffectual work can be exposed if instead of targeting those weights and activations that are zero, we target different combinations of value stream properties. We demonstrate a practical application with Bit-Tactical (TCL), a hardware accelerator which exploits weight sparsity, per layer precision variability and dynamic fine-grain precision reduction for activations, and optionally the naturally occurring sparse effectual bit content of activations to improve performance and energy efficiency. TCL benefits both sparse and dense CNNs, natively supports both convolutional and fully-connected layers, and exploits properties of all activations to reduce storage, communication, and computation demands. While TCL does not require changes to the CNN to deliver benefits, it does reward any technique that would amplify any of the aforementioned weight and activation value properties. Compared to an equivalent data-parallel accelerator for dense CNNs, TCLp, a variant of TCL improves performance by 5.05× and is 2.98× more energy efficient while requiring 22% more area.",
"title": ""
},
{
"docid": "5700ba2411f9b4e4ed59c8c5839dc87d",
"text": "Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g., MSR-RF: C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.",
"title": ""
},
{
"docid": "081c350100f4db11818c75507f715cda",
"text": "Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology, which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolution Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from background as well as through its invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building mask as target outputs. Secondly, the generated predictions from FCN are viewed as unary terms for a Fully connected Conditional Random Fields (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings original shapes to a high degree. The quantitative and qualitative analysis show the significant improvements of the results in contrast to the multy-layer fully connected network from our previous work.",
"title": ""
},
{
"docid": "051c530bf9d49bf1066ddf856488dff1",
"text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.",
"title": ""
},
{
"docid": "dce75562a7e8b02364d39fd7eb407748",
"text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.",
"title": ""
},
{
"docid": "9dde89f24f55602e21823620b49633dd",
"text": "Darier's disease is a rare late-onset genetic disorder of keratinisation. Mosaic forms of the disease characterised by localised and unilateral keratotic papules carrying post-zygotic ATP2A2 mutation in affected areas have been documented. Segmental forms of Darier's disease are classified into two clinical subtypes: type 1 manifesting with distinct lesions on a background of normal appearing skin and type 2 with well-defined areas of Darier's disease occurring on a background of less severe non-mosaic phenotype. Herein we describe two cases of type 1 segmental Darier's disease with favourable response to topical retinoids.",
"title": ""
},
{
"docid": "c0c064fdc011973848568f5b087ba20b",
"text": "’InfoVis novices’ have been found to struggle with visual data exploration. A ’conversational interface’ which would take natural language inputs to visualization generation and modification, while maintaining a history of the requests, visualizations and findings of the user, has the potential to ameliorate many of these challenges. We present Articulate2, initial work toward a conversational interface to visual data exploration.",
"title": ""
},
{
"docid": "0b024671e04090051292b5e76a4690ae",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "25828231caaf3288ed4fdb27df7f8740",
"text": "This paper reports on an algorithm to support autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions. In autonomous driving scenarios, other traffic participants are often occluded from sensor measurements by buildings or large vehicles like buses or trucks, which makes tracking dynamic objects challenging.We present a method to augment standard dynamic object trackers with means to 1) estimate the occluded state of other traffic agents and 2) robustly associate the occluded estimates with new observations after the tracked object reenters the visible region of the sensor horizon. We perform occluded state estimation using a dynamics model that accounts for the driving behavior of traffic agents and a hybrid Gaussian mixture model (hGMM) to capture multiple hypotheses over discrete behavior, such as driving along different lanes or turning left or right at an intersection. Upon new observations, we associate them to existing estimates in terms of the Kullback-Leibler divergence (KLD). We evaluate the proposed method in simulation and using a real-world traffic-tracking dataset from an autonomous vehicle platform. Results show that our method can handle significantly prolonged occlusions when compared to a standard dynamic object tracking system.",
"title": ""
},
{
"docid": "2318fbd8ca703c0ff5254606b8dce442",
"text": "Historically, the inspection and maintenance of high-voltage power lines have been performed by linemen using various traditional means. In recent years, the use of robots appeared as a new and complementary method of performing such tasks, as several initiatives have been explored around the world. Among them is the teleoperated robotic platform called LineScout Technology, developed by Hydro-Québec, which has the capacity to clear most obstacles found on the grid. Since its 2006 introduction in the operations, it is considered by many utilities as the pioneer project in the domain. This paper’s purpose is to present the mobile platform design and its main mechatronics subsystems to support a comprehensive description of the main functions and application modules it offers. This includes sensors and a compact modular arm equipped with tools to repair cables and broken conductor strands. This system has now been used on many occasions to assess the condition of power line infrastructure and some results are presented. Finally, future developments and potential technologies roadmap are briefly discussed.",
"title": ""
}
] |
scidocsrr
|
91380eb925f106edf8ef1d44f266a0cb
|
Rain Bar: Robust Application-Driven Visual Communication Using Color Barcodes
|
[
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
}
] |
[
{
"docid": "600ecbb2ae0e5337a568bb3489cd5e29",
"text": "This paper presents a novel approach for haptic object recognition with an anthropomorphic robot hand. Firstly, passive degrees of freedom are introduced to the tactile sensor system of the robot hand. This allows the planar tactile sensor patches to optimally adjust themselves to the object's surface and to acquire additional sensor information for shape reconstruction. Secondly, this paper presents an approach to classify an object directly from the haptic sensor data acquired by a palpation sequence with the robot hand - without building a 3d-model of the object. Therefore, a finite set of essential finger positions and tactile contact patterns are identified which can be used to describe a single palpation step. A palpation sequence can then be merged into a simple statistical description of the object and finally be classified. The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "b4b6417ea0e1bc70c5faa50f8e2edf59",
"text": "As secure processing as well as correct recovery of data getting more important, digital forensics gain more value each day. This paper investigates the digital forensics tools available on the market and analyzes each tool based on the database perspective. We present a survey of digital forensics tools that are either focused on data extraction from databases or assist in the process of database recovery. In our work, a detailed list of current database extraction software is provided. We demonstrate examples of database extractions executed on representative selections from among tools provided in the detailed list. We use a standard sample database with each tool for comparison purposes. Based on the execution results obtained, we compare these tools regarding different criteria such as runtime, static or live acquisition, and more.",
"title": ""
},
{
"docid": "070a1de608a35cddb69b84d5f081e94d",
"text": "Identifying potentially vulnerable locations in a code base is critical as a pre-step for effective vulnerability assessment; i.e., it can greatly help security experts put their time and effort to where it is needed most. Metric-based and pattern-based methods have been presented for identifying vulnerable code. The former relies on machine learning and cannot work well due to the severe imbalance between non-vulnerable and vulnerable code or lack of features to characterize vulnerabilities. The latter needs the prior knowledge of known vulnerabilities and can only identify similar but not new types of vulnerabilities. In this paper, we propose and implement a generic, lightweight and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge about known vulnerabilities. It has two steps by combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects have demonstrated that, LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable and outperform machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications like PHP, radare2 and FFmpeg, and eight of them are new vulnerabilities.",
"title": ""
},
{
"docid": "09168164e47fd781e4abeca45fb76c35",
"text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].",
"title": ""
},
{
"docid": "4bd123c2c44e703133e9a6093170db39",
"text": "This paper presents a single-phase cascaded H-bridge converter for a grid-connected photovoltaic (PV) application. The multilevel topology consists of several H-bridge cells connected in series, each one connected to a string of PV modules. The adopted control scheme permits the independent control of each dc-link voltage, enabling, in this way, the tracking of the maximum power point for each string of PV panels. Additionally, low-ripple sinusoidal-current waveforms are generated with almost unity power factor. The topology offers other advantages such as the operation at lower switching frequency or lower current ripple compared to standard two-level topologies. Simulation and experimental results are presented for different operating conditions.",
"title": ""
},
{
"docid": "ca3c3dec83821747896d44261ba2f9ad",
"text": "Building discriminative representations for 3D data has been an important task in computer graphics and computer vision research. Convolutional Neural Networks (CNNs) have shown to operate on 2D images with great success for a variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible and promising next step. Unfortunately, the computational complexity of 3D CNNs grows cubically with respect to voxel resolution. Moreover, since most 3D geometry representations are boundary based, occupied regions do not increase proportionately with the size of the discretization, resulting in wasted computation. In this work, we represent 3D spaces as volumetric fields, and propose a novel design that employs field probing filters to efficiently extract features from them. Each field probing filter is a set of probing points — sensors that perceive the space. Our learning algorithm optimizes not only the weights associated with the probing points, but also their locations, which deforms the shape of the probing filters and adaptively distributes them in 3D space. The optimized probing points sense the 3D space “intelligently”, rather than operating blindly over the entire domain. We show that field probing is significantly more efficient than 3DCNNs, while providing state-of-the-art performance, on classification tasks for 3D object recognition benchmark datasets.",
"title": ""
},
{
"docid": "741ba628eacb59d7b9f876520406e600",
"text": "Awareness of the physical location for each node is required by many wireless sensor network applications. The discovery of the position can be realized utilizing range measurements including received signal strength, time of arrival, time difference of arrival and angle of arrival. In this paper, we focus on localization techniques based on angle of arrival information between neighbor nodes. We propose a new localization and orientation scheme that considers beacon information multiple hops away. The scheme is derived under the assumption of noisy angle measurements. We show that the proposed method achieves very good accuracy and precision despite inaccurate angle measurements and a small number of beacons",
"title": ""
},
{
"docid": "75f895ff76e7a55d589ff30637524756",
"text": "This paper details the coreference resolution system submitted by Stanford at the CoNLL2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.",
"title": ""
},
{
"docid": "ba6b016ace0c098ab345cd5a01af470d",
"text": "This paper describes a vehicle detection system fusing radar and vision data. Radar data are used to locate areas of interest on images. Vehicle search in these areas is mainly based on vertical symmetry. All the vehicles found in different image areas are mixed together, and a series of filters is applied in order to delete false detections. In order to speed up and improve system performance, guard rail detection and a method to manage overlapping areas are also included. Both methods are explained and justified in this paper. The current algorithm analyzes images on a frame-by-frame basis without any temporal correlation. Two different statistics, namely: 1) frame based and 2) event based, are computed to evaluate vehicle detection efficiency, while guard rail detection efficiency is computed in terms of time savings and correct detection rates. Results and problems are discussed, and directions for future enhancements are provided",
"title": ""
},
{
"docid": "be9b4dbfc747daf36894d6fe11b0db4e",
"text": "type: op op_type: Conv name: conv1 inputs: [ bottom, weight ] outputs: [ top ] location: ip: 127.0.0.1 device: 0 thread: 1 other fields ... Example Op defined in YAML Location: The location that the blob/op resides on, including: ● ip address of the target machine ● what device it is on (CPU/GPU) Thread: Thread is needed for op because both CPU and GPU can be multiple threaded (Streams in terms of NVIDIA GPU).",
"title": ""
},
{
"docid": "7d5556e2bfd8ca3dbc5817e9575148fc",
"text": "We present in this paper a calibration program that controls a calibration board integrated in a Smart Electrical Energy Meter (SEEM). The “SEEM” allows to measure the energy from a single phase line and transmits the value of this energy to a central through a wireless network. The “SEEM” needs to be calibrated in only one point of load to correct the gain and compensate the phase added by the system of measure. Since the calibration is performed for one point of load, this reduces the material used, therefore reduces the cost. Furthermore, the calibration of gain and phase is performed simultaneously which decrease the time of this operation.",
"title": ""
},
{
"docid": "0109c8c7663df5e8ac2abd805924d9f6",
"text": "To ensure system stability and availability during disturbances, industrial facilities equipped with on-site generation, generally utilize some type of load shedding scheme. In recent years, conventional underfrequency and PLC-based load shedding schemes have been integrated with computerized power management systems to provide an “automated” load shedding system. However, these automated solutions lack system operating knowledge and are still best-guess methods which typically result in excessive or insufficient load shedding. An intelligent load shedding system can provide faster and optimal load relief by utilizing actual operating conditions and knowledge of past system disturbances. This paper presents the need for an intelligent, automated load shedding system. Simulation of case studies for two industrial electrical networks are performed to demonstrate the advantages of an intelligent load shedding system over conventional load shedding methods from the design and operation perspectives. Index Terms — Load Shedding (LS), Intelligent Load Shedding (ILS), Power System Transient Stability, Frequency Relay, Programmable Logic Controller (PLC), Power Management System",
"title": ""
},
{
"docid": "819f5df03cebf534a51eb133cd44cb0d",
"text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.",
"title": ""
},
{
"docid": "e99c8800033f33caa936a6ff8dd79995",
"text": "Terms of service of on-line platforms too often contain clauses that are potentially unfair to the consumer. We present an experimental study where machine learning is employed to automatically detect such potentially unfair clauses. Results show that the proposed system could provide a valuable tool for lawyers and consumers alike.",
"title": ""
},
{
"docid": "ec9c15e543444e88cc5d636bf1f6e3b9",
"text": "Which ZSL method is more robust to GZSL? An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild Wei-Lun Chao*1, Soravit Changpinyo*1, Boqing Gong2, and Fei Sha1,3 1U. of Southern California, 2U. of Central Florida, 3U. of California, Los Angeles NSF IIS-1566511, 1065243, 1451412, 1513966, 1208500, CCF-1139148, USC Graduate Fellowship, a Google Research Award, an Alfred P. Sloan Research Fellowship and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.",
"title": ""
},
{
"docid": "e68992d53fa5bac20f8a4f17d72c7d0d",
"text": "In the field of pattern recognition, data analysis, and machine learning, data points are usually modeled as high-dimensional vectors. Due to the curse-of-dimensionality, it is non-trivial to efficiently process the orginal data directly. Given the unique properties of nonlinear dimensionality reduction techniques, nonlinear learning methods are widely adopted to reduce the dimension of data. However, existing nonlinear learning methods fail in many real applications because of the too-strict requirements (for real data) or the difficulty in parameters tuning. Therefore, in this paper, we investigate the manifold learning methods which belong to the family of nonlinear dimensionality reduction methods. Specifically, we proposed a new manifold learning principle for dimensionality reduction named Curved Cosine Mapping (CCM). Based on the law of cosines in Euclidean space, CCM applies a brand new mapping pattern to manifold learning. In CCM, the nonlinear geometric relationships are obtained by utlizing the law of cosines, and then quantified as the dimensionality-reduced features. Compared with the existing approaches, the model has weaker theoretical assumptions over the input data. Moreover, to further reduce the computation cost, an optimized version of CCM is developed. Finally, we conduct extensive experiments over both artificial and real-world datasets to demonstrate the performance of proposed techniques.",
"title": ""
},
{
"docid": "e770120d43a03e9b43d7de4d47f9a2eb",
"text": "Twitter is an online social networking service on which users worldwide publish their opinions on a variety of topics, discuss current issues, complain, and express many kinds of emotions. Therefore, Twitter is a rich source of data for opinion mining, sentiment and emotion analysis. This paper focuses on this issue by analysing symbols called emotion tokens, including emotion symbols (e.g. emoticons and emoji ideograms). According to observations, emotion tokens are commonly used in many tweets. They directly express one’s emotions regardless of his/her language, hence they have become a useful signal for sentiment analysis in multilingual tweets. The paper describes the approach to extending existing binary sentiment classification approaches using a multi-way emotions classification.",
"title": ""
},
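The emotion-token idea in the entry above is straightforward to prototype: scan each tweet for emoticons and emoji and treat the matches as language-independent sentiment signals. The sketch below is a rough illustration under that assumption; the regular expressions cover only a small hand-picked subset of tokens and are not the paper's actual token inventory.

```python
import re

# Illustrative extractor for "emotion tokens" (emoticons and emoji).
# The emoticon pattern and Unicode ranges are demonstration assumptions.

EMOTICON_RE = re.compile(r"(?:[:;=8][\-o\*']?[\)\](\[dDpP/\\|]|<3)")
EMOJI_RE = re.compile(
    "[\U0001F300-\U0001F5FF"   # symbols & pictographs
    "\U0001F600-\U0001F64F"    # emoticons block
    "\U0001F680-\U0001F6FF"    # transport & map symbols
    "\u2600-\u27BF]"           # misc symbols and dingbats
)

def emotion_tokens(tweet: str) -> list[str]:
    """Return the emoticon and emoji tokens found in a tweet."""
    return EMOTICON_RE.findall(tweet) + EMOJI_RE.findall(tweet)

print(emotion_tokens("Great match today :) ⚽ but the refereeing... :("))
```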
{
"docid": "66a6e9bbdd461fa85a0a09ec1ceb2031",
"text": "BACKGROUND\nConverging evidence indicates a functional disruption in the neural systems for reading in adults with dyslexia. We examined brain activation patterns in dyslexic and nonimpaired children during pseudoword and real-word reading tasks that required phonologic analysis (i.e., tapped the problems experienced by dyslexic children in sounding out words).\n\n\nMETHODS\nWe used functional magnetic resonance imaging (fMRI) to study 144 right-handed children, 70 dyslexic readers, and 74 nonimpaired readers as they read pseudowords and real words.\n\n\nRESULTS\nChildren with dyslexia demonstrated a disruption in neural systems for reading involving posterior brain regions, including parietotemporal sites and sites in the occipitotemporal area. Reading skill was positively correlated with the magnitude of activation in the left occipitotemporal region. Activation in the left and right inferior frontal gyri was greater in older compared with younger dyslexic children.\n\n\nCONCLUSIONS\nThese findings provide neurobiological evidence of an underlying disruption in the neural systems for reading in children with dyslexia and indicate that it is evident at a young age. The locus of the disruption places childhood dyslexia within the same neurobiological framework as dyslexia, and acquired alexia, occurring in adults.",
"title": ""
}
] |
scidocsrr
|
87692edca81182c14462fe3465d18bf2
|
Mobile activity recognition for a whole day: recognizing real nursing activities with big dataset
|
[
{
"docid": "e700afa9064ef35f7d7de40779326cb0",
"text": "Human activity recognition is important for many applications. This paper describes a human activity recognition framework based on feature selection techniques. The objective is to identify the most important features to recognize human activities. We first design a set of new features (called physical features) based on the physical parameters of human motion to augment the commonly used statistical features. To systematically analyze the impact of the physical features on the performance of the recognition system, a single-layer feature selection framework is developed. Experimental results indicate that physical features are always among the top features selected by different feature selection methods and the recognition accuracy is generally improved to 90%, or 8% better than when only statistical features are used. Moreover, we show that the performance is further improved by 3.8% by extending the single-layer framework to a multi-layer framework which takes advantage of the inherent structure of human activities and performs feature selection and classification in a hierarchical manner.",
"title": ""
},
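The pipeline described in the entry above, window-level features followed by a feature-selection stage, can be sketched in a few lines. The features computed below and the use of mutual information as the selection score are stand-ins chosen for illustration; they are not the paper's exact "physical features" or its selection methods.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Sketch: per-window features from tri-axial accelerometer data, ranked
# by a feature-selection score. All names and choices are illustrative.

def window_features(w: np.ndarray) -> np.ndarray:
    """w: (n_samples, 3) accelerometer window -> 1-D feature vector."""
    mag = np.linalg.norm(w, axis=1)                    # movement-intensity proxy
    feats = [w.mean(0), w.std(0),                      # statistical features
             [mag.mean(), mag.std(), np.abs(np.diff(mag)).mean()]]  # "physical"-style
    return np.concatenate([np.atleast_1d(f) for f in feats])

rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 64, 3))                # 200 synthetic windows
labels = rng.integers(0, 4, size=200)                  # 4 synthetic activity classes
X = np.stack([window_features(w) for w in windows])

scores = mutual_info_classif(X, labels, random_state=0)
ranking = np.argsort(scores)[::-1]
print("feature ranking (best first):", ranking)
```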
{
"docid": "7786fac57e0c1392c6a5101681baecb0",
"text": "We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13000 and 14000 object and environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization and curation. We report on the challenges and outline lessons learned and best practice for similar large scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object integrated wireless sensors; there is less than 2.5% packet loss after tuning. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. Eventually this dataset will be made public.",
"title": ""
}
] |
[
{
"docid": "a4457f8c560d65a80cb03209b4a0a380",
"text": "Purpose – Fundamentally, the success of schools depends on first-rate school leadership, on leaders reinforcing the teachers’ willingness to adhere to the school’s vision, creating a sense of purpose, binding them together and encouraging them to engage in continuous learning. Leadership, vision and organizational learning are considered to be the key to school improvement. However, systematic empirical evidence of a direct relationship between leadership, vision and organizational learning is limited. The present study aims to explore the influence of principals’ leadership style on school organizational learning, using school vision as a mediator. Design/methodology/approach – The data were collected from 1,474 teachers at 104 elementary schools in northern Israel, and aggregated to the school level. Findings – Mediating regression analysis demonstrated that the school vision was a significant predictor of school organizational learning and functioned as a partial mediator only between principals’ transformational leadership style and school organizational learning. Moreover, the principals’ transformational leadership style predicted school organizational vision and school organizational learning processes. In other words, school vision, as shaped by the principal and the staff, is a powerful motivator of the process of organizational learning in school. Research implications/limitations – The research results have implications for the guidance of leadership practice, training, appraisal and professional development. Originality/value – The paper explores the centrality of school vision and its effects on the achievement of the school’s aims by means of organizational learning processes.",
"title": ""
},
{
"docid": "ee0e4dda5654896a27fa6525c23199cc",
"text": "This paper addresses the task of designing a modular neural network architecture that jointly solves different tasks. As an example we use the tasks of depth estimation and semantic segmentation given a single RGB image. The main focus of this work is to analyze the cross-modality influence between depth and semantic prediction maps on their joint refinement. While most of the previous works solely focus on measuring improvements in accuracy, we propose a way to quantify the cross-modality influence. We show that there is a relationship between final accuracy and cross-modality influence, although not a simple linear one. Hence a larger cross-modality influence does not necessarily translate into an improved accuracy. We find that a beneficial balance between the cross-modality influences can be achieved by network architecture and conjecture that this relationship can be utilized to understand different network design choices. Towards this end we propose a Convolutional Neural Network (CNN) architecture that fuses the state-of-the-art results for depth estimation and semantic labeling. By balancing the cross-modality influences between depth and semantic prediction, we achieve improved results for both tasks using the NYU-Depth v2 benchmark.",
"title": ""
},
{
"docid": "c34b6fac632c05c73daee2f0abce3ae8",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "4b3576e6451fa78886ce440e55b04979",
"text": "In this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detection system is designed for a large scale corpus and implemented in Apache Spark. We demonstrate that our system can more precisely detect revisions than state-of-the-art methods by utilizing the Wikipedia revision dumps 1 and simulated data sets.",
"title": ""
},
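The wDTW measure in the entry above aligns two documents by dynamic time warping over word-vector representations. The sketch below implements plain DTW over sentence vectors; representing each sentence by a single embedding and using Euclidean cost are simplifying assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

# Minimal word-vector-style DTW distance between two documents,
# each represented as a sequence of sentence embeddings.

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (n_sentences, dim) arrays of sentence vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

rng = np.random.default_rng(1)
doc_a = rng.normal(size=(12, 50))   # 12 sentence vectors, 50-dim embeddings
doc_b = np.vstack([doc_a[:10] + 0.05 * rng.normal(size=(10, 50)),  # light revision
                   rng.normal(size=(3, 50))])                      # plus new sentences
doc_c = rng.normal(size=(14, 50))   # unrelated document

print("distance to revision :", dtw(doc_a, doc_b))
print("distance to unrelated:", dtw(doc_a, doc_c))
```

On these synthetic vectors the lightly revised document comes out far closer to the original than the unrelated one, which is the property a revision detector relies on.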
{
"docid": "af78c57378a472c8f7be4eb354feb442",
"text": "Mutations in the human sonic hedgehog gene ( SHH) are the most frequent cause of autosomal dominant inherited holoprosencephaly (HPE), a complex brain malformation resulting from incomplete cleavage of the developing forebrain into two separate hemispheres and ventricles. Here we report the clinical and molecular findings in five unrelated patients with HPE and their relatives with an identified SHH mutation. Three new and one previously reported SHH mutations were identified, a fifth proband was found to carry a reciprocal subtelomeric rearrangement involving the SHH locus in 7q36. An extremely wide intrafamilial phenotypic variability was observed, ranging from the classical phenotype with alobar HPE accompanied by typical severe craniofacial abnormalities to very mild clinical signs of choanal stenosis or solitary median maxillary central incisor (SMMCI) only. Two families were initially ascertained because of microcephaly in combination with developmental delay and/or mental retardation and SMMCI, the latter being a frequent finding in patients with an identified SHH mutation. In other affected family members a delay in speech acquisition and learning disabilities were the leading clinical signs. Conclusion: mutational analysis of the sonic hedgehog gene should not only be considered in patients presenting with the classical holoprosencephaly phenotype but also in those with two or more clinical signs of the wide phenotypic spectrum of associated abnormalities, especially in combination with a positive family history.",
"title": ""
},
{
"docid": "13d7ccd473e5db8fabdf4af18688774f",
"text": "Aortopathies pose a significant healthcare burden due to excess early mortality, increasing incidence, and underdiagnosis. Understanding the underlying genetic causes, early diagnosis, timely surveillance, prophylactic repair, and family screening are keys to addressing these diseases. Next-generation sequencing continues to expand our understanding of the genetic causes of heritable aortopathies, rapidly clarifying their underlying molecular pathophysiology and suggesting new potential therapeutic targets. This review will summarize the pathogenetic mechanisms and management of heritable genetic aortopathies with attention to specific forms of both syndromic and nonsyndromic disorders, including Marfan syndrome, Loeys-Dietz syndrome, vascular Ehlers-Danlos syndrome, and familial thoracic aortic aneurysm and dissection.",
"title": ""
},
{
"docid": "5905846f7763039d4f89fcb0b05c66fe",
"text": "This review presents and discusses the contribution of machine learning techniques for diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at its earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patients' management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve sensitivity and specificity of disease detection and monitoring, increasing objectively the clinical decision-making process. This manuscript presents a review in multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches will be present. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allows creating homogeneous groups (unsupervised learning), or creating a classifier predicting group membership of new cases (supervised learning), when a group label is available for each case. To ensure a good performance of the machine learning techniques in a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, the noise should be removed, the missing data should be treated and the data dimensionally (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques in ocular disease diagnosis and monitoring will be presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples will be presented in glaucoma, age-related macular degeneration, and diabetic retinopathy, these ocular pathologies being the major causes of irreversible visual impairment.",
"title": ""
},
{
"docid": "886924ad0c7b354c1ac8aec3955639cc",
"text": "Collaborative filtering is one of the most successful and extensive methods used by recommender systems for predicting the preferences of users. However, traditional collaborative filtering only uses rating information to model the user, the data sparsity problem and the cold start problem will severely reduce the recommendation performance. To overcome these problems, we propose two neural network models to improve recommendations. The first one called TDAE uses a denoising autoencoder to integrate the ratings and the explicit trust relationships between users in the social networks in order to model the preferences of users more accurately. However, the explicit trust information is very sparse, which limits the performance of this model. Therefore, we propose a second method called TDAE++ for extracting the implicit trust relationships between users with similarity measures, where we employ both the explicit and implicit trust information together to improve the quality of recommendations. Finally, we inject the trust information into both the input and the hidden layer in order to fuse these two types of different information to learn more reliable semantic representations of users. Comprehensive experiments based on three popular data sets verify that our proposed models perform better than other state-of-the-art approaches in common recommendation tasks.",
"title": ""
},
{
"docid": "c89a7027de2362aa1bfe64b084073067",
"text": "This paper considers pick-and-place tasks using aerial vehicles equipped with manipulators. The main focus is on the development and experimental validation of a nonlinear model-predictive control methodology to exploit the multi-body system dynamics and achieve optimized performance. At the core of the approach lies a sequential Newton method for unconstrained optimal control and a high-frequency low-level controller tracking the generated optimal reference trajectories. A low cost quadrotor prototype with a simple manipulator extending more than twice the radius of the vehicle is designed and integrated with an on-board vision system for object tracking. Experimental results show the effectiveness of model-predictive control to motivate the future use of real-time optimal control in place of standard ad-hoc gain scheduling techniques.",
"title": ""
},
{
"docid": "216d4c4dc479588fb91a27e35b4cb403",
"text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.",
"title": ""
},
{
"docid": "264d5db966f9cbed6b128087c7e3761e",
"text": "We study auction mechanisms for sharing spectrum among a group of users, subject to a constraint on the interference temperature at a measurement point. The users access the channel using spread spectrum signaling and so interfere with each other. Each user receives a utility that is a function of the received signal-to-interference plus noise ratio. We propose two auction mechanisms for allocating the received power. The first is an auction in which users are charged for received SINR, which, when combined with logarithmic utilities, leads to a weighted max-min fair SINR allocation. The second is an auction in which users are charged for power, which maximizes the total utility when the bandwidth is large enough and the receivers are co-located. Both auction mechanisms are shown to be socially optimal for a limiting “large system” with co-located receivers, where bandwidth, power and the number of users are increased in fixed proportion. We also formulate an iterative and distributed bid updating algorithm, and specify conditions under which this algorithm converges globally to the Nash equilibrium of the auction.",
"title": ""
},
{
"docid": "d4acd79e2fdbc9b87b2dbc6ebfa2dd43",
"text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.",
"title": ""
},
{
"docid": "b16b04f55e7d2ce4f0ba86eb7c0a1996",
"text": "Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced. In this Opinion article, Hosny et al. discuss the application of artificial intelligence to image-based tasks in the field of radiology and consider the advantages and challenges of its clinical implementation.",
"title": ""
},
{
"docid": "f9cd0767487bd46e760d9e3adb35f1fd",
"text": "In this paper, the exciton transport properties of an octa(butyl)-substituted metal-free phthalocyanine (H2-OBPc) molecular crystal have been explored by means of a combined computational (molecular dynamics and electronic structure calculations) and theoretical (model Hamiltonian) approximation. The excitonic couplings in phthalocyanines, where multiple quasi-degenerate excited states are present in the isolated chromophore, are computed with a multistate diabatization scheme which is able to capture both shortand long-range excitonic coupling effects. Thermal motions in phthalocyanine molecular crystals at room temperature cause substantial fluctuation of the excitonic couplings between neighboring molecules (dynamic disorder). The average values of the excitonic couplings are found to be not much smaller than the reorganization energy for the excitation energy transfer and the commonly assumed incoherent regime for this class of materials cannot be invoked. A simple but realistic model Hamiltonian is proposed to study the exciton dynamics in phthalocyanine molecular crystals or aggregates beyond the incoherent regime.",
"title": ""
},
{
"docid": "a46460113926b688f144ddec74e03918",
"text": "The authors describe a new self-report instrument, the Inventory of Depression and Anxiety Symptoms (IDAS), which was designed to assess specific symptom dimensions of major depression and related anxiety disorders. They created the IDAS by conducting principal factor analyses in 3 large samples (college students, psychiatric patients, community adults); the authors also examined the robustness of its psychometric properties in 5 additional samples (high school students, college students, young adults, postpartum women, psychiatric patients) who were not involved in the scale development process. The IDAS contains 10 specific symptom scales: Suicidality, Lassitude, Insomnia, Appetite Loss, Appetite Gain, Ill Temper, Well-Being, Panic, Social Anxiety, and Traumatic Intrusions. It also includes 2 broader scales: General Depression (which contains items overlapping with several other IDAS scales) and Dysphoria (which does not). The scales (a) are internally consistent, (b) capture the target dimensions well, and (c) define a single underlying factor. They show strong short-term stability and display excellent convergent validity and good discriminant validity in relation to other self-report and interview-based measures of depression and anxiety.",
"title": ""
},
{
"docid": "ecd99c9f87e1c5e5f529cb5fcbb206f2",
"text": "The concept of supply chain is about managing coordinated information and material flows, plant operations, and logistics. It provides flexibility and agility in responding to consumer demand shifts without cost overlays in resource utilization. The fundamental premise of this philosophy is; synchronization among multiple autonomous business entities represented in it. That is, improved coordination within and between various supply-chain members. Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member as well as the supply chain. Describes architecture to create the appropriate structure, install proper controls, and implement principles of optimization to synchronize the supply chain. A supply-chain model based on a collaborative system approach is illustrated utilizing the example of the textile industry. process flexibility and coordination of processes across many sites. More and more organizations are promoting employee empowerment and the need for rules-based, real-time decision support systems to attain organizational and process flexibility, as well as to respond to competitive pressure to introduce new products more quickly, cheaply and of improved quality. The underlying philosophy of managing supply chains has evolved to respond to these changing business trends. Supply-chain management phenomenon has received the attention of researchers and practitioners in various topics. In the earlier years, the emphasis was on materials planning utilizing materials requirements planning techniques, inventory logistics management with one warehouse multi-retailer distribution system, and push and pull operation techniques for production systems. In the last few years, however, there has been a renewed interest in designing and implementing integrated systems, such as enterprise resource planning, multi-echelon inventory, and synchronous-flow manufacturing, respectively. A number of factors have contributed to this shift. First, there has been a realization that better planning and management of complex interrelated systems, such as materials planning, inventory management, capacity planning, logistics, and production systems will lead to overall improvement in enterprise productivity. Second, advances in information and communication technologies complemented by sophisticated decision support systems enable the designing, implementing and controlling of the strategic and tactical strategies essential to delivery of integrated systems. In the next section, a framework that offers an unified approach to dealing with enterprise related problems is presented. A framework for analysis of enterprise integration issues As mentioned in the preceding section, the availability of advanced production and logistics management systems has the potential of fundamentally influencing enterprise integration issues. The motivation in pursuing research issues described in this paper is to propose a framework that enables dealing with these effectively. The approach suggested in this paper utilizing supply-chain philosophy for enterprise integration proposes domain independent problem solving and modeling, and domain dependent analysis and implementation. The purpose of the approach is to ascertain characteristics of the problem independent of the specific problem environment. 
Consequently, the approach delivers solution(s) or the solution method that are intrinsic to the problem and not its environment. Analysis methods help to understand characteristics of the solution methodology, as well as providing specific guarantees of effectiveness. Invariably, insights gained from these analyses can be used to develop effective problem solving tools and techniques for complex enterprise integration problems. The discussion of the framework is organized as follows. First, the key guiding principles of the proposed framework on which a supply chain ought to be built are outlined. Then, a cooperative supply-chain (CSC) system is described as a special class of a supply-chain network implementation. Next, discussion on a distributed problem-solving strategy that could be employed in integrating this type of system is presented. Following this, key components of a CSC system are described. Finally, insights on modeling a CSC system are offered. Key modeling principles are elaborated through two distinct modeling approaches in the management science discipline. Supply chain guiding principles: Firms have increasingly been adopting enterprise/supply-chain management techniques in order to deal with integration issues. To focus on these integration efforts, the following guiding principles for the supply-chain framework are proposed. These principles encapsulate trends in production and logistics management that a supply-chain arrangement may be designed to capture. . Supply chain is a cooperative system. The supply-chain arrangement exists on cooperation among its members. Cooperation occurs in many forms, such as sharing common objectives and goals for the group entity; utilizing joint policies, for instance in marketing and production; setting up common budgets, cost and price structures; and identifying commitments on capacity, production plans, etc. . Supply chain exists on the group dynamics of its members. The existence of a supply chain is dependent on the interaction among its members. This interaction occurs in the form of exchange of information with regard to input, output, functions and controls, such as objectives and goals, and policies. By analyzing this information, members of a supply chain may choose to modify their behavior attuned with group expectations. . Negotiation and compromise are norms of operation in a supply chain. In order to realize goals and objectives of the group, members negotiate on commitments made to one another for price, capacity, production plans, etc. These negotiations often lead to compromises by one or many members on these issues, leading up to realization of sub-optimal goals and objectives by members. . Supply-chain system solutions are Pareto-optimal (satisficing), not optimizing. Supply-chain problems similar to many real world applications involve several objective functions of its members simultaneously. In all such applications, it is extremely rare to have one feasible solution that simultaneously optimizes all of the objective functions. Typically, optimizing one of the objective functions has the effect of moving another objective function away from its most desirable value. These are the usual conflicts among the objective functions in the multiobjective models. As a multi-objective problem, the supply-chain model produces non-dominated or Pareto-optimal solutions.
That is, solutions for a supply-chain problem do not leave any member worse-off at the expense of another. . Integration in supply chain is achieved through synchronization. Integration across the supply chain is achieved through synchronization of activities at the member entity and aggregating its impact through process, function, business, and on to enterprise levels, either at the member entity or the group entity. Thus, by synchronization of supply-chain components, existing bottlenecks in the system are eliminated, while future ones are prevented from occurring. A cooperative supply-chain: A supply-chain network depicted in Figure 1 can be a complex web of systems, sub-systems, operations, activities, and their relationships to one another, belonging to its various members namely, suppliers, carriers, manufacturing plants, distribution centers, retailers, and consumers. The design, modeling and implementation of such a system, therefore, can be difficult, unless various parts of it are cohesively tied to the whole. The concept of a supply-chain is about managing coordinated information and material flows, plant operations, and logistics through a common set of principles, strategies, policies, and performance metrics throughout its developmental life cycle (Lee and Billington, 1993). It provides flexibility and agility in responding to consumer demand shifts with minimum cost overlays in resource utilization. The fundamental premise of this philosophy is synchronization among multiple autonomous entities represented in it. That is, improved coordination within and between various supply-chain members. Coordination is achieved within the framework of commitments made by members to each other. Members negotiate and compromise in a spirit of cooperation in order to meet these commitments. Hence, the label cooperative supply-chain (CSC). Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member, as well as the supply-chain (group) (Chandra, 1997; Poirier, 1999; Tzafastas and Kapsiotis, 1994). A generic textile supply chain has for its primary raw material vendors, cotton growers and/or chemical suppliers, depending upon whether the end product is cotton, polyester or some combination of cotton and polyester garment. Secondary raw material vendors are suppliers of accessories such as zippers, buttons, thread, garment tags, etc. Other tier suppliers in the complete pipeline are: fiber manufacturers for producing the polyester or cotton fiber yarn; textile manufacturers for weaving and dyeing yarn into colored textile fabric; an apparel maker for cutting, sewing and packing the garment; a distribution center for merchandising the garment; and a retailer selling the brand name garment to consumers at a shopping mall or center. Synchronization of the textile supply chain is achieved through coordination primarily of: . replenishment schedules that have be",
"title": ""
},
{
"docid": "0aeb9567ed3ddf5ca7f33725fb5aa310",
"text": "Code-reuse attacks based on return oriented programming are among the most popular exploitation techniques used by attackers today. Few practical defenses are able to stop such attacks on arbitrary binaries without access to source code. A notable exception are the techniques that employ new hardware, such as Intel’s Last Branch Record (LBR) registers, to track all indirect branches and raise an alert when a sensitive system call is reached by means of too many indirect branches to short gadgets—under the assumption that such gadget chains would be indicative of a ROP attack. In this paper, we evaluate the implications. What is “too many” and how short is “short”? Getting the thresholds wrong has serious consequences. In this paper, we show by means of an attack on Internet Explorer that while current defenses based on these techniques raise the bar for exploitation, they can be bypassed. Conversely, tuning the thresholds to make the defenses more aggressive, may flag legitimate program behavior as an attack. We analyze the problem in detail and show that determining the right values is difficult.",
"title": ""
},
{
"docid": "745562de56499ff0030f35afa8d84b7f",
"text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.",
"title": ""
},
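One of the two detectors compared in the entry above, the n-gram anomaly detector, is easy to illustrate: learn the n-grams occurring in normal (discretized) readings and score new windows by how many of their n-grams were never seen. The sketch below is a toy version of that idea; the discretization step and the scoring rule are assumptions for demonstration only.

```python
from collections import Counter

# Toy n-gram anomaly detector over discretized SCADA readings.
# Discretization step and threshold-free score are illustrative choices.

def discretize(values, step=10):
    """Map raw sensor readings to coarse integer symbols."""
    return tuple(int(v // step) for v in values)

def ngrams(symbols, n=3):
    return [symbols[i:i + n] for i in range(len(symbols) - n + 1)]

def train(normal_readings, n=3):
    model = Counter()
    for g in ngrams(discretize(normal_readings), n):
        model[g] += 1
    return model

def anomaly_score(model, readings, n=3):
    grams = ngrams(discretize(readings), n)
    unseen = sum(1 for g in grams if g not in model)
    return unseen / max(len(grams), 1)   # fraction of never-seen n-grams

normal = [230 + (i % 5) for i in range(200)]               # well-behaved voltage trace
model = train(normal)
print(anomaly_score(model, [230, 231, 232, 233, 234]))     # low score
print(anomaly_score(model, [230, 231, 480, 233, 234]))     # bad value -> high score
```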
{
"docid": "c3f23cf5015e35dfd4b10254984bf0d4",
"text": "We investigate the applicability of passive RFID systems to the task of identifying multiple tagged objects simultaneously, assuming that the number of tags is not known in advance. We present a combinatorial model of the communication mechanism between the reader device and the tags, and use this model to derive the optimal parameter setting for the reading process, based on estimates for the number of tags. Some results on the performance of an implementation are presented. Keywords— RFID, collision-resolution, tagging, combinatorics.",
"title": ""
},
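To make the idea of "optimal parameter setting from a tag-count estimate" in the entry above concrete, the sketch below uses the generic framed-slotted ALOHA model, in which the per-slot success probability peaks when the frame size is close to the estimated number of tags. This is a textbook stand-in, not necessarily the combinatorial model the paper derives.

```python
# Per-slot probability that exactly one of n tags answers in a frame of
# L slots: (n/L) * (1 - 1/L)**(n - 1). Maximizing it over L gives the
# frame size to configure the reader with, given a tag-count estimate.

def read_efficiency(n_tags: int, frame_size: int) -> float:
    """Expected fraction of slots that identify exactly one tag."""
    return (n_tags / frame_size) * (1 - 1 / frame_size) ** (n_tags - 1)

n_est = 40   # estimated number of tags in range (input to the reader)
best_L = max(range(1, 4 * n_est + 1), key=lambda L: read_efficiency(n_est, L))
print("estimated tags:", n_est, "-> frame size maximizing efficiency:", best_L)
print("efficiency at that frame size:", round(read_efficiency(n_est, best_L), 3))
```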
{
"docid": "553e476ad6a0081aed01775f995f4d16",
"text": "This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018). First, we summarize the research trends of papers presented in the proceedings, and note that there is particular interest in linguistic structure, domain adaptation, data augmentation, handling inadequate resources, and analysis of models. Second, we describe the results of the workshop’s shared task on efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient.",
"title": ""
}
] |
scidocsrr
|
22e3f411d852cef6d1d7ec72aabbe735
|
Power-aware routing based on the energy drain rate for mobile ad hoc networks
|
[
{
"docid": "7785c16b3d0515057c8a0ec0ed55b5de",
"text": "Most ad hoc mobile devices today operate on batteries. Hence, power consumption becomes an important issue. To maximize the lifetime of ad hoc mobile networks, the power consumption rate of each node must be evenly distributed, and the overall transmission power for each connection request must be minimized. These two objectives cannot be satisfied simultaneously by employing routing algorithms proposed in previous work. In this article we present a new power-aware routing protocol to satisfy these two constraints simultaneously; we also compare the performance of different types of power-related routing algorithms via simulation. Simulation results confirm the need to strike a balance in attaining service availability performance of the whole network vs. the lifetime of ad hoc mobile devices.",
"title": ""
},
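A simple way to make the two objectives above concrete (spreading power consumption evenly while keeping transmission power low) is a route cost built on each node's energy drain rate. The sketch below picks the route whose most stressed node survives longest and breaks ties by total transmission power; this cost model is an illustrative assumption, not the exact metric of the protocols discussed here.

```python
# Illustrative drain-rate-based route selection for an ad hoc network.

def node_lifetime(node):
    # node: dict with residual battery energy (J) and current drain rate (J/s)
    return node["energy"] / max(node["drain_rate"], 1e-9)

def route_cost(route, tx_power):
    # route: list of intermediate-node dicts; tx_power: per-hop powers (W)
    return (-min(node_lifetime(n) for n in route),  # maximize the minimum lifetime
            sum(tx_power))                          # then minimize total power

def select_route(candidates):
    # candidates: list of (route, per-hop tx powers); pick the cheapest
    return min(candidates, key=lambda c: route_cost(*c))

route_a = ([{"energy": 40.0, "drain_rate": 0.5}, {"energy": 80.0, "drain_rate": 0.2}],
           [0.1, 0.1, 0.1])
route_b = ([{"energy": 10.0, "drain_rate": 0.5}, {"energy": 90.0, "drain_rate": 0.1}],
           [0.05, 0.05, 0.05])
best = select_route([route_a, route_b])
print("chosen route min-lifetime (s):", min(node_lifetime(n) for n in best[0]))
```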
{
"docid": "bbdb676a2a813d29cd78facebc38a9b8",
"text": "In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the adition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.",
"title": ""
}
] |
[
{
"docid": "ef3bfb8b04eea94724e0124b0cfe723e",
"text": "Generative adversarial networks (GANs) have demonstrated to be successful at generating realistic real-world images. In this paper we compare various GAN techniques, both supervised and unsupervised. The effects on training stability of different objective functions are compared. We add an encoder to the network, making it possible to encode images to the latent space of the GAN. The generator, discriminator and encoder are parameterized by deep convolutional neural networks. For the discriminator network we experimented with using the novel Capsule Network, a state-of-the-art technique for detecting global features in images. Experiments are performed using a digit and face dataset, with various visualizations illustrating the results. The results show that using the encoder network it is possible to reconstruct images. With the conditional GAN we can alter visual attributes of generated or encoded images. The experiments with the Capsule Network as discriminator result in generated images of a lower quality, compared to a standard convolutional neural network.",
"title": ""
},
{
"docid": "63934cfd6042d8bb2227f4e83b005cc2",
"text": "To support effective exploration, it is often stated that interactive visualizations should provide rapid response times. However, the effects of interactive latency on the process and outcomes of exploratory visual analysis have not been systematically studied. We present an experiment measuring user behavior and knowledge discovery with interactive visualizations under varying latency conditions. We observe that an additional delay of 500ms incurs significant costs, decreasing user activity and data set coverage. Analyzing verbal data from think-aloud protocols, we find that increased latency reduces the rate at which users make observations, draw generalizations and generate hypotheses. Moreover, we note interaction effects in which initial exposure to higher latencies leads to subsequently reduced performance in a low-latency setting. Overall, increased latency causes users to shift exploration strategy, in turn affecting performance. We discuss how these results can inform the design of interactive analysis tools.",
"title": ""
},
{
"docid": "5229fb13c66ca8a2b079f8fe46bb9848",
"text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.",
"title": ""
},
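Since the entry above benchmarks against Barrett's reduction, a compact sketch of the classic Barrett method is a useful reference point: the run-time division in x mod m is replaced by multiplications with a precomputed constant. This is the textbook baseline, not the run-length lookup-table scheme the paper proposes.

```python
# Classic Barrett reduction: precompute mu = floor(2^(2k) / m) once,
# then reduce any x < m**2 with shifts, multiplications, and at most a
# couple of corrective subtractions.

def barrett_setup(m: int):
    k = m.bit_length()
    mu = (1 << (2 * k)) // m      # precomputed floor(2^(2k) / m)
    return k, mu

def barrett_reduce(x: int, m: int, k: int, mu: int) -> int:
    """Compute x mod m for 0 <= x < m**2 without a division at run time."""
    q = (x * mu) >> (2 * k)       # estimate of floor(x / m)
    r = x - q * m
    while r >= m:                 # at most two corrective subtractions
        r -= m
    return r

m = 0xFFFFFFFB                    # a sample modulus
k, mu = barrett_setup(m)
x = 123456789123456789 % (m * m)  # ensure the precondition x < m^2
assert barrett_reduce(x, m, k, mu) == x % m
print(barrett_reduce(x, m, k, mu))
```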
{
"docid": "f195e7f1018e1e1a6836c9d110ce1de4",
"text": "Motivated by the goal of obtaining more-anthropomorphic walking in bipedal robots, this paper considers a hybrid model of a 3D hipped biped with feet and locking knees. The main observation of this paper is that functional Routhian Reduction can be used to extend two-dimensional walking to three dimensions—even in the presence of periods of underactuation—by decoupling the sagittal and coronal dynamics of the 3D biped. Specifically, we assume the existence of a control law that yields stable walking for the 2D sagittal component of the 3D biped. The main result of the paper is that utilizing this controller together with “reduction control laws” yields walking in three dimensions. This result is supported through simulation.",
"title": ""
},
{
"docid": "0c1cd807339481f3a0b6da1fbe96950c",
"text": "Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27x. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30x.",
"title": ""
},
{
"docid": "76def4ca02a25669610811881531e875",
"text": "The design and implementation of a novel frequency synthesizer based on low phase-noise digital dividers and a direct digital synthesizer is presented. The synthesis produces two low noise accurate tunable signals at 10 and 100 MHz. We report the measured residual phase noise and frequency stability of the syn thesizer and estimate the total frequency stability, which can be expected from the synthesizer seeded with a signal near 11.2 GHz from an ultra-stable cryocooled sapphire oscillator (cryoCSO). The synthesizer residual single-sideband phase noise, at 1-Hz offset, on 10and 100-MHz signals was -135 and -130 dBc/Hz, respectively. The frequency stability contributions of these two sig nals was σ<sub>y</sub> = 9 × 10<sup>-15</sup> and σ<sub>y</sub> = 2.2 × 10<sup>-15</sup>, respectively, at 1-s integration time. The Allan deviation of the total fractional frequency noise on the 10- and 100-MHz signals derived from the synthesizer with the cry oCSO may be estimated, respectively, as σ<sub>y</sub> ≈ 3.6 × 10<sup>-15</sup> τ<sup>-1/2</sup> + 4 × 10<sup>-16</sup> and σ<sub>y</sub> ≈ s 5.2 × 10<sup>-2</sup> × 10<sup>-16</sup> τ<sup>-1/2</sup> + 3 × 10<sup>-16</sup>, respectively, for 1 ≤ τ <; 10<sup>4</sup>s. We also calculate the coherence function (a figure of merit for very long baseline interferometry in radio astronomy) for observation frequencies of 100, 230, and 345 GHz, when using the cry oCSO and a hydrogen maser. The results show that the cryoCSO offers a significant advantage at frequencies above 100 GHz.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "8618b407f851f0806920f6e28fdefe3f",
"text": "The explosive growth of Internet applications and content, during the last decade, has revealed an increasing need for information filtering and recommendation. Most research in the area of recommendation systems has focused on designing and implementing efficient algorithms that provide accurate recommendations. However, the selection of appropriate recommendation content and the presentation of information are equally important in creating successful recommender applications. This paper addresses issues related to the presentation of recommendations in the movies domain. The current work reviews previous research approaches and popular recommender systems, and focuses on user persuasion and satisfaction. In our experiments, we compare different presentation methods in terms of recommendations’ organization in a list (i.e. top N-items list and structured overview) and recommendation modality (i.e. simple text, combination of text and image, and combination of text and video). The most efficient presentation methods, regarding user persuasion and satisfaction, proved to be the “structured overview” and the “text and video” interfaces, while a strong positive correlation was also found between user satisfaction and persuasion in all experimental conditions.",
"title": ""
},
{
"docid": "aa58cb2b2621da6260aeb203af1bd6f1",
"text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.",
"title": ""
},
{
"docid": "3e7e4b5c2a73837ac5fa111a6dc71778",
"text": "Merging the best features of RBAC and attribute-based systems can provide effective access control for distributed and rapidly changing applications.",
"title": ""
},
{
"docid": "edd6d9843c8c24497efa336d1a26be9d",
"text": "Alzheimer's disease (AD) can be diagnosed with a considerable degree of accuracy. In some centers, clinical diagnosis predicts the autopsy diagnosis with 90% certainty in series reported from academic centers. The characteristic histopathologic changes at autopsy include neurofibrillary tangles, neuritic plaques, neuronal loss, and amyloid angiopathy. Mutations on chromosomes 21, 14, and 1 cause familial AD. Risk factors for AD include advanced age, lower intelligence, small head size, and history of head trauma; female gender may confer additional risks. Susceptibility genes do not cause the disease by themselves but, in combination with other genes or epigenetic factors, modulate the age of onset and increase the probability of developing AD. Among several putative susceptibility genes (on chromosomes 19, 12, and 6), the role of apolipoprotein E (ApoE) on chromosome 19 has been repeatedly confirmed. Protective factors include ApoE-2 genotype, history of estrogen replacement therapy in postmenopausal women, higher educational level, and history of use of nonsteroidal anti-inflammatory agents. The most proximal brain events associated with the clinical expression of dementia are progressive neuronal dysfunction and loss of neurons in specific regions of the brain. Although the cascade of antecedent events leading to the final common path of neurodegeneration must be determined in greater detail, the accumulation of stable amyloid is increasingly widely accepted as a central pathogenetic event. All mutations known to cause AD increase the production of beta-amyloid peptide. This protein is derived from amyloid precursor protein and, when aggregated in a beta-pleated sheet configuration, is neurotoxic and forms the core of neuritic plaques. Nerve cell loss in selected nuclei leads to neurochemical deficiencies, and the combination of neuronal loss and neurotransmitter deficits leads to the appearance of the dementia syndrome. The destructive aspects include neurochemical deficits that disrupt cell-to-cell communications, abnormal synthesis and accumulation of cytoskeletal proteins (e.g., tau), loss of synapses, pruning of dendrites, damage through oxidative metabolism, and cell death. The concepts of cognitive reserve and symptom thresholds may explain the effects of education, intelligence, and brain size on the occurrence and timing of AD symptoms. Advances in understanding the pathogenetic cascade of events that characterize AD provide a framework for early detection and therapeutic interventions, including transmitter replacement therapies, antioxidants, anti-inflammatory agents, estrogens, nerve growth factor, and drugs that prevent amyloid formation in the brain.",
"title": ""
},
{
"docid": "0182e6dcf7c8ec981886dfa2586a0d5d",
"text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discrimating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discrimate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitichondia and chloroplasts).\n\n\nOVERVIEW\nA gas chromotography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discrimated between each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are two most important metabolites for discriminating between the crosses. These results are consistant with genotype differences in mitochondia and chloroplasts.",
"title": ""
},
{
"docid": "8fb37cad9ad964598ed718f0c32eaff1",
"text": "A planar W-band monopulse antenna array is designed based on the substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in the compact layout through the low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93 ~ 96 GHz with narrow-beam and high-gain. The maximal gain is measured to be 25.8 dBi, while the maximal null-depth is measured to be - 43.7 dB. This SIW monopulse antenna not only has advantages of low-cost, light, easy-fabrication, etc., but also has good performance validated by measurements. It presents an excellent candidate for W-band directional-finding systems.",
"title": ""
},
{
"docid": "fdb0009b962254761541eb08f556fa0e",
"text": "Nonionic surfactants are widely used in the development of protein pharmaceuticals. However, the low level of residual peroxides in surfactants can potentially affect the stability of oxidation-sensitive proteins. In this report, we examined the peroxide formation in polysorbate 80 under a variety of storage conditions and tested the potential of peroxides in polysorbate 80 to oxidize a model protein, IL-2 mutein. For the first time, we demonstrated that peroxides can be easily generated in neat polysorbate 80 in the presence of air during incubation at elevated temperatures. Polysorbate 80 in aqueous solution exhibited a faster rate of peroxide formation and a greater amount of peroxides during incubation, which is further promoted/catalyzed by light. Peroxide formation can be greatly inhibited by preventing any contact with air/oxygen during storage. IL-2 mutein can be easily oxidized both in liquid and solid states. A lower level of peroxides in polysorbate 80 did not change the rate of IL-2 mutein oxidation in liquid state but significantly accelerated its oxidation in solid state under air. A higher level of peroxides in polysorbate 80 caused a significant increase in IL-2 mutein oxidation both in liquid and solid states, and glutathione can significantly inhibit the peroxide-induced oxidation of IL-2 mutein in a lyophilized formulation. In addition, a higher level of peroxides in polysorbate 80 caused immediate IL-2 mutein oxidation during annealing in lyophilization, suggesting that implementation of an annealing step needs to be carefully evaluated in the development of a lyophilization process for oxidation-sensitive proteins in the presence of polysorbate.",
"title": ""
},
{
"docid": "c589dd4a3da018fbc62d69e2d7f56e88",
"text": "More than 520 soil samples were surveyed for species of the mycoparasitic zygomycete genus Syncephalis using a culture-based approach. These fungi are relatively common in soil using the optimal conditions for growing both the host and parasite. Five species obtained in dual culture are unknown to science and are described here: (i) S. digitata with sporangiophores short, merosporangia separate at the apices, simple, 3-5 spored; (ii) S. floridana, which forms galls in the host and has sporangiophores up to 170 µm long with unbranched merosporangia that contain 2-4 spores; (iii) S. pseudoplumigaleta, with an abrupt apical bend in the sporophore; (iv) S. pyriformis with fertile vesicles that are long-pyriform; and (v) S. unispora with unispored merosporangia. To facilitate future molecular comparisons between species of Syncephalis and to allow identification of these fungi from environmental sampling datasets, we used Syncephalis-specific PCR primers to generate internal transcribed spacer (ITS) sequences for all five new species.",
"title": ""
},
{
"docid": "9b44cee4e65922bb07682baf0d395730",
"text": "Zero-shot learning has gained popularity due to its potential to scale recognition models without requiring additional training data. This is usually achieved by associating categories with their semantic information like attributes. However, we believe that the potential offered by this paradigm is not yet fully exploited. In this work, we propose to utilize the structure of the space spanned by the attributes using a set of relations. We devise objective functions to preserve these relations in the embedding space, thereby inducing semanticity to the embedding space. Through extensive experimental evaluation on five benchmark datasets, we demonstrate that inducing semanticity to the embedding space is beneficial for zero-shot learning. The proposed approach outperforms the state-of-the-art on the standard zero-shot setting as well as the more realistic generalized zero-shot setting. We also demonstrate how the proposed approach can be useful for making approximate semantic inferences about an image belonging to a category for which attribute information is not available.",
"title": ""
},
{
"docid": "0e2d6ebfade09beb448e9c538dadd015",
"text": "Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002’s DB1 and DB2 databases. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6386c0ef0d7cc5c33e379d9c4c2ca019",
"text": "BACKGROUND\nEven after negative sentinel lymph node biopsy (SLNB) for primary melanoma, patients who develop in-transit (IT) melanoma or local recurrences (LR) can have subclinical regional lymph node involvement.\n\n\nSTUDY DESIGN\nA prospective database identified 33 patients with IT melanoma/LR who underwent technetium 99m sulfur colloid lymphoscintigraphy alone (n = 15) or in conjunction with lymphazurin dye (n = 18) administered only if the IT melanoma/LR was concurrently excised.\n\n\nRESULTS\nSeventy-nine percent (26 of 33) of patients undergoing SLNB in this study had earlier removal of lymph nodes in the same lymph node basin as the expected drainage of the IT melanoma or LR at the time of diagnosis of their primary melanoma. Lymphoscintography at time of presentation with IT melanoma/LR was successful in 94% (31 of 33) cases, and at least 1 sentinel lymph node was found intraoperatively in 97% (30 of 31) cases. The SLNB was positive in 33% (10 of 30) of these cases. Completion lymph node dissection was performed in 90% (9 of 10) of patients. Nine patients with negative SLNB and IT melanoma underwent regional chemotherapy. Patients in this study with a positive sentinel lymph node at the time the IT/LR was mapped had a considerably shorter time to development of distant metastatic disease compared with those with negative sentinel lymph nodes.\n\n\nCONCLUSIONS\nIn this study, we demonstrate the technical feasibility and clinical use of repeat SLNB for recurrent melanoma. Performing SLNB cannot only optimize local, regional, and systemic treatment strategies for patients with LR or IT melanoma, but also appears to provide important prognostic information.",
"title": ""
},
{
"docid": "a741a386cdbaf977468782c1971c8d86",
"text": "There is a trend that, virtually everyone, ranging from big Web companies to traditional enterprisers to physical science researchers to social scientists, is either already experiencing or anticipating unprecedented growth in the amount of data available in their world, as well as new opportunities and great untapped value. This paper reviews big data challenges from a data management respective. In particular, we discuss big data diversity, big data reduction, big data integration and cleaning, big data indexing and query, and finally big data analysis and mining. Our survey gives a brief overview about big-data-oriented research and problems.",
"title": ""
},
{
"docid": "dc5e69ca604d7fde242876d5464fb045",
"text": "We propose a general Convolutional Neural Network (CNN) encoder model for machine translation that fits within in the framework of Encoder-Decoder models proposed by Cho, et. al. [1]. A CNN takes as input a sentence in the source language, performs multiple convolution and pooling operations, and uses a fully connected layer to produce a fixed-length encoding of the sentence as input to a Recurrent Neural Network decoder (using GRUs or LSTMs). The decoder, encoder, and word embeddings are jointly trained to maximize the conditional probability of the target sentence given the source sentence. Many variations on the basic model are possible and can improve the performance of the model.",
"title": ""
}
] |
scidocsrr
|
6ca533a904ec1622f69593cff72dd8e8
|
Indirect content privacy surveys: measuring privacy without asking about it
|
[
{
"docid": "575da85b3675ceaec26143981dbe9b53",
"text": "People are increasingly required to disclose personal information to computerand Internetbased systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity was used primarily by males in response to an income question in a high privacy condition. Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "1c832140fce684c68fd91779d62596e3",
"text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.",
"title": ""
},
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "671573d5f3fc356ee0a5a3e373d6a52f",
"text": "This paper presents a fuzzy logic control for a speed control of DC induction motor. The simulation developed by using Fuzzy MATLAB Toolbox and SIMULINK. The fuzzy logic controller is also introduced to the system for keeping the motor speed to be constant when the load varies. Because of the low maintenance and robustness induction motors have many applications in the industries. The speed control of induction motor is more important to achieve maximum torque and efficiency. The result of the 3x3 matrix fuzzy control rules and 5x5 matrix fuzzy control rules of the theta and speed will do comparison in this paper. Observation the effects of the fuzzy control rules on the performance of the DC- induction motor-speed control.",
"title": ""
},
{
"docid": "872d06c4d3702d79cb1c7bcbc140881a",
"text": "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.\nExisting noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.",
"title": ""
},
{
"docid": "90dfa19b821aeab985a96eba0c3037d3",
"text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.",
"title": ""
},
{
"docid": "51179905a1ded4b38d7ba8490fbdac01",
"text": "Psychology—the way learning is defined, studied, and understood—underlies much of the curricular and instructional decision-making that occurs in education. Constructivism, perhaps the most current psychology of learning, is no exception. Initially based on the work of Jean Piaget and Lev Vygotsky, and then supported and extended by contemporary biologists and cognitive scientists, it is having major ramifications on the goals teachers set for the learners with whom they work, the instructional strategies teachers employ in working towards these goals, and the methods of assessment utilized by school personnel to document genuine learning. What is this theory of learning and development that is the basis of the current reform movement and how is it different from other models of psychology?",
"title": ""
},
{
"docid": "1fc10d626c7a06112a613f223391de26",
"text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …",
"title": ""
},
{
"docid": "fbfd3294cfe070ac432bf087fc382b18",
"text": "The alignment of business and information technology (IT) strategies is an important and enduring theoretical challenge for the information systems discipline, remaining a top issue in practice over the past 20 years. Multi-business organizations (MBOs) present a particular alignment challenge because business strategies are developed at the corporate level, within individual strategic business units and across the corporate investment cycle. In contrast, the extant literature implicitly assumes that IT strategy is aligned with a single business strategy at a single point in time. This paper draws on resource-based theory and path dependence to model functional, structural, and temporal IT strategic alignment in MBOs. Drawing on Makadok’s theory of profit, we show how each form of alignment creates value through the three strategic drivers of competence, governance, and flexibility, respectively. We illustrate the model with examples from a case study on the Commonwealth Bank of Australia. We also explore the model’s implications for existing IT alignment models, providing alternative theoretical explanations for how IT alignment creates value. Journal of Information Technology (2015) 30, 101–118. doi:10.1057/jit.2015.1; published online 24 March 2015",
"title": ""
},
{
"docid": "b03273ada7d85d37e4c44f1195c9a450",
"text": "Nowadays the trend to solve optimization problems is to use s pecific algorithms rather than very general ones. The UNLocBoX provides a general framework allowing the user to design his own algorithms. To do so, the framework try to stay as close from the mathematical problem as possible. M ore precisely, the UNLocBoX is a Matlab toolbox designed to solve convex optimi zation problem of the form",
"title": ""
},
{
"docid": "48fffb441a5e7f304554e6bdef6b659e",
"text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.",
"title": ""
},
{
"docid": "d21308f9ffa990746c6be137964d2e12",
"text": "'Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers', This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "c2fc4e65c484486f5612f4006b6df102",
"text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.",
"title": ""
},
{
"docid": "1ecbdb3a81e046452905105600b90780",
"text": "Identity-invariant estimation of head pose from still images is a challenging task due to the high variability of facial appearance. We present a novel 3D head pose estimation approach, which utilizes the flexibility and expressibility of a dense generative 3D facial model in combination with a very fast fitting algorithm. The efficiency of the head pose estimation is obtained by a 2D synthesis of the facial input image. This optimization procedure drives the appearance and pose of the 3D facial model. In contrast to many other approaches we are specifically interested in the more difficult task of head pose estimation from still images, instead of tracking faces in image sequences. We evaluate our approach on two publicly available databases (FacePix and USF HumanID) and compare our method to the 3D morphable model and other state of the art approaches in terms of accuracy and speed.",
"title": ""
},
{
"docid": "2ce36ce9de500ba2367b1af83ac3e816",
"text": "We examine whether the information content of the earnings report, as captured by the earnings response coefficient (ERC), increases when investors’ uncertainty about the manager’s reporting objectives decreases, as predicted in Fischer and Verrecchia (2000). We use the 2006 mandatory compensation disclosures as an instrument to capture a decrease in investors’ uncertainty about managers’ incentives and reporting objectives. Employing a difference-in-differences design and exploiting the staggered adoption of the new rules, we find a statistically and economically significant increase in ERC for treated firms relative to control firms, largely driven by profit firms. Cross-sectional tests suggest that the effect is more pronounced in subsets of firms most affected by the new rules. Our findings represent the first empirical evidence of a role of compensation disclosures in enhancing the information content of financial reports. JEL Classification: G38, G30, G34, M41",
"title": ""
},
{
"docid": "959ad8268836d34648a52c449f5de987",
"text": "There is widespread sentiment that fast gradient methods (e.g. Nesterov’s acceleration, conjugate gradient, heavy ball) are not effective for the purposes of stochastic optimization due to their instability and error accumulation. Numerous works have attempted to quantify these instabilities in the face of either statistical or non-statistical errors (Paige, 1971; Proakis, 1974; Polyak, 1987; Greenbaum, 1989; Roy and Shynk, 1990; Sharma et al., 1998; d’Aspremont, 2008; Devolder et al., 2014; Yuan et al., 2016). This work considers these issues for the special case of stochastic approximation for the least squares regression problem, and our main result refutes this conventional wisdom by showing that acceleration can be made robust to statistical errors. In particular, this work introduces an accelerated stochastic gradient method that provably achieves the minimax optimal statistical risk faster than stochastic gradient descent. Critical to the analysis is a sharp characterization of accelerated stochastic gradient descent as a stochastic process. We hope this characterization gives insights towards the broader question of designing simple and effective accelerated stochastic methods for more general convex and non-convex optimization problems.",
"title": ""
},
{
"docid": "3c33528735b53a4f319ce4681527c163",
"text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈[email protected]〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. 
Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our",
"title": ""
},
{
"docid": "56a072fc480c64e6a288543cee9cd5ac",
"text": "The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build up the existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R- CNN, Faster R- CNN, and R- FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve the state-of-the-art results.",
"title": ""
},
{
"docid": "7fd5f3461742db10503dd5e3d79fe3ed",
"text": "There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.",
"title": ""
},
{
"docid": "14032695043a1cc16239317e496bac35",
"text": "The rearing of bees is a quite difficult job since it requires experience and time. Beekeepers are used to take care of their bee colonies observing them and learning to interpret their behavior. Despite the rearing of bees represents one of the most antique human habits, nowadays bees risk the extinction principally because of the increasing pollution levels related to human activity. It is important to increase our knowledge about bees in order to develop new practices intended to improve their protection. These practices could include new technologies, in order to increase profitability of beekeepers and economical interest related to bee rearing, but also innovative rearing techniques, genetic selections, environmental politics and so on. Moreover bees, since they are very sensitive to pollution, are considered environmental indicators, and the research on bees could give important information about the conditions of soil, air and water. In this paper we propose a real hardware and software solution for apply the internet-of-things concept to bees in order to help beekeepers to improve their business and collect data for research purposes.",
"title": ""
},
{
"docid": "83195a7a81b58fb7c22b1bb1d806eb42",
"text": "We demonstrate high-performance, flexible, transparent heaters based on large-scale graphene films synthesized by chemical vapor deposition on Cu foils. After multiple transfers and chemical doping processes, the graphene films show sheet resistance as low as ∼43 Ohm/sq with ∼89% optical transmittance, which are ideal as low-voltage transparent heaters. Time-dependent temperature profiles and heat distribution analyses show that the performance of graphene-based heaters is superior to that of conventional transparent heaters based on indium tin oxide. In addition, we confirmed that mechanical strain as high as ∼4% did not substantially affect heater performance. Therefore, graphene-based, flexible, transparent heaters are expected to find uses in a broad range of applications, including automobile defogging/deicing systems and heatable smart windows.",
"title": ""
}
] |
scidocsrr
|
fee3cad6a022121bf6b4b82a54c5ac2b
|
An agile boot camp: Using a LEGO®-based active game to ground agile development principles
|
[
{
"docid": "be0ba5b90102aab7cbee08a29333be93",
"text": "Test-driven development (TDD) has been proposed as a solution to improve testing in Industry and in academia. The purpose of this poster is to outline the challenges of teaching a novel Test-First approach in a Level 8 course on Software Testing. Traditionally, introductory programming and software testing courses teach a test-last approach. After the introduction of the Extreme Programming version of AGILE, industry and academia have slowly shifted their focus to the Test-First approach. This poster paper is a pedagogical insight into this shift from the test-last to the test-first approach known as Test Driven Development (TDD).",
"title": ""
}
] |
[
{
"docid": "0406ef30ccc781558480458c225e7716",
"text": "The electrical parameters degradations of lateral double-diffused MOS with multiple floating poly-gate field plates under different stress conditions have been investigated experimentally. For the maximum substrate current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{{\\text {submax}}})$ </tex-math></inline-formula> stress, the increased interface states at the bird’s beak mainly result in an on-resistance (<inline-formula> <tex-math notation=\"LaTeX\">${R}_{ \\mathrm{\\scriptscriptstyle ON}})$ </tex-math></inline-formula> increase at the beginning of the stress, while hot holes injection and trapping into the oxide beneath the edge of real poly-gate turns out to be the dominating degradation mechanism after around 800-s stress, making the <inline-formula> <tex-math notation=\"LaTeX\">${R}_{{ \\mathrm{\\scriptscriptstyle ON}}}$ </tex-math></inline-formula> decrease. For the maximum operating gate voltage (<inline-formula> <tex-math notation=\"LaTeX\">${V}_{{\\text {gmax}}})$ </tex-math></inline-formula> stress, the trapped hot electrons in the channel region bring an increase in threshold voltage (<inline-formula> <tex-math notation=\"LaTeX\">${V}_{{\\text {th}}})$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${R}_{{ \\mathrm{\\scriptscriptstyle ON}}}$ </tex-math></inline-formula>, while the generation of large numbers of interface states at the bird’s beak further dramatically increases the <inline-formula> <tex-math notation=\"LaTeX\">${R}_{{ \\mathrm{\\scriptscriptstyle ON}}}$ </tex-math></inline-formula>. A novel device structurewith a poly-gate partly recessed into the field oxide has been presented to decrease the hot-carrier-induced degradations.",
"title": ""
},
{
"docid": "ef98966f79d5c725b33e227f86e610a2",
"text": "We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train than the popular character input CNN while having a lower number of parameters. We achieve a new state of the art on the WIKITEXT-103 benchmark of 20.51 perplexity, improving the next best known result by 8.7 perplexity. On the BILLION WORD benchmark, we achieve a state of the art of 24.14 perplexity.1",
"title": ""
},
{
"docid": "d4b9d294d60ef001bee3a872b17a75b1",
"text": "Real-time formative assessment of student learning has become the subject of increasing attention. Students' textual responses to short answer questions offer a rich source of data for formative assessment. However, automatically analyzing textual constructed responses poses significant computational challenges, and the difficulty of generating accurate assessments is exacerbated by the disfluencies that occur prominently in elementary students' writing. With robust text analytics, there is the potential to accurately analyze students' text responses and predict students' future success. In this paper, we present WriteEval, a hybrid text analytics method for analyzing student-composed text written in response to constructed response questions. Based on a model integrating a text similarity technique with a semantic analysis technique, WriteEval performs well on responses written by fourth graders in response to short-text science questions. Further, it was found that WriteEval's assessments correlate with summative analyses of student performance.",
"title": ""
},
{
"docid": "3d7fcc8b4715bdf2e54dfab4c989cf29",
"text": "All vertebrates, including humans, obtain most of their daily vitamin D requirement from casual exposure to sunlight. During exposure to sunlight, the solar ultraviolet B photons (290-315 nm) penetrate into the skin where they cause the photolysis of 7-dehydrocholesterol to precholecalciferol. Once formed, precholecalciferol undergoes a thermally induced rearrangement of its double bonds to form cholecalciferol. An increase in skin pigmentation, aging, and the topical application of a sunscreen diminishes the cutaneous production of cholecalciferol. Latitude, season, and time of day as well as ozone pollution in the atmosphere influence the number of solar ultraviolet B photons that reach the earth's surface, and thereby, alter the cutaneous production of cholecalciferol. In Boston, exposure to sunlight during the months of November through February will not produce any significant amounts of cholecalciferol in the skin. Because windowpane glass absorbs ultraviolet B radiation, exposure of sunlight through glass windows will not result in any production of cholecalciferol. It is now recognized that vitamin D insufficiency and vitamin D deficiency are common in elderly people, especially in those who are infirm and not exposed to sunlight or who live at latitudes that do not provide them with sunlight-mediated cholecalciferol during the winter months. Vitamin D insufficiency and deficiency exacerbate osteoporosis, cause osteomalacia, and increase the risk of skeletal fractures. Vitamin D insufficiency and deficiency can be prevented by encouraging responsible exposure to sunlight and/or consumption of a multivitamin tablet that contains 10 micrograms (400 IU) vitamin D.",
"title": ""
},
{
"docid": "9e20e4a12808a7947623cc23d84c9a6f",
"text": "In this paper we will present the new design of TEM double-ridged horn antenna, resulting in a better VSWR and improved gain of antenna. A cavity back and a new technique for tapering the flared section of the TEM horn antenna are introduced to improve the return loss and matching of the impedance, respectively. By tapering the ridges of antenna both laterally and longitudinally it is possible to extend the operating frequency band while decreasing the size of antenna. The proposed antenna is simulated with two commercially available packages, namely Ansoft HFSS and CST microwave studio. Stimulation results for the VSWR, radiation patterns, and gain of the designed TEM horn antenna over the frequency band 2–18 GHz are presented.",
"title": ""
},
{
"docid": "baa71f083831919a067322ab4b268db5",
"text": "– The theoretical analysis gives an overview of the functioning of DDS, especially with respect to noise and spurs. Different spur reduction techniques are studied in detail. Four ICs, which were the circuit implementations of the DDS, were designed. One programmable logic device implementation of the CORDIC based quadrature amplitude modulation (QAM) modulator was designed with a separate D/A converter IC. For the realization of these designs some new building blocks, e.g. a new tunable error feedback structure and a novel and more cost-effective digital power ramp generator, were developed. Implementing a DDS on an FPGA using Xilinx’s ISE software. IndexTerms—CORDIC, DDS, NCO, FPGA, SFDR. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "416a03dba8d76458d07a3e8d9303d4ac",
"text": "We introduce a unified optimization framework for geometry processing based on shape constraints. These constraints preserve or prescribe the shape of subsets of the points of a geometric data set, such as polygons, one-ring cells, volume elements, or feature curves. Our method is based on two key concepts: a shape proximity function and shape projection operators. The proximity function encodes the distance of a desired least-squares fitted elementary target shape to the corresponding vertices of the 3D model. Projection operators are employed to minimize the proximity function by relocating vertices in a minimal way to match the imposed shape constraints. We demonstrate that this approach leads to a simple, robust, and efficient algorithm that allows implementing a variety of geometry processing applications, simply by combining suitable projection operators. We show examples for computing planar and circular meshes, shape space exploration, mesh quality improvement, shape-preserving deformation, and conformal parametrization. Our optimization framework provides a systematic way of building new solvers for geometry processing and produces similar or better results than state-of-the-art methods.",
"title": ""
},
{
"docid": "44e7ba0be5275047587e9afd22f1de2a",
"text": "Dialogue state tracking plays an important role in statistical dialogue management. Domain-independent rule-based approaches are attractive due to their efficiency, portability and interpretability. However, recent rule-based models are still not quite competitive to statistical tracking approaches. In this paper, a novel framework is proposed to formulate rule-based models in a general way. In the framework, a rule is considered as a special kind of polynomial function satisfying certain linear constraints. Under some particular definitions and assumptions, rule-based models can be seen as feasible solutions of an integer linear programming problem. Experiments showed that the proposed approach can not only achieve competitive performance compared to statistical approaches, but also have good generalisation ability. It is one of the only two entries that outperformed all the four baselines in the third Dialog State Tracking Challenge.",
"title": ""
},
{
"docid": "373dfa09c3833d4d497fd79d7b0297cc",
"text": "This paper introduces a novel approach to battery management. In contrast to state-of-the-art solutions where a central Battery Management System (BMS) exists, we propose an Embedded Battery Management (EBM) that entirely decentralizes the monitoring and control of the battery pack. For this purpose, each cell of the pack is equipped with a Cell Management Unit (CMU) that monitors and controls local parameters of the respective cell, using its computational and communication resources. This combination of a battery cell and CMU forms the smart cell. Consequently, system-level functions are performed in a distributed fashion by the network of smart cells, applying concepts of self-organization to enable plug-and-play integration. This decentralized distributed architecture might offer significant advantages over centralized BMSs, resulting in higher modularity, easier integration and shorter time to market for battery packs. A development platform has been set up to design and analyze circuits, protocols and algorithms for EBM enabled by smart cells.",
"title": ""
},
{
"docid": "089c003534670cf6ab296828bf2604a3",
"text": "The development of ultra-low power LSIs is a promising area of research in microelectronics. Such LSIs would be suitable for use in power-aware LSI applications such as portable mobile devices, implantable medical devices, and smart sensor networks [1]. These devices have to operate with ultra-low power, i.e., a few microwatts or less, because they will probably be placed under conditions where they have to get the necessary energy from poor energy sources such as microbatteries or energy scavenging devices [2]. As a step toward such LSIs, we first need to develop voltage and current reference circuits that can operate with an ultra-low current, several tens of nanoamperes or less, i.e., sub-microwatt operation. To achieve such low-power operation, the circuits have to be operated in the subthreshold region, i.e., a region at which the gate-source voltage of MOSFETs is lower than the threshold voltage [3; 4]. Voltage and current reference circuits are important building blocks for analog, digital, and mixed-signal circuit systems in microelectronics, because the performance of these circuits is determined mainly by their bias voltages and currents. The circuits generate a constant reference voltage and current for various other components such as operational amplifiers, comparators, AD/DA converters, oscillators, and PLLs. For this purpose, bandgap reference circuits with CMOS-based vertical bipolar transistors are conventionally used in CMOS LSIs [5; 6]. However, they need resistors with a high resistance of several hundred megaohms to achieve low-current, subthreshold operation. Such a high resistance needs a large area to be implemented, and this makes conventional bandgap references unsuitable for use in ultra-low power LSIs. Therefore, modified voltage and current reference circuits for lowpower LSIs have been reported (see [7]-[12], [14]-[17]). However, these circuits have various problems. For example, their power dissipations are still large, their output voltages and currents are sensitive to supply voltage and temperature variations, and they have complex circuits with many MOSFETs; these problems are inconvenient for practical use in ultra-low power LSIs. Moreover, the effect of process variations on the reference signal has not been discussed in detail. To solve these problems, I and my colleagues reported new voltage and current reference circuits [13; 18] that can operate with sub-microwatt power dissipation and with low sensitivity to temperature and supply voltage. Our circuits consist of subthreshold MOSFET circuits and use no resistors.",
"title": ""
},
{
"docid": "e9698e55abb8cee0f3a5663517bd0037",
"text": "0377-2217/$ see front matter 2008 Elsevier B.V. A doi:10.1016/j.ejor.2008.06.027 * Corresponding author. Tel.: +32 16326817. E-mail address: [email protected] The definition and modeling of customer loyalty have been central issues in customer relationship management since many years. Recent papers propose solutions to detect customers that are becoming less loyal, also called churners. The churner status is then defined as a function of the volume of commercial transactions. In the context of a Belgian retail financial service company, our first contribution is to redefine the notion of customer loyalty by considering it from a customer-centric viewpoint instead of a product-centric one. We hereby use the customer lifetime value (CLV) defined as the discounted value of future marginal earnings, based on the customer’s activity. Hence, a churner is defined as someone whose CLV, thus the related marginal profit, is decreasing. As a second contribution, the loss incurred by the CLV decrease is used to appraise the cost to misclassify a customer by introducing a new loss function. In the empirical study, we compare the accuracy of various classification techniques commonly used in the domain of churn prediction, including two cost-sensitive classifiers. Our final conclusion is that since profit is what really matters in a commercial environment, standard statistical accuracy measures for prediction need to be revised and a more profit oriented focus may be desirable. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fb37da1dc9d95501e08d0a29623acdab",
"text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.",
"title": ""
},
{
"docid": "de83d02f5f120163ed86050ee6962f50",
"text": "Researchers have recently questioned the benefits associated with having high self-esteem. The authors propose that the importance of self-esteem lies more in how people strive for it rather than whether it is high or low. They argue that in domains in which their self-worth is invested, people adopt the goal to validate their abilities and qualities, and hence their self-worth. When people have self-validation goals, they react to threats in these domains in ways that undermine learning; relatedness; autonomy and self-regulation; and over time, mental and physical health. The short-term emotional benefits of pursuing self-esteem are often outweighed by long-term costs. Previous research on self-esteem is reinterpreted in terms of self-esteem striving. Cultural roots of the pursuit of self-esteem are considered. Finally, the alternatives to pursuing self-esteem, and ways of avoiding its costs, are discussed.",
"title": ""
},
{
"docid": "48a476d5100f2783455fabb6aa566eba",
"text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].",
"title": ""
},
{
"docid": "3dcfcaa97fcc1bce04ce515027e64927",
"text": "Abs t rac t . RoboCup is an attempt to foster AI and intelligent robotics research by providing a standard problem where wide range of technologies can be integrated and exaznined. The first R o b o C u p competition was held at IJCAI-97, Nagoya. In order for a robot team to actually perform a soccer game, various technologies must be incorporated including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensorfllsion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup's final target is a world cup with real robots, RoboCup offers a softwaxe platform for research on the software aspects of RoboCup. This paper describes technical chalhmges involw~d in RoboCup, rules, and simulation environment.",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "bceb9f8cc1726017e564c6474618a238",
"text": "The modulators are the basic requirement of communication systems they are designed to reduce the channel distortion & to use in RF communication hence many type of carrier modulation techniques has been already proposed according to channel properties & data rate of the system. QPSK (Quadrature Phase Shift Keying) is one of the modulation schemes used in wireless communication system due to its ability to transmit twice the data rate for a given bandwidth. The QPSK is the most often used scheme since it does not suffer from BER (Bit Error rate) degradation while the bandwidth efficiency is increased. It is very popular in Satellite communication. As the design of complex mathematical models such as QPSK modulator in „pure HDL‟ is very difficult and costly; it requires from designer many additional skills and is time-consuming. To overcome these types of difficulties, the proposed QPSK modulator can be implemented on FPGA by using the concept of hardware co-simulation at Low power. In this process, QPSK modulator is simulated with Xilinx System Generator Simulink software and later on it is converted in Very high speed integrated circuit Hardware Descriptive Language to implement it on FPGA. Along with the co-simulation, power of the proposed QPSK modulator can be minimized than conventional QPSK modulator. As a conclusion, the proposed architecture will not only able to operate on co-simulation platform but at the same time it will significantly consume less operational power.",
"title": ""
},
{
"docid": "77c8a86fba0183e2b9183ba823e9d9cf",
"text": "The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy.",
"title": ""
},
{
"docid": "7d59b5e4523a6ee280b758ad8a55d3eb",
"text": "A feasible pathway to scale germanium (Ge) FETs in future technology nodes has been proposed using the tunable diamond-shaped Ge nanowire (NW). The Ge NW was obtained through a simple top-down dry etching and blanket Ge epitaxy techniques readily available in mass production. The different etching selectivity of surface orientations for Cl2 and HBr was employed for the three-step isotropic/anisotropic/isotropic dry etching. The ratio of Cl2 and HBr, mask width, and Ge recess depth were crucial for forming the nearly defect-free suspended Ge channel through effective removal of dislocations near the Si/Ge interface. This technique could also be applied for forming diamond-shaped Si NWs. The suspended diamond-shaped NW gate-all-around NWFETs feature excellent electrostatics, the favorable {111} surfaces along the (110) direction with high carrier mobility, and the nearly defect-free Ge channel. The pFET with a high ION/IOFF ratio of 6 × 107 and promising nFET performance have been demonstrated successfully.",
"title": ""
},
{
"docid": "71e8c35e0f0b5756d14821622a8d0fc5",
"text": "Classic drugs of abuse lead to specific increases in cerebral functional activity and dopamine release in the shell of the nucleus accumbens (the key neural structure for reward, motivation, and addiction). In contrast, caffeine at doses reflecting daily human consumption does not induce a release of dopamine in the shell of the nucleus accumbens but leads to a release of dopamine in the prefrontal cortex, which is consistent with its reinforcing properties.",
"title": ""
}
] |
scidocsrr
|
eb47e0953346f2a60fb0486508773e87
|
Mobile Cloud Computing: A Comparison of Application Models
|
[
{
"docid": "31f838fb0c7db7e8b58fb1788d5554c8",
"text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.",
"title": ""
}
] |
[
{
"docid": "a52ae731397db5fb56bf6b65882ccc77",
"text": "This paper presents a class@cation of intrusions with respect to technique as well as to result. The taxonomy is intended to be a step on the road to an established taxonomy of intrusions for use in incident reporting, statistics, warning bulletins, intrusion detection systems etc. Unlike previous schemes, it takes the viewpoint of the system owner and should therefore be suitable to a wider community than that of system developers and vendors only. It is based on data from a tzalistic intrusion experiment, a fact that supports the practical applicability of the scheme. The paper also discusses general aspects of classification, and introduces a concept called dimension. After having made a broad survey of previous work in thejield, we decided to base our classification of intrusion techniques on a scheme proposed by Neumann and Parker in I989 and to further refine relevant parts of their scheme. Our classification of intrusion results is derived from the traditional three aspects of computer security: confidentiality, availability and integrity.",
"title": ""
},
{
"docid": "7c1be047bbb4fe3f988aaccfd0add70f",
"text": "We reviewed scientific literature pertaining to known and putative disease agents associated with the lone star tick, Amblyomma americanum. Reports in the literature concerning the role of the lone star tick in the transmission of pathogens of human and animal diseases have sometimes been unclear and even contradictory. This overview has indicated that A. americanum is involved in the ecology of several disease agents of humans and other animals, and the role of this tick as a vector of these diseases ranges from incidental to significant. Probably the clearest relationship is that of Ehrlichia chaffeensis and A. americanum. Also, there is a definite association between A. americanum and tularemia, as well as between the lone star tick and Theileria cervi to white-tailed deer. Evidence of Babesia cervi (= odocoilei) being transmitted to deer by A. americanum is largely circumstantial at this time. The role of A. americanum in cases of southern tick-associated rash illness (STARI) is currently a subject of intensive investigations with important implications. The lone star tick has been historically reported to be a vector of Rocky Mountain spotted fever rickettsiae, but current opinions are to the contrary. Evidence incriminated A. americanum as the vector of Bullis fever in the 1940s, but the disease apparently has disappeared. Q fever virus has been found in unfed A. americanum, but the vector potential, if any, is poorly understood at this time. Typhus fever and toxoplasmosis have been studied in the lone star tick, and several non-pathogenic organisms have been recovered. Implications of these tick-disease relationships are discussed.",
"title": ""
},
{
"docid": "e3e8ef3239fb6a7565a177cbceb1bee8",
"text": "A large number of studies analyse object detection and pose estimation at visual level in 2D, discussing the effects of challenges such as occlusion, clutter, texture, etc., on the performances of the methods, which work in the context of RGB modality. Interpreting the depth data, the study in this paper presents thorough multi-modal analyses. It discusses the above-mentioned challenges for full 6D object pose estimation in RGB-D images comparing the performances of several 6D detectors in order to answer the following questions: What is the current position of the computer vision community for maintaining “automation” in robotic manipulation? What next steps should the community take for improving “autonomy” in robotics while handling objects? Direct comparison of the detectors is difficult, since they are tested on multiple datasets with different characteristics and are evaluated using widely varying evaluation protocols. To deal with these issues, we follow a threefold strategy: five representative object datasets, mainly differing from the point of challenges that they involve, are collected. Then, two classes of detectors are tested on the collected datasets. Lastly, the baselines’ performances are evaluated using two different evaluation metrics under uniform scoring criteria. Regarding the experiments conducted, we analyse our observations on the baselines along with the challenges involved in the interested datasets, and we suggest a number of insights for the next steps to be taken, for improving the autonomy in robotics.",
"title": ""
},
{
"docid": "9eb0d79f9c13f30f53fb7214b337880d",
"text": "Many real world problems can be solved with Artificial Neural Networks in the areas of pattern recognition, signal processing and medical diagnosis. Most of the medical data set is seldom complete. Artificial Neural Networks require complete set of data for an accurate classification. This paper dwells on the various missing value techniques to improve the classification accuracy. The proposed system also investigates the impact on preprocessing during the classification. A classifier was applied to Pima Indian Diabetes Dataset and the results were improved tremendously when using certain combination of preprocessing techniques. The experimental system achieves an excellent classification accuracy of 99% which is best than before.",
"title": ""
},
{
"docid": "7de29b042513aaf1a3b12e71bee6a338",
"text": "The widespread use of deception in online sources has motivated the need for methods to automatically profile and identify deceivers. This work explores deception, gender and age detection in short texts using a machine learning approach. First, we collect a new open domain deception dataset also containing demographic data such as gender and age. Second, we extract feature sets including n-grams, shallow and deep syntactic features, semantic features, and syntactic complexity and readability metrics. Third, we build classifiers that aim to predict deception, gender, and age. Our findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task. We further explore the linguistic differences in deceptive content that relate to deceivers gender and age and find evidence that both age and gender play an important role in people’s word choices when fabricating lies.",
"title": ""
},
{
"docid": "64de73be55c4b594934b0d1bd6f47183",
"text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.",
"title": ""
},
{
"docid": "93afb696fa395a7f7c2a4f3fc2ac690d",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
{
"docid": "f7d30db4b04b33676d386953aebf503c",
"text": "Microvascular free flap transfer currently represents one of the most popular methods for mandibularreconstruction. With the various free flap options nowavailable, there is a general consensus that no single kindof osseous or osteocutaneous flap can resolve the entire spectrum of mandibular defects. A suitable flap, therefore, should be selected according to the specific type of bone and soft tissue defect. We have developed an algorithm for mandibular reconstruction, in which the bony defect is termed as either “lateral” or “anterior” and the soft-tissue defect is classified as “none,” “skin or mucosal,” or “through-and-through.” For proper flap selection, the bony defect condition should be considered first, followed by the soft-tissue defect condition. When the bony defect is “lateral” and the soft tissue is not defective, the ilium is the best choice. When the bony defect is “lateral” and a small “skin or mucosal” soft-tissue defect is present, the fibula represents the optimal choice. When the bony defect is “lateral” and an extensive “skin or mucosal” or “through-and-through” soft-tissue defect exists, the scapula should be selected. When the bony defect is “anterior,” the fibula should always be selected. However, when an “anterior” bone defect also displays an “extensive” or “through-and-through” soft-tissue defect, the fibula should be usedwith other soft-tissue flaps. Flaps such as a forearm flap, anterior thigh flap, or rectus abdominis musculocutaneous flap are suitable, depending on the size of the soft-tissue defect.",
"title": ""
},
{
"docid": "130efef512294d14094a900693efebfd",
"text": "Metaphor comprehension involves an interaction between the meaning of the topic and the vehicle terms of the metaphor. Meaning is represented by vectors in a high-dimensional semantic space. Predication modifies the topic vector by merging it with selected features of the vehicle vector. The resulting metaphor vector can be evaluated by comparing it with known landmarks in the semantic space. Thus, metaphorical prediction is treated in the present model in exactly the same way as literal predication. Some experimental results concerning metaphor comprehension are simulated within this framework, such as the nonreversibility of metaphors, priming of metaphors with literal statements, and priming of literal statements with metaphors.",
"title": ""
},
{
"docid": "3c8ac7bd31d133b4d43c0d3a0f08e842",
"text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.",
"title": ""
},
{
"docid": "34913781debe37f36befc853d57eba0c",
"text": "Michael R. Benjamin Naval Undersea Warfare Center, Newport, Rhode Island 02841, and Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: [email protected] Henrik Schmidt Department of Mechanical Engineering, Laboratory for Autonomous Marine Sensing Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: [email protected] Paul M. Newman Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom e-mail: [email protected] John J. Leonard Department of Mechanical Engineering, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: [email protected]",
"title": ""
},
{
"docid": "a9e30e02bcbac0f117820d21bf9941da",
"text": "The question of how identity is affected when diagnosed with dementia is explored in this capstone thesis. With the rise of dementia diagnoses (Goldstein-Levitas, 2016) there is a need for understanding effective approaches to care as emotional components remain intact. The literature highlights the essence of personhood and how person-centered care (PCC) is essential to preventing isolation and impacting a sense of self and well-being (Killick, 2004). Meeting spiritual needs in the sense of hope and purpose may also improve quality of life and delay symptoms. Dance/movement therapy (DMT) is specifically highlighted as an effective approach as sessions incorporate the components to physically, emotionally, and spiritually stimulate the individual with dementia. A DMT intervention was developed and implemented at an assisted living facility in the Boston area within a specific unit dedicated to the care of residents who had a primary diagnosis of mild to severe dementia. A Chacian framework is used with sensory stimulation techniques to address physiological needs. Results indicated positive experiences from observations and merited the need to conduct more research to credit DMT’s effectiveness with geriatric populations.",
"title": ""
},
{
"docid": "e171be9168fc94527980e767742555d3",
"text": "OBJECTIVE\nRelatively minor abusive injuries can precede severe physical abuse in infants. Our objective was to determine how often abused infants have a previous history of \"sentinel\" injuries, compared with infants who were not abused.\n\n\nMETHODS\nCase-control, retrospective study of 401, <12-month-old infants evaluated for abuse in a hospital-based setting and found to have definite, intermediate concern for, or no abuse after evaluation by the hospital-based Child Protection Team. A sentinel injury was defined as a previous injury reported in the medical history that was suspicious for abuse because the infant could not cruise, or the explanation was implausible.\n\n\nRESULTS\nOf the 200 definitely abused infants, 27.5% had a previous sentinel injury compared with 8% of the 100 infants with intermediate concern for abuse (odds ratio: 4.4, 95% confidence interval: 2.0-9.6; P < .001). None of the 101 nonabused infants (controls) had a previous sentinel injury (P < .001). The type of sentinel injury in the definitely abused cohort was bruising (80%), intraoral injury (11%), and other injury (7%). Sentinel injuries occurred in early infancy: 66% at <3 months of age and 95% at or before the age of 7 months. Medical providers were reportedly aware of the sentinel injury in 41.9% of cases.\n\n\nCONCLUSIONS\nPrevious sentinel injuries are common in infants with severe physical abuse and rare in infants evaluated for abuse and found to not be abused. Detection of sentinel injuries with appropriate interventions could prevent many cases of abuse.",
"title": ""
},
{
"docid": "a753be5a5f81ae77bfcb997a2748d723",
"text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.",
"title": ""
},
{
"docid": "eb5208a4793fa5c5723b20da0421af26",
"text": "High-level synthesis promises a significant shortening of the FPGA design cycle when compared with design entry using register transfer level (RTL) languages. Recent evaluations report that C-to-RTL flows can produce results with a quality close to hand-crafted designs [1]. Algorithms which use dynamic, pointer-based data structures, which are common in software, remain difficult to implement well. In this paper, we describe a comparative case study using Xilinx Vivado HLS as an exemplary state-of-the-art high-level synthesis tool. Our test cases are two alternative algorithms for the same compute-intensive machine learning technique (clustering) with significantly different computational properties. We compare a data-flow centric implementation to a recursive tree traversal implementation which incorporates complex data-dependent control flow and makes use of pointer-linked data structures and dynamic memory allocation. The outcome of this case study is twofold: We confirm similar performance between the hand-written and automatically generated RTL designs for the first test case. The second case reveals a degradation in latency by a factor greater than 30× if the source code is not altered prior to high-level synthesis. We identify the reasons for this shortcoming and present code transformations that narrow the performance gap to a factor of four. We generalise our source-to-source transformations whose automation motivates research directions to improve high-level synthesis of dynamic data structures in the future.",
"title": ""
},
{
"docid": "39d4375dd9b8353241482bff577ee812",
"text": "Cellulose constitutes the most abundant renewable polymer resource available today. As a chemical raw material, it is generally well-known that it has been used in the form of fibers or derivatives for nearly 150 years for a wide spectrum of products and materials in daily life. What has not been known until relatively recently is that when cellulose fibers are subjected to acid hydrolysis, the fibers yield defect-free, rod-like crystalline residues. Cellulose nanocrystals (CNs) have garnered in the materials community a tremendous level of attention that does not appear to be relenting. These biopolymeric assemblies warrant such attention not only because of their unsurpassed quintessential physical and chemical properties (as will become evident in the review) but also because of their inherent renewability and sustainability in addition to their abundance. They have been the subject of a wide array of research efforts as reinforcing agents in nanocomposites due to their low cost, availability, renewability, light weight, nanoscale dimension, and unique morphology. Indeed, CNs are the fundamental constitutive polymeric motifs of macroscopic cellulosic-based fibers whose sheer volume dwarfs any known natural or synthetic biomaterial. Biopolymers such as cellulose and lignin and † North Carolina State University. ‡ Helsinki University of Technology. Dr. Youssef Habibi is a research assistant professor at the Department of Forest Biomaterials at North Carolina State University. He received his Ph.D. in 2004 in organic chemistry from Joseph Fourier University (Grenoble, France) jointly with CERMAV (Centre de Recherche sur les Macromolécules Végétales) and Cadi Ayyad University (Marrakesh, Morocco). During his Ph.D., he worked on the structural characterization of cell wall polysaccharides and also performed surface chemical modification, mainly TEMPO-mediated oxidation, of crystalline polysaccharides, as well as their nanocrystals. Prior to joining NCSU, he worked as assistant professor at the French Engineering School of Paper, Printing and Biomaterials (PAGORA, Grenoble Institute of Technology, France) on the development of biodegradable nanocomposites based on nanocrystalline polysaccharides. He also spent two years as postdoctoral fellow at the French Institute for Agricultural Research, INRA, where he developed new nanostructured thin films based on cellulose nanowiskers. Dr. Habibi’s research interests include the sustainable production of materials from biomass, development of high performance nanocomposites from lignocellulosic materials, biomass conversion technologies, and the application of novel analytical tools in biomass research. Chem. Rev. 2010, 110, 3479–3500 3479",
"title": ""
},
{
"docid": "77af12d87cd5827f35d92968d1888162",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "46b5f32b9f08dd5d1fbe2d6c2fe532ee",
"text": "As more recombinant human proteins become available on the market, the incidence of immunogenicity problems is rising. The antibodies formed against a therapeutic protein can result in serious clinical effects, such as loss of efficacy and neutralization of the endogenous protein with essential biological functions. Here we review the literature on the relations between the immunogenicity of the therapeutic proteins and their structural properties. The mechanisms by which protein therapeutics can induce antibodies as well as the models used to study immunogenicity are discussed. Examples of how the chemical structure (including amino acid sequence, glycosylation, and pegylation) can influence the incidence and level of antibody formation are given. Moreover, it is shown that physical degradation (especially aggregation) of the proteins as well as chemical decomposition (e.g., oxidation) may enhance the immune response. To what extent the presence of degradation products in protein formulations influences their immunogenicity still needs further investigation. Immunization of transgenic animals, tolerant for the human protein, with well-defined, artificially prepared degradation products of therapeutic proteins may shed more light on the structure-immunogenicity relationships of recombinant human proteins.",
"title": ""
},
{
"docid": "22cb0a390087efcb9fa2048c74e9845f",
"text": "This paper describes the early conception and latest developments of electroactive polymer (EAP)-based sensors, actuators, electronic components, and power sources, implemented as wearable devices for smart electronic textiles (e-textiles). Such textiles, functioning as multifunctional wearable human interfaces, are today considered relevant promoters of progress and useful tools in several biomedical fields, such as biomonitoring, rehabilitation, and telemedicine. After a brief outline on ongoing research and the first products on e-textiles under commercial development, this paper presents the most highly performing EAP-based devices developed by our lab and other research groups for sensing, actuation, electronics, and energy generation/storage, with reference to their already demonstrated or potential applicability to electronic textiles",
"title": ""
},
{
"docid": "d6e093ecc3325fcdd2e29b0b961b9b21",
"text": "[Context and motivation] Natural language is the main representation means of industrial requirements documents, which implies that requirements documents are inherently ambiguous. There exist guidelines for ambiguity detection, such as the Ambiguity Handbook [1]. In order to detect ambiguities according to the existing guidelines, it is necessary to train analysts. [Question/problem] Although ambiguity detection guidelines were extensively discussed in literature, ambiguity detection has not been automated yet. Automation of ambiguity detection is one of the goals of the presented paper. More precisely, the approach and tool presented in this paper have three goals: (1) to automate ambiguity detection, (2) to make plausible for the analyst that ambiguities detected by the tool represent genuine problems of the analyzed document, and (3) to educate the analyst by explaining the sources of the detected ambiguities. [Principal ideas/results] The presented tool provides reliable ambiguity detection, in the sense that it detects four times as many genuine ambiguities as than an average human analyst. Furthermore, the tool offers high precision ambiguity detection and does not present too many false positives to the human analyst. [Contribution] The presented tool is able both to detect the ambiguities and to explain ambiguity sources. Thus, besides pure ambiguity detection, it can be used to educate analysts, too. Furthermore, it provides a significant potential for considerable time and cost savings and at the same time quality improvements in the industrial requirements engineering.",
"title": ""
}
] |
scidocsrr
|
faf711062699daf00fac5ffac48e9e17
|
Exploring the role of customer relationship management (CRM) systems in customer knowledge creation
|
[
{
"docid": "de3aee8ca694d59eb0ef340b3b1c8161",
"text": "In recent years, organisations have begun to realise the importance of knowing their customers better. Customer relationship management (CRM) is an approach to managing customer related knowledge of increasing strategic significance. The successful adoption of IT-enabled CRM redefines the traditional models of interaction between businesses and their customers, both nationally and globally. It is regarded as a source for competitive advantage because it enables organisations to explore and use knowledge of their customers and to foster profitable and long-lasting one-to-one relationships. This paper discusses the results of an exploratory survey conducted in the UK financial services sector; it discusses CRM practice and expectations, the motives for implementing it, and evaluates post-implementation experiences. It also investigates the CRM tools functionality in the strategic, process, communication, and business-to-customer (B2C) organisational context and reports the extent of their use. The results show that despite the anticipated potential, the benefits from such tools are rather small. # 2004 Published by Elsevier B.V.",
"title": ""
}
] |
[
{
"docid": "ca20f416a3809a0a06d76d08697bcc4b",
"text": "BACKGROUND\nManual labor in the Agriculture, Forestry, and Fishing (AgFF) Sector is provided primarily by immigrant workers. Limited information is available that documents the demographic characteristics of these manual workers, the occupational illnesses, injuries and fatalities they experience; or the risk factors to which they are exposed.\n\n\nMETHODS\nA working conference of experts on occupational health in the AgFF Sector was held to address information limitations. This paper provides an overview of the conference. Other reports address organization of work, health outcomes, healthcare access, and safety policy.\n\n\nCONTENTS\nThis report addresses how best to define the population and the AgFF Sector, occupational exposures for the sector, data limitations, characteristics of immigrant workers, reasons for concern for immigrant workers in the AgFF Sector, regulations, a conceptual model for occupational health, and directions for research and intervention.",
"title": ""
},
{
"docid": "6d394ccc32b958d5ffbd34856b1bace4",
"text": "Interferometric synthetic aperture radar (InSAR) correlation, a measure of the similarity of two radar echoes, provides a quantitative measure of surface and subsurface scattering properties and hence surface composition and structure. Correlation is observed by comparing the radar return across several nearby radar image pixels, but estimates of correlation are biased by finite data sample size and any underlying interferometer fringe pattern. We present a method for correcting bias in InSAR correlation measurements resulting in significantly more accurate estimates, so that inverse models of surface properties are more useful. We demonstrate the value of the approach using data collected over Antarctica by the Radarsat spacecraft.",
"title": ""
},
{
"docid": "65209c3ce517aa7cdcdb3a7106ffe9f2",
"text": "This paper presents first results of the Networking and Cryptography library (NaCl) on the 8-bit AVR family of microcontrollers. We show that NaCl, which has so far been optimized mainly for different desktop and server platforms, is feasible on resource-constrained devices while being very fast and memory efficient. Our implementation shows that encryption using Salsa20 requires 268 cycles/byte, authentication using Poly1305 needs 195 cycles/byte, a Curve25519 scalar multiplication needs 22 791 579 cycles, signing of data using Ed25519 needs 23 216 241 cycles, and verification can be done within 32 634 713 cycles. All implemented primitives provide at least 128-bit security, run in constant time, do not use secret-data-dependent branch conditions, and are open to the public domain (no usage restrictions).",
"title": ""
},
{
"docid": "be8cfa012ffba4ee8017c3e299a88fb0",
"text": "The present study examined (1) the impact of a brief substance use intervention on delay discounting and indices of substance reward value (RV), and (2) whether baseline values and posttreatment change in these behavioral economic variables predict substance use outcomes. Participants were 97 heavy drinking college students (58.8% female, 41.2% male) who completed a brief motivational intervention (BMI) and then were randomized to one of two conditions: a supplemental behavioral economic intervention that attempted to increase engagement in substance-free activities associated with delayed rewards (SFAS) or an Education control (EDU). Demand intensity, and Omax, decreased and elasticity significantly increased after treatment, but there was no effect for condition. Both baseline values and change in RV, but not discounting, predicted substance use outcomes at 6-month follow-up. Students with high RV who used marijuana were more likely to reduce their use after the SFAS intervention. These results suggest that brief interventions may reduce substance reward value, and that changes in reward value are associated with subsequent drinking and drug use reductions. High RV marijuana users may benefit from intervention elements that enhance future time orientation and substance-free activity participation.",
"title": ""
},
{
"docid": "2dd9bb2536fdc5e040544d09fe3dd4fa",
"text": "Low 1/f noise, low-dropout (LDO) regulators are becoming critical for the supply regulation of deep-submicron analog baseband and RF system-on-chip designs. A low-noise, high accuracy LDO regulator (LN-LDO) utilizing a chopper stabilized error amplifier is presented. In order to achieve fast response during load transients, a current-mode feedback amplifier (CFA) is designed as a second stage driving the regulation FET. In order to reduce clock feed-through and 1/f noise accumulation at the chopping frequency, a first-order digital SigmaDelta noise-shaper is used for chopping clock spectral spreading. With up to 1 MHz noise-shaped modulation clock, the LN-LDO achieves a noise spectral density of 32 nV/radic(Hz) and a PSR of 38 dB at 100 kHz. The proposed LDO is shown to reduce the phase noise of an integrated 32 MHz temperature compensated crystal oscillator (TCXO) at 10 kHz offset by 15 dB. Due to reduced 1/f noise requirements, the error amplifier silicon area is reduced by 75%, and the overall regulator area is reduced by 50% with respect to an equivalent noise static regulator. The current-mode feedback second stage buffer reduces regulator settling time by 60% in comparison to an equivalent power consumption voltage mode buffer, achieving 0.6 mus settling time for a 25-mA load step. The LN-LDO is designed and fabricated on a 0.25 mum CMOS process with five layers of metal, occupying 0.88 mm2.",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "212848b1cd0c8e72ff64ac87e0a3805a",
"text": "INTRODUCTION\nSmartphones changed the method by which doctors communicate with each other, offer modern functionalities sensitive to the context of use, and can represent a valuable ally in the healthcare system. Studies have shown that WhatsApp™ application can facilitate communication within the healthcare team and provide the attending physician a constant oversight of activities performed by junior team members. The aim of the study was to use WhatsApp between two distant surgical teams involved in a program of elective surgery to verify if it facilitates communication, enhances learning, and improves patient care preserving their privacy.\n\n\nMETHODS\nWe conducted a focused group of surgeons over a 28-month period (from March 2013 to July 2015), and from September 2014 to July 2015, a group of selected specialists communicated healthcare matters through the newly founded \"WhatsApp Surgery Group.\" Each patient enrolled in the study signed a consent form to let the team communicate his/her clinical data using WhatsApp. Communication between team members, response times, and types of messages were evaluated.\n\n\nRESULTS\nForty six (n = 46) patients were enrolled in the study. A total of 1,053 images were used with an average of 78 images for each patient (range 41-143). 125 h of communication were recorded, generating 354 communication events. The expert surgeon had received the highest number of questions (P, 0.001), while the residents asked clinical questions (P, 0.001) and were the fastest responders to communications (P, 0.001).\n\n\nCONCLUSION\nOur study investigated how two distant clinical teams may exploit such a communication system and quantifies both the direction and type of communication between surgeons. WhatsApp is a low cost, secure, and fast technology and it offers the opportunity to facilitate clinical and nonclinical communications, enhance learning, and improve patient care preserving their privacy.",
"title": ""
},
{
"docid": "2586eaf8556ead1c085165569f9936b2",
"text": "SQL injection attack poses a serious security threats among the Internet community nowadays and it's continue to increase exploiting flaws found in the Web applications. In SQL injection attack, the attackers can take advantage of poorly coded web application software to introduce malicious code into the system and/or could retrieve important information. Web applications are under siege from cyber criminals seeking to steal confidential information and disable or damage the services offered by these application. Therefore, additional steps must be taken to ensure data security and integrity of the applications. In this paper we propose an innovative solution to filter the SQL injection attack using SNORT IDS. The proposed detection technique uses SNORT tool by augmenting a number of additional SNORT rules. We evaluate the proposed solution by comparing our method with several existing techniques. Experimental results demonstrate that the proposed method outperforms other similar techniques using the same data set.",
"title": ""
},
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "1c80fdc30b2b37443367dae187fbb376",
"text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.",
"title": ""
},
{
"docid": "3dcfd937b9c1ae8ccc04c6a8a99c71f5",
"text": "Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM ) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with experienced users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level α = 1% (Section 4.3). We develop very effective detection tools and reach average F-score of 97% in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible.",
"title": ""
},
{
"docid": "5596f6d7ebe828f4d6f5ab4d94131b1d",
"text": "A successful quality model is indispensable in a rich variety of multimedia applications, e.g., image classification and video summarization. Conventional approaches have developed many features to assess media quality at both low-level and high-level. However, they cannot reflect the process of human visual cortex in media perception. It is generally accepted that an ideal quality model should be biologically plausible, i.e., capable of mimicking human gaze shifting as well as the complicated visual cognition. In this paper, we propose a biologically inspired quality model, focusing on interpreting how humans perceive visually and semantically important regions in an image (or a video clip). Particularly, we first extract local descriptors (graphlets in this work) from an image/frame. They are projected onto the perceptual space, which is built upon a set of low-level and high-level visual features. Then, an active learning algorithm is utilized to select graphlets that are both visually and semantically salient. The algorithm is based on the observation that each graphlet can be linearly reconstructed by its surrounding ones, and spatially nearer ones make a greater contribution. In this way, both the local and global geometric properties of an image/frame can be encoded in the selection process. These selected graphlets are linked into a so-called biological viewing path (BVP) to simulate human visual perception. Finally, the quality of an image or a video clip is predicted by a probabilistic model. Experiments shown that 1) the predicted BVPs are over 90% consistent with real human gaze shifting paths on average; and 2) our quality model outperforms many of its competitors remarkably.",
"title": ""
},
{
"docid": "a1774a08ffefd28785fbf3a8f4fc8830",
"text": "Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a
nite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning. 1 Introduction Rademacher averages have been introduced to learning theory as an e¢ cient complexity measure for function classes, motivated by tight, sample or distribution dependent generalization bounds ([10], [2]). Both the de
nition of Rademacher complexity and the generalization bounds extend easily from realvalued function classes to function classes with values in R, as they are relevant to multi-task learning ([1], [12]). There has been an increasing interest in multi-task learning which has shown to be very e¤ective in experiments ([7], [1]), and there have been some general studies of its generalisation performance ([4], [5]). For a large collection of tasks there are usually more data available than for a single task and these data may be put to a coherent use by some constraint of relatedness. A practically interesting case is linear multi-task learning, extending linear large margin classi
ers to vector valued large-margin classi
ers. Di¤erent types of constraints have been proposed: Evgeniou et al ([8], [9]) propose graph regularization, where the vectors de
ning the classi
ers of related tasks have to be near each other. They also show that their scheme can be implemented in the framework of kernel machines. Ando and Zhang [1] on the other hand require the classi
ers to be members of a common low dimensional subspace. They also give generalization bounds using Rademacher complexity, but these bounds increase with the dimension of the input space. This paper gives dimension free bounds which apply to both approaches. 1.1 Multi-task generalization and Rademacher complexity Suppose we have m classi
cation tasks, represented by m independent random variables X ; Y l taking values in X f 1; 1g, where X l models the random",
"title": ""
},
{
"docid": "ecb93affc7c9b0e4bf86949d3f2006d4",
"text": "We present data-dependent learning bounds for the general scenario of non-stationary nonmixing stochastic processes. Our learning guarantees are expressed in terms of a datadependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We also also provide novel analysis of stable time series forecasting algorithm using this new notion of discrepancy that we introduce. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).",
"title": ""
},
{
"docid": "f1d69b033490ed8c4eec7b476e9b7c08",
"text": "Performance-based measures of emotional intelligence (EI) are more likely than measures based on self-report to assess EI as a construct distinct from personality. A multivariate investigation was conducted with the performance-based, Multi-Factor Emotional Intelligence Scale (MEIS; J. D. Mayer, D. Caruso, & P. Salovey, 1999). Participants (N = 704) also completed the Trait Self-Description Inventory (TSDI, a measure of the Big Five personality factors; Christal, 1994; R. D. Roberts et al.), and the Armed Services Vocational Aptitude Battery (ASVAB, a measure of intelligence). Results were equivocal. Although the MEIS showed convergent validity (correlating moderately with the ASVAB) and divergent validity (correlating minimally with the TSDI), different scoring protocols (i.e., expert and consensus) yielded contradictory findings. Analyses of factor structure and subscale reliability identified further measurement problems. Overall, it is questionable whether the MEIS operationalizes EI as a reliable and valid construct.",
"title": ""
},
{
"docid": "e48b39ce7d5b9cc55dcf7d80ca00d4cd",
"text": "To efficiently extract local and global features in face description and recognition, a pyramid-based multi-scale LBP approach is proposed. Firstly, the face image pyramid is constructed through multi-scale analysis. Then the LBP operator is applied to each level of the image pyramid to extract facial features under various scales. Finally, all the extracted features are concatenated into an enhanced feature vector which is used as the face descriptor. Experimental results on ORL and FERET face databases show that the proposed LBP representation is highly efficient with good performance in face recognition and is robust to illumination, facial expression and position variation.",
"title": ""
},
{
"docid": "3b5216dfbd7b12cf282311d645b10a38",
"text": "3D CAD systems are used in product design for simultaneous engineering and to improve productivity. CAD tools can substantially enhance design performance. Although 3D CAD is a widely used and highly effective tool in mechanical design, mastery of CAD skills is complex and time-consuming. The concepts of parametric–associative models and systems are powerful tools whose efficiency is proportional to the complexity of their implementation. The availability of a framework for actions that can be taken to improve CAD efficiency can therefore be highly beneficial. Today, a clear and structured approach does not exist in this way for CAD methodology deployment. The novelty of this work is therefore to propose a general strategy for utilizing the advantages of parametric CAD in the automotive industry in the form of a roadmap. The main stages of the roadmap are illustrated by means of industrial use cases. The first results of his research are discussed and suggestions for future work are given. © 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "901924cc7e0e6177ac6727a183abc808",
"text": "In this paper we tackle the problem of document image retrieval by combining a similarity measure between documents and the probability that a given document belongs to a certain class. The membership probability to a specific class is computed using Support Vector Machines in conjunction with similarity measure based kernel applied to structural document representations. In the presented experiments, we use different document representations, both visual and structural, and we apply them to a database of historical documents. We show how our method based on similarity kernels outperforms the usual distance-based retrieval.",
"title": ""
},
{
"docid": "cd450942c0acc63d0018e3662a1d69ba",
"text": "By fractionating conditioned medium (CM) from Drosophila imaginal disc cell cultures, we have identified a family of Imaginal Disc Growth Factors (IDGFs), which are the first polypeptide growth factors to be reported from invertebrates. The active fraction from CM, as well as recombinant IDGFs, cooperate with insulin to stimulate the proliferation, polarization and motility of imaginal disc cells. The IDGF family in Drosophila includes at least five members, three of which are encoded by three genes in a tight cluster. The proteins are structurally related to chitinases, but they show an amino acid substitution that is known to abrogate catalytic activity. It therefore seems likely that they have evolved from chitinases but acquired a new growth-promoting function. The IDGF genes are expressed most strongly in the embryonic yolk cells and in the fat body of the embryo and larva. The predicted molecular structure, expression patterns, and mitogenic activity of these proteins suggest that they are secreted and transported to target tissues via the hemolymph. However, the genes are also expressed in embryonic epithelia in association with invagination movements, so the proteins may have local as well as systemic functions. Similar proteins are found in mammals and may constitute a novel class of growth factors.",
"title": ""
},
{
"docid": "48a0e75b97fdaa734f033c6b7791e81f",
"text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.",
"title": ""
}
] |
scidocsrr
|
412eae9cfb6e5bad7fe0025120546655
|
Relative distance measurement between moving vehicles for manless driving
|
[
{
"docid": "f36826993d5a9f99fc3554b5f542780e",
"text": "In this research, an adaptive timely traffic light is proposed as solution for congestion in typical area in Indonesia. Makassar City, particularly in the most complex junction (fly over, Pettarani, Reformasi highway and Urip S.) is observed for months using static cameras. The condition is mapped into fuzzy logic to have a better time transition of traffic light as opposed to the current conventional traffic light system. In preliminary result, fuzzy logic shows significant number of potential reduced in congestion. Each traffic line has 20-30% less congestion with future implementation of the proposed system.",
"title": ""
}
] |
[
{
"docid": "025f3fa2b4ddc50c0f40f4b3c2429524",
"text": "Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a diagnosis may have life-threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine-learning algorithm and even abandon the entire system but also such a false positive classification may cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.",
"title": ""
},
{
"docid": "296da9be6a4b3c6d111f875157e196c8",
"text": "Histopathology image analysis is a gold standard for cancer recognition and diagnosis. Automatic analysis of histopathology images can help pathologists diagnose tumor and cancer subtypes, alleviating the workload of pathologists. There are two basic types of tasks in digital histopathology image analysis: image classification and image segmentation. Typical problems with histopathology images that hamper automatic analysis include complex clinical representations, limited quantities of training images in a dataset, and the extremely large size of singular images (usually up to gigapixels). The property of extremely large size for a single image also makes a histopathology image dataset be considered large-scale, even if the number of images in the dataset is limited. In this paper, we propose leveraging deep convolutional neural network (CNN) activation features to perform classification, segmentation and visualization in large-scale tissue histopathology images. Our framework transfers features extracted from CNNs trained by a large natural image database, ImageNet, to histopathology images. We also explore the characteristics of CNN features by visualizing the response of individual neuron components in the last hidden layer. Some of these characteristics reveal biological insights that have been verified by pathologists. According to our experiments, the framework proposed has shown state-of-the-art performance on a brain tumor dataset from the MICCAI 2014 Brain Tumor Digital Pathology Challenge and a colon cancer histopathology image dataset. The framework proposed is a simple, efficient and effective system for histopathology image automatic analysis. We successfully transfer ImageNet knowledge as deep convolutional activation features to the classification and segmentation of histopathology images with little training data. CNN features are significantly more powerful than expert-designed features.",
"title": ""
},
{
"docid": "ae6a02ee18e3599c65fb9db22706de44",
"text": "We use a hierarchical Bayesian approach to model user preferences in different contexts or settings. Unlike many previous recommenders, our approach is content-based. We assume that for each context, a user has a different set of preference weights which are linked by a common, “generic context” set of weights. The approach uses Expectation Maximization (EM) to estimate both the generic context weights and the context specific weights. This improves upon many current recommender systems that do not incorporate context into the recommendations they provide. In this paper, we show that by considering contextual information, we can improve our recommendations, demonstrating that it is useful to consider context in giving ratings. Because the approach does not rely on connecting users via collaborative filtering, users are able to interpret contexts in different ways and invent their own",
"title": ""
},
{
"docid": "97a6a77cfa356636e11e02ffe6fc0121",
"text": "© 2019 Muhammad Burhan Hafez et al., published by De Gruyter. This work is licensed under the Creative CommonsAttribution-NonCommercial-NoDerivs4.0License. Paladyn, J. Behav. Robot. 2019; 10:14–29 Research Article Open Access Muhammad Burhan Hafez*, Cornelius Weber, Matthias Kerzel, and Stefan Wermter Deep intrinsically motivated continuous actor-critic for eflcient robotic visuomotor skill learning https://doi.org/10.1515/pjbr-2019-0005 Received June 6, 2018; accepted October 29, 2018 Abstract: In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input. Our neural architecture is composed of a critic and an actor network. Both networks receive the hidden representation of a deep convolutional autoencoder which is trained to reconstruct the visual input, while the centre-most hidden representation is also optimized to estimate the state value. Separately, an ensemble of predictive world models generates, based on its learning progress, an intrinsic reward signal which is combined with the extrinsic reward to guide the exploration of the actor-critic learner. Our approach is more data-efficient and inherently more stable than the existing actor-critic methods for continuous control from pixel data. We evaluate our algorithm for the task of learning robotic reaching and grasping skills on a realistic physics simulator and on a humanoid robot. The results show that the control policies learnedwith our approach can achieve better performance than the compared state-of-the-art and baseline algorithms in both dense-reward and challenging sparse-reward settings.",
"title": ""
},
{
"docid": "7514deb49197a5078b1cf9f8f789eee9",
"text": "The phrase table is considered to be the main bilingual resource for the phrase-based statistical machine translation (PBSMT) model. During translation, a source sentence is decomposed into several phrases. The best match of each source phrase is selected among several target-side counterparts within the phrase table, and processed by the decoder to generate a sentence-level translation. The best match is chosen according to several factors, including a set of bilingual features. PBSMT engines by default provide four probability scores in phrase tables which are considered as the main set of bilingual features. Our goal is to enrich that set of features, as a better feature set should yield better translations. We propose new scores generated by a Convolutional Neural Network (CNN) which indicate the semantic relatedness of phrase pairs. We evaluate our model in different experimental settings with different language pairs. We observe significant improvements when the proposed features are incorporated into the PBSMT pipeline.",
"title": ""
},
{
"docid": "5ad82270c7bc78434d1d7630d2cb7aae",
"text": "Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level and model temporal context with 3D ConvNets. Here, we go one step further and model spatio-temporal relations to capture the interactions between human actors, relevant objects and scene elements essential to differentiate similar human actions. Our approach is weakly supervised and mines the relevant elements automatically with an actor-centric relational network (ACRN). ACRN computes and accumulates pair-wise relation information from actor and global scene features, and generates relation features for action classification. It is implemented as neural networks and can be trained jointly with an existing action detection system. We show that ACRN outperforms alternative approaches which capture relation information, and that the proposed framework improves upon the state-ofthe-art performance on JHMDB and AVA. A visualization of the learned relation features confirms that our approach is able to attend to the relevant relations for each action.",
"title": ""
},
{
"docid": "70a73dad03925580cdc3a7ef069f6f3a",
"text": "Recently, there has been a great attention to develop feature selection methods on the microarray high dimensional datasets. In this paper, an innovative method based on Maximum Relevancy and Minimum Redundancy (MRMR) approach by using Hesitant Fuzzy Sets (HFSs) is proposed to deal with feature subset selection; the method is called MRMR-HFS. MRMR-HFS is a novel filterbased feature selection algorithm that selects features by ensemble of ranking algorithms (as the measure of feature-class relevancy that must be maximized) and similarity measures (as the measure of feature-feature redundancy that must be minimized). The combination of ranking algorithms and similarity measures are done by using the fundamental concepts of information energies of HFSs. The proposed method has been inspired from Correlation based Feature Selection (CFS) within the sequential forward search in order to present a robust feature selection tool to solve high dimensional problems. To evaluate the effectiveness of the MRMR-HFS, several experimental results are carried out on nine well-known microarray high dimensional datasets. The obtained results are compared with those of other similar state-of-the-art algorithms including Correlation-based Feature Selection (CFS), Fast Correlation-based Filter (FCBF), Intract (INT), and Maximum Relevancy Minimum Redundancy (MRMR). The outcomes of comparison carried out via some non-parametric statistical tests confirm that the MRMR-HFS is effective for feature subset selection in high dimensional datasets in terms of accuracy, sensitivity, specificity, G-mean, and number of selected features.",
"title": ""
},
{
"docid": "98b908b6d1cddb4290b6c09e482a7745",
"text": "Systems for automated image analysis are useful for a variety of tasks and their importance is still growing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks namely the initial segmentation (object detection), the object tracking and the object classification are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information. Keywords— Driver Assistance, Machine Vision, Data",
"title": ""
},
{
"docid": "19806d18233e149b091790d220e5181b",
"text": "In this work we propose Pixel Content Encoders (PCE), a lightweight image inpainting model, capable of generating novel content for large missing regions in images. Unlike previously presented convolutional neural network based models, our PCE model has an order of magnitude fewer trainable parameters. Moreover, by incorporating dilated convolutions we are able to preserve fine grained spatial information, achieving state-of-the-art performance on benchmark datasets of natural images and paintings. Besides image inpainting, we show that without changing the architecture, PCE can be used for image extrapolation, generating novel content beyond existing image boundaries.",
"title": ""
},
{
"docid": "3372d21fc8ab17ad735c6d471c98d792",
"text": "Psychopathology is increasingly viewed from a circuit perspective in which a disorder stems not from circumscribed anomalies in discrete brain regions, but rather from impairments in distributed neural networks. This focus on neural circuitry has rendered resting state functional connectivity MRI (rs-fcMRI) an increasingly important role in the elucidation of pathophysiology including attention-deficit/hyperactivity disorder (ADHD). Unlike many other MRI techniques that focus on the properties of discrete brain regions, rs-fcMRI measures the coherence of neural activity across anatomically disparate brain regions, examining the connectivity and organization of neural circuits. In this review, we explore the methods available to investigators using rs-fcMRI techniques, including a discussion of their relative merits and limitations. We then review findings from extant rs-fcMRI studies of ADHD focusing on neural circuits implicated in the disorder, especially the default mode network, cognitive control network, and cortico-striato-thalamo-cortical loops. We conclude by suggesting future directions that may help advance subsequent rs-fcMRI research in ADHD.",
"title": ""
},
{
"docid": "1da4596bcfbf46684981595dea7f6e80",
"text": "Consciousness results from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism's current state. We contrast the semantic pointer competition (SPC) theory of consciousness with the hypothesis that consciousness is the capacity of a system to integrate information (IIT). We describe computer simulations to show that SPC surpasses IIT in providing better explanations of key aspects of consciousness: qualitative features, onset and cessation, shifts in experiences, differences in kinds across different organisms, unity and diversity, and storage and retrieval.",
"title": ""
},
{
"docid": "ca2cc9e21fd1aacc345238c1d609bedf",
"text": "The aim of the present study was to evaluate the long-term effect of implants installed in different dental areas in adolescents. The sample consisted of 18 subjects with missing teeth (congenital absence or trauma). The patients were of different chronological ages (between 13 and 17 years) and of different skeletal maturation. In all subjects, the existing permanent teeth were fully erupted. In 15 patients, 29 single implants (using the Brånemark technique) were installed to replace premolars, canines, and upper incisors. In three patients with extensive aplasia, 18 implants were placed in various regions. The patients were followed during a 10-year period, the first four years annually and then every second year. Photographs, study casts, peri-apical radiographs, lateral cephalograms, and body height measurements were recorded at each control. The results show that dental implants are a good treatment option for replacing missing teeth in adolescents, provided that the subject's dental and skeletal development is complete. However, different problems are related to the premolar and the incisor regions, which have to be considered in the total treatment planning. Disadvantages may be related to the upper incisor region, especially for lateral incisors, due to slight continuous eruption of adjacent teeth and craniofacial changes post-adolescence. Periodontal problems may arise, with marginal bone loss around the adjacent teeth and bone loss buccally to the implants. The shorter the distance between the implant and the adjacent teeth, the larger the reduction of marginal bone level. Before placement of the implant sufficient space must be gained in the implant area, and the adjacent teeth uprighted and paralleled, even in the apical area, using non-intrusive movements. In the premolar area, excess space is needed, not only in the mesio-distal, but above all in the bucco-lingual direction. Thus, an infraoccluded lower deciduous molar should be extracted shortly before placement of the implant to avoid reduction of the bucco-lingual bone volume. Oral rehabilitation with implant-supported prosthetic constructions seems to be a good alternative in adolescents with extensive aplasia, provided that craniofacial growth has ceased or is almost complete.",
"title": ""
},
{
"docid": "e99eceb3072dc2798071fe9d65d30c3a",
"text": "With the vast availability of traffic sensors from which traffic information can be derived, a lot of research effort has been devoted to developing traffic prediction techniques, which in turn improve route navigation, traffic regulation, urban area planning, etc. One key challenge in traffic prediction is how much to rely on prediction models that are constructed using historical data in real-time traffic situations, which may differ from that of the historical data and change over time. In this paper, we propose a novel online framework that could learn from the current traffic situation (or context) in real-time and predict the future traffic by matching the current situation to the most effective prediction model trained using historical data. As real-time traffic arrives, the traffic context space is adaptively partitioned in order to efficiently estimate the effectiveness of each base predictor in different situations. We obtain and prove both short-term and long-term performance guarantees (bounds) for our online algorithm. The proposed algorithm also works effectively in scenarios where the true labels (i.e., realized traffic) are missing or become available with delay. Using the proposed framework, the context dimension that is the most relevant to traffic prediction can also be revealed, which can further reduce the implementation complexity as well as inform traffic policy making. Our experiments with real-world data in real-life conditions show that the proposed approach significantly outperforms existing solutions.",
"title": ""
},
{
"docid": "af495aaae51ead951246733d088a2a47",
"text": "In this paper, we present a novel parallel implementation for training Gradient Boosting Decision Trees (GBDTs) on Graphics Processing Units (GPUs). Thanks to the wide use of the open sourced XGBoost library, GBDTs have become very popular in recent years and won many awards in machine learning and data mining competitions. Although GPUs have demonstrated their success in accelerating many machine learning applications, there are a series of key challenges of developing a GPU-based GBDT algorithm, including irregular memory accesses, many small sorting operations and varying data parallel granularities in tree construction. To tackle these challenges on GPUs, we propose various novel techniques (including Run-length Encoding compression and thread/block workload dynamic allocation, and reusing intermediate training results for efficient gradient computation). Our experimental results show that our algorithm named GPU-GBDT is often 10 to 20 times faster than the sequential version of XGBoost, and achieves 1.5 to 2 times speedup over a 40 threaded XGBoost running on a relatively high-end workstation of 20 CPU cores. Moreover, GPU-GBDT outperforms its CPU counterpart by 2 to 3 times in terms of performance-price ratio.",
"title": ""
},
{
"docid": "1e0a4246c81896c3fd5175bc10065460",
"text": "Automatic modulation recognition (AMR) is becoming more important because it is usable in advanced general-purpose communication such as, cognitive radio, as well as, specific applications. Therefore, developments should be made for widely used modulation types; machine learning techniques should be employed for this problem. In this study, we have evaluated performances of different machine learning algorithms for AMR. Specifically, we have evaluated performances of artificial neural networks, support vector machines, random forest tree, k-nearest neighbor, Hoeffding tree, logistic regression, Naive Bayes and Gradient Boosted Regression Tree methods to obtain comparative results. The most preferred feature extraction methods in the literature have been used for a set of modulation types for general-purpose communication. We have considered AWGN and Rayleigh channel models evaluating their recognition performance as well as having made recognition performance improvement over Rayleigh for low SNR values using the reception diversity technique. We have compared their recognition performance in the accuracy metric, and plotted them as well. Furthermore, we have served confusion matrices for some particular experiments.",
"title": ""
},
{
"docid": "5259c7d1c7b05050596f6667aa262e11",
"text": "We propose a novel approach to automatic detection and tracking of people taking different poses in cluttered and dynamic environments using a single RGB-D camera. The original RGB-D pixels are transformed to a novel point ensemble image (PEI), and we demonstrate that human detection and tracking in 3D space can be performed very effectively with this new representation. The detector in the first phase quickly locates human physiquewise plausible candidates, which are then further carefully filtered in a supervised learning and classification second phase. Joint statistics of color and height are computed for data association to generate final 3D motion trajectories of tracked individuals. Qualitative and quantitative experimental results obtained on the publicly available office dataset, mobile camera dataset and the real-world clothing store dataset we created show very promising results. © 2014 Elsevier B.V. All rights reserved. d T b r a e w c t e i c a i c p p g w e h",
"title": ""
},
{
"docid": "249a8a783987588e364fd93f788794c4",
"text": "We present a novel approach to learning taxonomic relations between terms by considering multiple and heterogeneous sources of vidence. In order to derive an optimal combination of these sources, we exploit a machine-learning approach, representing all the sources of evidence as first-or der features and training standard classifiers. We consider in particular different f a ures derived from WordNet, an approach matching Hearst-style patterns in a corpus and on the Web as well as further methods mentioned in the literature. In particul ar, we explore different classifiers as well as various strategies for dealing with un balanced datasets. We evaluate our approach by comparing the results with a refere nc taxonomy for the tourism domain.",
"title": ""
},
{
"docid": "3b9b49f8c2773497f8e05bff4a594207",
"text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD–a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD) respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2%[email protected] on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.",
"title": ""
},
{
"docid": "fd786ae1792e559352c75940d84600af",
"text": "In this paper, we obtain an (1 − e−1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n) function value computations. c © 2003 Published by Elsevier B.V.",
"title": ""
}
] |
scidocsrr
|
214d3555055146bd6209a393b734d2d6
|
Stress and multitasking in everyday college life: an empirical study of online activity
|
[
{
"docid": "ed34383cada585951e1dcc62445d08c2",
"text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.",
"title": ""
}
] |
[
{
"docid": "fe0587c51c4992aa03f28b18f610232f",
"text": "We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\\frac{1}{4} \\log_2 N$ bits of P.",
"title": ""
},
{
"docid": "124fa48e1e842f2068a8fb55a2b8bb8e",
"text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.",
"title": ""
},
{
"docid": "5339554b6f753b69b5ace705af0263cd",
"text": "We explore several oversampling techniques for an imbalanced multi-label classification problem, a setting often encountered when developing models for Computer-Aided Diagnosis (CADx) systems. While most CADx systems aim to optimize classifiers for overall accuracy without considering the relative distribution of each class, we look into using synthetic sampling to increase perclass performance when predicting the degree of malignancy. Using low-level image features and a random forest classifier, we show that using synthetic oversampling techniques increases the sensitivity of the minority classes by an average of 7.22% points, with as much as a 19.88% point increase in sensitivity for a particular minority class. Furthermore, the analysis of low-level image feature distributions for the synthetic nodules reveals that these nodules can provide insights on how to preprocess image data for better classification performance or how to supplement the original datasets when more data acquisition is feasible.",
"title": ""
},
{
"docid": "8183fe0c103e2ddcab5b35549ed8629f",
"text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.",
"title": ""
},
{
"docid": "25a7f23c146add12bfab3f1fc497a065",
"text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).",
"title": ""
},
{
"docid": "f9580093dcf61a9d6905265cfb3a0d32",
"text": "The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytic to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance.",
"title": ""
},
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "7b681d1f200c0281beb161b71e6a3604",
"text": "Data quality remains a persistent problem in practice and a challenge for research. In this study we focus on the four dimensions of data quality noted as the most important to information consumers, namely accuracy, completeness, consistency, and timeliness. These dimensions are of particular concern for operational systems, and most importantly for data warehouses, which are often used as the primary data source for analyses such as classification, a general type of data mining. However, the definitions and conceptual models of these dimensions have not been collectively considered with respect to data mining in general or classification in particular. Nor have they been considered for problem complexity. Conversely, these four dimensions of data quality have only been indirectly addressed by data mining research. Using definitions and constructs of data quality dimensions, our research evaluates the effects of both data quality and problem complexity on generated data and tests the results in a real-world case. Six different classification outcomes selected from the spectrum of classification algorithms show that data quality and problem complexity have significant main and interaction effects. From the findings of significant effects, the economics of higher data quality are evaluated for a frequent application of classification and illustrated by the real-world case.",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "02eccb2c0aeae243bf2023b25850890f",
"text": "In order to meet performance goals, it is widely agreed that vehicular ad hoc networks (VANETs) must rely heavily on node-to-node communication, thus allowing for malicious data traffic. At the same time, the easy access to information afforded by VANETs potentially enables the difficult security goal of data validation. We propose a general approach to evaluating the validity of VANET data. In our approach a node searches for possible explanations for the data it has collected based on the fact that malicious nodes may be present. Explanations that are consistent with the node's model of the VANET are scored and the node accepts the data as dictated by the highest scoring explanations. Our techniques for generating and scoring explanations rely on two assumptions: 1) nodes can tell \"at least some\" other nodes apart from one another and 2) a parsimony argument accurately reflects adversarial behavior in a VANET. We justify both assumptions and demonstrate our approach on specific VANETs.",
"title": ""
},
{
"docid": "c166ae2b9085cc4769438b1ca8ac8ee0",
"text": "Texts in web pages, images and videos contain important clues for information indexing and retrieval. Most existing text extraction methods depend on the language type and text appearance. In this paper, a novel and universal method of image text extraction is proposed. A coarse-to-fine text location method is implemented. Firstly, a multi-scale approach is adopted to locate texts with different font sizes. Secondly, projection profiles are used in location refinement step. Color-based k-means clustering is adopted in text segmentation. Compared to grayscale image which is used in most existing methods, color image is more suitable for segmentation based on clustering. It treats corner-points, edge-points and other points equally so that it solves the problem of handling multilingual text. It is demonstrated in experimental results that best performance is obtained when k is 3. Comparative experimental results on a large number of images show that our method is accurate and robust in various conditions.",
"title": ""
},
{
"docid": "77437d225dcc535fdbe5a7e66e15f240",
"text": "We are interested in automatic scene understanding from geometric cues. To this end, we aim to bring semantic segmentation in the loop of real-time reconstruction. Our semantic segmentation is built on a deep autoencoder stack trained exclusively on synthetic depth data generated from our novel 3D scene library, SynthCam3D. Importantly, our network is able to segment real world scenes without any noise modelling. We present encouraging preliminary results.",
"title": ""
},
{
"docid": "eb8fd891a197e5a028f1ca5eaf3988a3",
"text": "Information-centric networking (ICN) replaces the widely used host-centric networking paradigm in communication networks (e.g., Internet and mobile ad hoc networks) with an information-centric paradigm, which prioritizes the delivery of named content, oblivious of the contents’ origin. Content and client security, provenance, and identity privacy are intrinsic by design in the ICN paradigm as opposed to the current host centric paradigm where they have been instrumented as an after-thought. However, given its nascency, the ICN paradigm has several open security and privacy concerns. In this paper, we survey the existing literature in security and privacy in ICN and present open questions. More specifically, we explore three broad areas: 1) security threats; 2) privacy risks; and 3) access control enforcement mechanisms. We present the underlying principle of the existing works, discuss the drawbacks of the proposed approaches, and explore potential future research directions. In security, we review attack scenarios, such as denial of service, cache pollution, and content poisoning. In privacy, we discuss user privacy and anonymity, name and signature privacy, and content privacy. ICN’s feature of ubiquitous caching introduces a major challenge for access control enforcement that requires special attention. We review existing access control mechanisms including encryption-based, attribute-based, session-based, and proxy re-encryption-based access control schemes. We conclude the survey with lessons learned and scope for future work.",
"title": ""
},
{
"docid": "aed264522ed7ee1d3559fe4863760986",
"text": "A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center. KeywordsSensor Networks; Clustering Methods; Voronoi Tessellations; Algorithms.",
"title": ""
},
{
"docid": "d269ebe2bc6ab4dcaaac3f603037b846",
"text": "The contribution of power production by photovoltaic (PV) systems to the electricity supply is constantly increasing. An efficient use of the fluctuating solar power production will highly benefit from forecast information on the expected power production. This forecast information is necessary for the management of the electricity grids and for solar energy trading. This paper presents an approach to predict regional PV power output based on forecasts up to three days ahead provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Focus of the paper is the description and evaluation of the approach of irradiance forecasting, which is the basis for PV power prediction. One day-ahead irradiance forecasts for single stations in Germany show a rRMSE of 36%. For regional forecasts, forecast accuracy is increasing in dependency on the size of the region. For the complete area of Germany, the rRMSE amounts to 13%. Besides the forecast accuracy, also the specification of the forecast uncertainty is an important issue for an effective application. We present and evaluate an approach to derive weather specific prediction intervals for irradiance forecasts. The accuracy of PV power prediction is investigated in a case study.",
"title": ""
},
{
"docid": "101fbbe7760c3961f11da7f1e080e5f7",
"text": "Probiotic ingestion can be recommended as a preventative approach to maintaining the balance of the intestinal microflora and thereby enhance 'well-being'. Research into the use of probiotic intervention in specific illnesses and disorders has identified certain patient populations that may benefit from the approach. Undoubtedly, probiotics will vary in their efficacy and it may not be the case that the same results occur with all species. Those that prove most efficient will likely be strains that are robust enough to survive the harsh physico-chemical conditions present in the gastrointestinal tract. This includes gastric acid, bile secretions and competition with the resident microflora. A survey of the literature indicates positive results in over fifty human trials, with prevention/treatment of infections the most frequently reported output. In theory, increased levels of probiotics may induce a 'barrier' influence against common pathogens. Mechanisms of effect are likely to include the excretion of acids (lactate, acetate), competition for nutrients and gut receptor sites, immunomodulation and the formation of specific antimicrobial agents. As such, persons susceptible to diarrhoeal infections may benefit greatly from probiotic intake. On a more chronic basis, it has been suggested that some probiotics can help maintain remission in the inflammatory conditions, ulcerative colitis and pouchitis. They have also been suggested to repress enzymes responsible for genotoxin formation. Moreover, studies have suggested that probiotics are as effective as anti-spasmodic drugs in the alleviation of irritable bowel syndrome. The approach of modulating the gut flora for improved health has much relevance for the management of those with acute and chronic gut disorders. Other target groups could include those susceptible to nosocomial infections, as well as the elderly, who have an altered microflora, with a decreased number of beneficial microbial species. For the future, it is imperative that mechanistic interactions involved in probiotic supplementation be identified. Moreover, the survival issues associated with their establishment in the competitive gut ecosystem should be addressed. Here, the use of prebiotics in association with useful probiotics may be a worthwhile approach. A prebiotic is a dietary carbohydrate selectively metabolised by probiotics. Combinations of probiotics and prebiotics are known as synbiotics.",
"title": ""
},
{
"docid": "a2082f1b4154cd11e94eff18a016e91e",
"text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.",
"title": ""
},
{
"docid": "1406e39d95505da3d7ab2b5c74c2e068",
"text": "Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies that relate requirements prioritization classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 Primary studies were selected from the search processes. Out of these studies; 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However , it was generally discovered that, existing prioritization techniques suffer from a number of limitations which includes: lack of scalability, methods of dealing with rank updates during requirements evolution, coordination among stakeholders and requirements dependency issues. Also, the applicability of existing techniques in complex and real setting has not been reported yet.",
"title": ""
},
{
"docid": "0d93bf1b3b891a625daa987652ca1964",
"text": "In this paper, we show that a continuous spectrum of randomis ation exists, in which most existing tree randomisations are only operating around the tw o ends of the spectrum. That leaves a huge part of the spectrum largely unexplored. We propose a ba se le rner VR-Tree which generates trees with variable-randomness. VR-Trees are able to span f rom the conventional deterministic trees to the complete-random trees using a probabilistic pa rameter. Using VR-Trees as the base models, we explore the entire spectrum of randomised ensemb les, together with Bagging and Random Subspace. We discover that the two halves of the spectrum have their distinct characteristics; and the understanding of which allows us to propose a new appr o ch in building better decision tree ensembles. We name this approach Coalescence, which co ales es a number of points in the random-half of the spectrum. Coalescence acts as a committe e of “ xperts” to cater for unforeseeable conditions presented in training data. Coalescence is found to perform better than any single operating point in the spectrum, without the need to tune to a specific level of randomness. In our empirical study, Coalescence ranks top among the benchm arking ensemble methods including Random Forests, Random Subspace and C5 Boosting; and only Co alescence is significantly better than Bagging and Max-Diverse Ensemble among all the methods in the comparison. Although Coalescence is not significantly better than Random Forests , we have identified conditions under which one will perform better than the other.",
"title": ""
},
{
"docid": "a972fb96613715b1d17ac69fdd86c115",
"text": "Saliency detection has been widely studied to predict human fixations, with various applications in computer vision and image processing. For saliency detection, we argue in this paper that the state-of-the-art High Efficiency Video Coding (HEVC) standard can be used to generate the useful features in compressed domain. Therefore, this paper proposes to learn the video saliency model, with regard to HEVC features. First, we establish an eye tracking database for video saliency detection, which can be downloaded from https://github.com/remega/video_database. Through the statistical analysis on our eye tracking database, we find out that human fixations tend to fall into the regions with large-valued HEVC features on splitting depth, bit allocation, and motion vector (MV). In addition, three observations are obtained with the further analysis on our eye tracking database. Accordingly, several features in HEVC domain are proposed on the basis of splitting depth, bit allocation, and MV. Next, a kind of support vector machine is learned to integrate those HEVC features together, for video saliency detection. Since almost all video data are stored in the compressed form, our method is able to avoid both the computational cost on decoding and the storage cost on raw data. More importantly, experimental results show that the proposed method is superior to other state-of-the-art saliency detection methods, either in compressed or uncompressed domain.",
"title": ""
}
] |
scidocsrr
|
92a4cd0463da8ba8b11b8ddc5e4576c6
|
Project management and IT governance. Integrating PRINCE2 and ISO 38500
|
[
{
"docid": "70b9aad14b2fc75dccab0dd98b3d8814",
"text": "This paper describes the first phase of an ongoing program of research into theory and practice of IT governance. It conceptually explores existing IT governance literature and reveals diverse definitions of IT governance, that acknowledge its structures, control frameworks and/or processes. The definitions applied within the literature and the nature and breadth of discussion demonstrate a lack of a clear shared understanding of the term IT governance. This lack of clarity has the potential to confuse and possibly impede useful research in the field and limit valid cross-study comparisons of results. Using a content analysis approach, a number of existing diverse definitions are moulded into a \"definitive\" definition of IT governance and its usefulness is critically examined. It is hoped that this exercise will heighten awareness of the \"broad reach\" of the IT governance concept to assist researchers in the development of research projects and more effectively guide practitioners in the overall assessment of IT governance.",
"title": ""
},
{
"docid": "2eff84064f1d9d183eddc7e048efa8e6",
"text": "Rupinder Kaur, Dr. Jyotsna Sengupta Abstract— The software process model consists of a set of activities undertaken to design, develop and maintain software systems. A variety of software process models have been designed to structure, describe and prescribe the software development process. The software process models play a very important role in software development, so it forms the core of the software product. Software project failure is often devastating to an organization. Schedule slips, buggy releases and missing features can mean the end of the project or even financial ruin for a company. Oddly, there is disagreement over what it means for a project to fail. In this paper, discussion is done on current process models and analysis on failure of software development, which shows the need of new research.",
"title": ""
}
] |
[
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "9b1a4e27c5d387ef091fdb9140eb8795",
"text": "In this study I investigated the relation between normal heterosexual attraction and autogynephilia (a man's propensity to be sexually aroused by the thought or image of himself as a woman). The subjects were 427 adult male outpatients who reported histories of dressing in women's garments, of feeling like women, or both. The data were questionnaire measures of autogynephilia, heterosexual interest, and other psychosexual variables. As predicted, the highest levels of autogynephilia were observed at intermediate rather than high levels of heterosexual interest; that is, the function relating these variables took the form of an inverted U. This finding supports the hypothesis that autogynephilia is a misdirected type of heterosexual impulse, which arises in association with normal heterosexuality but also competes with it.",
"title": ""
},
{
"docid": "c3c3add0c42f3b98962c4682a72b1865",
"text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.",
"title": ""
},
{
"docid": "3e5041c6883ce6ab59234ed2c8c995b7",
"text": "Self-amputation of the penis treated immediately: case report and review of the literature. Self-amputation of the penis is rare in urological practice. It occurs more often in a context psychotic disease. It can also be secondary to alcohol or drugs abuse. Treatment and care vary according on the severity of the injury, the delay of consultation and the patient's mental state. The authors report a case of self-amputation of the penis in an alcoholic context. The authors analyze the etiological and urological aspects of this trauma.",
"title": ""
},
{
"docid": "1fd51acb02bafb3ea8f5678581a873a4",
"text": "How often has this scenario happened? You are driving at night behind a car that has bright light-emitting diode (LED) taillights. When looking directly at the taillights, the light is not blurry, but when glancing at other objects, a trail of lights appears, known as a phantom array. The reason for this trail of lights might not be what you expected: it is not due to glare, degradation of eyesight, or astigmatism. The culprit may be the flickering of the LED lights caused by pulse-width modulating (PWM) drive circuitry. Actually, many LED taillights flicker on and off at frequencies between 200 and 500 Hz, which is too fast to notice when the eye is not in rapid motion. However, during a rapid eye movement (saccade), the images of the LED lights appear in different positions on the retina, causing a trail of images to be perceived (Figure 1). This disturbance of vision may not occur with all LED taillights because some taillights keep a constant current through the LEDs. However, when there is a PWM current through the LEDs, the biological effect of the light flicker may become noticeable during the eye saccade.",
"title": ""
},
{
"docid": "c60957f1bf90450eb947d2b0ab346ffb",
"text": "Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.",
"title": ""
},
{
"docid": "f25c0b1fef38b7322197d61dd5dcac41",
"text": "Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide and one of the few malignancies with an increasing incidence in the USA. While the relationship between HCC and its inciting risk factors (e.g., hepatitis B, hepatitis C and alcohol liver disease) is well defined, driving genetic alterations are still yet to be identified. Clinically, HCC tends to be hypervascular and, for that reason, transarterial chemoembolization has proven to be effective in managing many patients with localized disease. More recently, angiogenesis has been targeted effectively with pharmacologic strategies, including monoclonal antibodies against VEGF and the VEGF receptor, as well as small-molecule kinase inhibitors of the VEGF receptor. Targeting angiogenesis with these approaches has been validated in several different solid tumors since the initial approval of bevacizumab for advanced colon cancer in 2004. In HCC, only sorafenib has been shown to extend survival in patients with advanced HCC and has opened the door for other anti-angiogenic strategies. Here, we will review the data supporting the targeting of the VEGF axis in HCC and the preclinical and early clinical development of bevacizumab.",
"title": ""
},
{
"docid": "291a1927343797d72f50134b97f73d88",
"text": "This paper proposes a half-rate single-loop reference-less binary CDR that operates from 8.5 Gb/s to 12.1 Gb/s (36% capture range). The high capture range is made possible by adding a novel frequency detection mechanism which limits the magnitude of the phase error between the input data and the VCO clock. The proposed frequency detector produces three phases of the data, and feeds into the phase detector the data phase that minimizes the CDR phase error. This frequency detector, implemented within a 10 Gb/s CDR in Fujitsu's 65 nm CMOS, consumes 11 mW and improves the capture range by up to 6 × when it is activated.",
"title": ""
},
{
"docid": "a6c3a4dfd33eb902f5338f7b8c7f78e5",
"text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.",
"title": ""
},
{
"docid": "a2b3cdf440dd6aa139ea51865d8f81cc",
"text": "Hyperspectral image (HSI) classification is a hot topic in the remote sensing community. This paper proposes a new framework of spectral-spatial feature extraction for HSI classification, in which for the first time the concept of deep learning is introduced. Specifically, the model of autoencoder is exploited in our framework to extract various kinds of features. First we verify the eligibility of autoencoder by following classical spectral information based classification and use autoencoders with different depth to classify hyperspectral image. Further in the proposed framework, we combine PCA on spectral dimension and autoencoder on the other two spatial dimensions to extract spectral-spatial information for classification. The experimental results show that this framework achieves the highest classification accuracy among all methods, and outperforms classical classifiers such as SVM and PCA-based SVM.",
"title": ""
},
{
"docid": "0d7586e443f265015beed6f8bdc15def",
"text": "With the rapid growth of E-Commerce on the Internet, online product search service has emerged as a popular and effective paradigm for customers to find desired products and select transactions. Most product search engines today are based on adaptations of relevance models devised for information retrieval. However, there is still a big gap between the mechanism of finding products that customers really desire to purchase and that of retrieving products of high relevance to customers' query. In this paper, we address this problem by proposing a new ranking framework for enhancing product search based on dynamic best-selling prediction in E-Commerce. Specifically, we first develop an effective algorithm to predict the dynamic best-selling, i.e. the volume of sales, for each product item based on its transaction history. By incorporating such best-selling prediction with relevance, we propose a new ranking model for product search, in which we rank higher the product items that are not only relevant to the customer's need but with higher probability to be purchased by the customer. Results of a large scale evaluation, conducted over the dataset from a commercial product search engine, demonstrate that our new ranking method is more effective for locating those product items that customers really desire to buy at higher rank positions without hurting the search relevance.",
"title": ""
},
{
"docid": "8bea1f9e107cfcebc080bc62d7ac600d",
"text": "The introduction of wireless transmissions into the data center has shown to be promising in improving cost effectiveness of data center networks DCNs. For high transmission flexibility and performance, a fundamental challenge is to increase the wireless availability and enable fully hybrid and seamless transmissions over both wired and wireless DCN components. Rather than limiting the number of wireless radios by the size of top-of-rack switches, we propose a novel DCN architecture, Diamond, which nests the wired DCN with radios equipped on all servers. To harvest the gain allowed by the rich reconfigurable wireless resources, we propose the low-cost deployment of scalable 3-D ring reflection spaces RRSs which are interconnected with streamlined wired herringbone to enable large number of concurrent wireless transmissions through high-performance multi-reflection of radio signals over metal. To increase the number of concurrent wireless transmissions within each RRS, we propose a precise reflection method to reduce the wireless interference. We build a 60-GHz-based testbed to demonstrate the function and transmission ability of our proposed architecture. We further perform extensive simulations to show the significant performance gain of diamond, in supporting up to five times higher server-to-server capacity, enabling network-wide load balancing, and ensuring high fault tolerance.",
"title": ""
},
{
"docid": "fec16344f8b726b9d232423424c101d3",
"text": "A triboelectric separator manufactured by PlasSep, Ltd., Canada was evaluated at MBA Polymers, Inc. as part of a project sponsored by the American Plastics Council (APC) to explore the potential of triboelectric methods for separating commingled plastics from end-oflife durables. The separator works on a very simple principle: that dissimilar materials will transfer electrical charge to one another when rubbed together, the resulting surface charge differences can then be used to separate these dissimilar materials from one another in an electric field. Various commingled plastics were tested under controlled operating conditions. The feed materials tested include commingled plastics derived from electronic shredder residue (ESR), automobile shredder residue (ASR), refrigerator liners, and water bottle plastics. The separation of ESR ABS and HIPS, and water bottle PC and PVC were very promising. However, this device did not efficiently separate many plastic mixtures, such as rubber and plastics; nylon and acetal; and PE and PP from ASR. All tests were carried out based on the standard operating conditions determined for ESR ABS and HIPS. There is the potential to improve the separation performance for many of the feed materials by individually optimizing their operating conditions. Cursory economics shows that the operation cost is very dependent upon assumed throughput, separation efficiency and requisite purity. Unit operation cost could range from $0.03/lb. to $0.05/lb. at capacities of 2000 lb./hr. and 1000 lb./hr.",
"title": ""
},
{
"docid": "532ded1b0cc25a21464996a15a976125",
"text": "Folded-plate structures provide an efficient design using thin laminated veneer lumber panels. Inspired by Japanese furniture joinery, the multiple tab-and-slot joint was developed for the multi-assembly of timber panels with non-parallel edges without adhesive or metal joints. Because the global analysis of our origami structures reveals that the rotational stiffness at ridges affects the global behaviour, we propose an experimental and numerical study of this linear interlocking connection. Its geometry is governed by three angles that orient the contact faces. Nine combinations of these angles were tested and the rotational slip was measured with two different bending set-ups: closing or opening the fold formed by two panels. The non-linear behaviour was conjointly reproduced numerically using the finite element method and continuum damage mechanics.",
"title": ""
},
{
"docid": "d83853692581644f3a86ad0e846c48d2",
"text": "This paper investigates cyber security issues with automatic dependent surveillance broadcast (ADS-B) based air traffic control. Before wide-scale deployment in civil aviation, any airborne or ground-based technology must be ensured to have no adverse impact on safe and profitable system operations, both under normal conditions and failures. With ADS-B, there is a lack of a clear understanding about vulnerabilities, how they can impact airworthiness and what failure conditions they can potentially induce. The proposed work streamlines a threat assessment methodology for security evaluation of ADS-B based surveillance. To the best of our knowledge, this work is the first to identify the need for mechanisms to secure ADS-B based airborne surveillance and propose a security solution. This paper presents preliminary findings and results of the ongoing investigation.12",
"title": ""
},
{
"docid": "1a5189a09df624d496b83470eed4cfb6",
"text": "Vol. 24, No. 1, 2012 103 Received January 5, 2011, Revised March 9, 2011, Accepted for publication April 6, 2011 Corresponding author: Gyong Moon Kim, M.D., Department of Dermatology, St. Vincent Hospital, College of Medicine, The Catholic University of Korea, 93-6 Ji-dong, Paldal-gu, Suwon 442-723, Korea. Tel: 82-31-249-7465, Fax: 82-31-253-8927, E-mail: gyongmoonkim@ catholic.ac.kr This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http:// creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Ann Dermatol Vol. 24, No. 1, 2012 http://dx.doi.org/10.5021/ad.2012.24.1.103",
"title": ""
},
{
"docid": "9973de0dc30f8e8f7234819163a15db2",
"text": "Jennifer L. Docktor, Natalie E. Strand, José P. Mestre, and Brian H. Ross Department of Physics, University of Wisconsin–La Crosse, La Crosse, Wisconsin 54601, USA Department of Physics, University of Illinois, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, Illinois 61801, USA Department of Educational Psychology, University of Illinois, Champaign, Illinois 61820, USA Department of Psychology, University of Illinois, Champaign, Illinois 61820, USA (Received 30 April 2015; published 1 September 2015)",
"title": ""
},
{
"docid": "39e6ddd04b7fab23dbbeb18f2696536e",
"text": "Moving IoT components from the cloud onto edge hosts helps in reducing overall network traffic and thus minimizes latency. However, provisioning IoT services on the IoT edge devices presents new challenges regarding system design and maintenance. One possible approach is the use of software-defined IoT components in the form of virtual IoT resources. This, in turn, allows exposing the thing/device layer and the core IoT service layer as collections of micro services that can be distributed to a broad range of hosts.\n This paper presents the idea and evaluation of using virtual resources in combination with a permission-based blockchain for provisioning IoT services on edge hosts.",
"title": ""
},
{
"docid": "55a798fd7ec96239251fce2a340ba1ba",
"text": "At EUROCRYPT’88, we introduced an interactive zero-howledge protocol ( G ~ O U and Quisquater [13]) fitted to the authentication of tamper-resistant devices (e.g. smart cads , Guillou and Ugon [14]). Each security device stores its secret authentication number, an RSA-like signature computed by an authority from the device identity. Any transaction between a tamperresistant security device and a verifier is limited to a unique interaction: the device sends its identity and a random test number; then the verifier teUs a random large question; and finally the device answers by a witness number. The transaction is successful when the test number is reconstructed from the witness number, the question and the identity according to numbers published by the authority and rules of redundancy possibly standardized. This protocol allows a cooperation between users in such a way that a group of cooperative users looks like a new entity, having a shadowed identity the product of the individual shadowed identities, while each member reveals nothing about its secret. In another scenario, the secret is partitioned between distinkt devices sharing the same identity. A group of cooperative users looks like a unique user having a larger public exponent which is the greater common multiple of each individual exponent. In this paper, additional features are introduced in order to provide: firstly, a mutual interactive authentication of both communicating entities and previously exchanged messages, and, secondly, a digital signature of messages, with a non-interactive zero-knowledge protocol. The problem of multiple signature is solved here in a very smart way due to the possibilities of cooperation between users. The only secret key is the factors of the composite number chosen by the authority delivering one authentication number to each smart card. This key is not known by the user. At the user level, such a scheme may be considered as a keyless identity-based integrity scheme. This integrity has a new and important property: it cannot be misused, i.e. derived into a confidentiality scheme.",
"title": ""
}
] |
scidocsrr
|
1f4cf2423f05ef835580dd2811cf2555
|
Putting Your Best Face Forward : The Accuracy of Online Dating Photographs
|
[
{
"docid": "34fb2f437c5135297ec2ad52556440e9",
"text": "This study investigates self-disclosure in the novel context of online dating relationships. Using a national random sample of Match.com members (N = 349), the authors tested a model of relational goals, self-disclosure, and perceived success in online dating. The authors’findings provide support for social penetration theory and the social information processing and hyperpersonal perspectives as well as highlight the positive effect of anticipated future face-to-face interaction on online self-disclosure. The authors find that perceived online dating success is predicted by four dimensions of self-disclosure (honesty, amount, intent, and valence), although honesty has a negative effect. Furthermore, online dating experience is a strong predictor of perceived success in online dating. Additionally, the authors identify predictors of strategic success versus self-presentation success. This research extends existing theory on computer-mediated communication, selfdisclosure, and relational success to the increasingly important arena of mixed-mode relationships, in which participants move from mediated to face-to-face communication.",
"title": ""
},
{
"docid": "47aec03cf18dc3abd4d46ee017f25a16",
"text": "Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.",
"title": ""
}
] |
[
{
"docid": "401bad1d0373acb71a855a28d2aeea38",
"text": "mechanobullous epidermolysis bullosa acquisita to combined treatment with immunoadsorption and rituximab (anti-CD20 monoclonal antibodies). Arch Dermatol 2007; 143: 192–198. 6 Sadler E, Schafleitner B, Lanschuetzer C et al. Treatment-resistant classical epidermolysis bullosa acquisita responding to rituximab. Br J Dermatol 2007; 157: 417–419. 7 Crichlow SM, Mortimer NJ, Harman KE. A successful therapeutic trial of rituximab in the treatment of a patient with recalcitrant, high-titre epidermolysis bullosa acquisita. Br J Dermatol 2007; 156: 194–196. 8 Saha M, Cutler T, Bhogal B, Black MM, Groves RW. Refractory epidermolysis bullosa acquisita: successful treatment with rituximab. Clin Exp Dermatol 2009; 34: e979–e980. 9 Kubisch I, Diessenbacher P, Schmidt E, Gollnick H, Leverkus M. Premonitory epidermolysis bullosa acquisita mimicking eyelid dermatitis: successful treatment with rituximab and protein A immunoapheresis. Am J Clin Dermatol 2010; 11: 289–293. 10 Meissner C, Hoefeld-Fegeler M, Vetter R et al. Severe acral contractures and nail loss in a patient with mechano-bullous epidermolysis bullosa acquisita. Eur J Dermatol 2010; 20: 543–544.",
"title": ""
},
{
"docid": "91c0658dbd6f078fdf53e9ae276a6f73",
"text": "Given a photo collection of \"unconstrained\" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, following by using a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.",
"title": ""
},
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "75e9253b7c6333db1aa3cef2ab364f99",
"text": "We used single-pulse transcranial magnetic stimulation of the left primary hand motor cortex and motor evoked potentials of the contralateral right abductor pollicis brevis to probe motor cortex excitability during a standard mental rotation task. Based on previous findings we tested the following hypotheses. (i) Is the hand motor cortex activated more strongly during mental rotation than during reading aloud or reading silently? The latter tasks have been shown to increase motor cortex excitability substantially in recent studies. (ii) Is the recruitment of the motor cortex for mental rotation specific for the judgement of rotated but not for nonrotated Shepard & Metzler figures? Surprisingly, motor cortex activation was higher during mental rotation than during verbal tasks. Moreover, we found strong motor cortex excitability during the mental rotation task but significantly weaker excitability during judgements of nonrotated figures. Hence, this study shows that the primary hand motor area is generally involved in mental rotation processes. These findings are discussed in the context of current theories of mental rotation, and a likely mechanism for the global excitability increase in the primary motor cortex during mental rotation is proposed.",
"title": ""
},
{
"docid": "90b6b0ff4b60e109fc111b26aab4a25c",
"text": "Due to its damage to Internet security, malware and its detection has caught the attention of both anti-malware industry and researchers for decades. Many research efforts have been conducted on developing intelligent malware detection systems. In these systems, resting on the analysis of file contents extracted from the file samples, like Application Programming Interface (API) calls, instruction sequences, and binary strings, data mining methods such as Naive Bayes and Support Vector Machines have been used for malware detection. However, driven by the economic benefits, both diversity and sophistication of malware have significantly increased in recent years. Therefore, anti-malware industry calls for much more novel methods which are capable to protect the users against new threats, and more difficult to evade. In this paper, other than based on file contents extracted from the file samples, we study how file relation graphs can be used for malware detection and propose a novel Belief Propagation algorithm based on the constructed graphs to detect newly unknown malware. A comprehensive experimental study on a real and large data collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other alternate data mining based detection techniques.",
"title": ""
},
{
"docid": "703696ca3af2a485ac34f88494210007",
"text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.",
"title": ""
},
{
"docid": "2bb535ff25532ccdbf85a301a872c8bd",
"text": "Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a representation of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. The paper serves as a tutorial for the non-expert reader. It is also a position paper: by looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: do robots need SLAM? Is SLAM solved?",
"title": ""
},
{
"docid": "3c8ac7bd31d133b4d43c0d3a0f08e842",
"text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.",
"title": ""
},
{
"docid": "40df4f2d0537bca3cf92dc3005d2b9f3",
"text": "The pages of this Sample Chapter may have slight variations in final published form. H istorically, we talk of first-force psychodynamic, second-force cognitive-behavioral, and third-force existential-humanistic counseling and therapy theories. Counseling and psychotherapy really began with Freud and psychoanalysis. James Watson and, later, B. F. Skinner challenged Freud's emphasis on the unconscious and focused on observable behavior. Carl Rogers, with his person-centered counseling, revolutionized the helping professions by focusing on the importance of nurturing a caring therapist-client relationship in the helping process. All three approaches are still alive and well in the fields of counseling and psychology, as discussed in Chapters 5 through 10. As you reflect on the new knowledge and skills you exercised by reading the preceding chapters and completing the competency-building activities in those chapters, hopefully you part three 319 will see that you have gained a more sophisticated foundational understanding of the three traditional theoretical forces that have shaped the fields of counseling and therapy over the past one hundred years. Efforts in this book have been intended to bring your attention to both the strengths and limitations of psychodynamic, cognitive-behavioral, and existential-humanistic perspectives. With these perspectives in mind, the following chapters examine the fourth major theoretical force that has emerged in the mental health professions over the past 40 years: the multicultural-feminist-social justice counseling world-view. The perspectives of the fourth force challenge you to learn new competencies you will need to acquire to work effectively, respectfully, and ethically in a culturally diverse 21st-century society. Part Three begins by discussing the rise of the feminist counseling and therapy perspective (Chapter 11) and multicultural counseling and therapy (MCT) theories (Chapter 12). To assist you in synthesizing much of the information contained in all of the preceding chapters, Chapter 13 presents a comprehensive and integrative helping theory referred to as developmental counseling and therapy (DCT). Chapter 14 offers a comprehensive examination of family counseling and therapy theories to further extend your knowledge of ways that mental health practitioners can assist entire families in realizing new and untapped dimensions of their collective well-being. Finally Chapter 15 provides guidelines to help you develop your own approach to counseling and therapy that complements a growing awareness of your own values, biases, preferences, and relational compe-tencies as a mental health professional. Throughout, competency-building activities offer you opportunities to continue to exercise new skills associated with the different theories discussed in Part Three. …",
"title": ""
},
{
"docid": "21f45ec969ba3852d731a2e2119fc86e",
"text": "When a large number of people with heterogeneous knowledge and skills run a project together, it is important to use a sensible engineering process. This especially holds for a project building an intelligent autonomously driving car to participate in the 2007 DARPA Urban Challenge. In this article, we present essential elements of a software and systems engineering process for the development of artificial intelligence capable of driving autonomously in complex urban situations. The process includes agile concepts, like test first approach, continuous integration of every software module and a reliable release and configuration management assisted by software tools in integrated development environments. However, the most important ingredients for an efficient and stringent development are the ability to efficiently test the behavior of the developed system in a flexible and modular simulator for urban situations.",
"title": ""
},
{
"docid": "3df76261ff7981794e9c3d1332efe023",
"text": "The complete sequence of the 16,569-base pair human mitochondrial genome is presented. The genes for the 12S and 16S rRNAs, 22 tRNAs, cytochrome c oxidase subunits I, II and III, ATPase subunit 6, cytochrome b and eight other predicted protein coding genes have been located. The sequence shows extreme economy in that the genes have none or only a few noncoding bases between them, and in many cases the termination codons are not coded in the DNA but are created post-transcriptionally by polyadenylation of the mRNAs.",
"title": ""
},
{
"docid": "a412c41fe943120a513ad9b6fb70cb8b",
"text": "Blockchains based on proofs of work (PoW) currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. The security of PoWbased blockchains requires that new transactions are verified, making a proper replication of the blockchain data in the system essential. While existing PoW mining protocols offer considerable incentives for workers to generate blocks, workers do not have any incentives to store the blockchain. This resulted in a sharp decrease in the number of full nodes that store the full blockchain, e.g., in Bitcoin, Litecoin, etc. However, the smaller is the number of replicas or nodes storing the replicas, the higher is the vulnerability of the system against compromises and DoS-attacks. In this paper, we address this problem and propose a novel solution, EWoK (Entangled proofs of WOrk and Knowledge). EWoK regulates in a decentralized-manner the minimum number of replicas that should be stored by tying replication to the only directly-incentivized process in PoW-blockchains—which is PoW itself. EWoK only incurs small modifications to existing PoW protocols, and is fully compliant with the specifications of existing mining hardware—which is likely to increase its adoption by the existing PoW ecosystem. EWoK plugs an efficient in-memory hash-based proof of knowledge and couples them with the standard PoW mechanism. We implemented EWoK and integrated it within commonly used mining protocols, such as GetBlockTemplate and Stratum mining; our results show that EWoK can be easily integrated within existing mining pool protocols and does not impair the mining efficiency.",
"title": ""
},
{
"docid": "f415b38e6d43c8ed81ce97fd924def1b",
"text": "Collaborative filtering is one of the most successful and widely used methods of automated product recommendation in online stores. The most critical component of the method is the mechanism of finding similarities among users using product ratings data so that products can be recommended based on the similarities. The calculation of similarities has relied on traditional distance and vector similarity measures such as Pearson’s correlation and cosine which, however, have been seldom questioned in terms of their effectiveness in the recommendation problem domain. This paper presents a new heuristic similarity measure that focuses on improving recommendation performance under cold-start conditions where only a small number of ratings are available for similarity calculation for each user. Experiments using three different datasets show the superiority of the measure in new user cold-start conditions. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a33f962c4a6ea61d3400ca9feea50bd7",
"text": "Now, we come to offer you the right catalogues of book to open. artificial intelligence techniques for rational decision making is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
},
{
"docid": "b41ee70f93fe7c52f4fc74727f43272e",
"text": "It is no secret that pornographic material is now a one-clickaway from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require vast experience to design the classifier including one or more of the popular computer vision feature descriptors. We propose to build a classifier based on one of the recently flourishing deep learning techniques. Convolutional neural networks contain many layers for both automatic features extraction and classification. The benefit is an easier system to build (no need for hand-crafting features and classifiers). Additionally, our experiments show that it is even more accurate than the state of the art methods on the most recent benchmark dataset.",
"title": ""
},
{
"docid": "ea86e4d0581dc3be3f3671cf25b064ae",
"text": "Transfer learning allows leveraging the knowledge of source domains, available a priori, to help training a classifier for a target domain, where the available data is scarce. The effectiveness of the transfer is affected by the relationship between source and target. Rather than improving the learning, brute force leveraging of a source poorly related to the target may decrease the classifier performance. One strategy to reduce this negative transfer is to import knowledge from multiple sources to increase the chance of finding one source closely related to the target. This work extends the boosting framework for transferring knowledge from multiple sources. Two new algorithms, MultiSource-TrAdaBoost, and TaskTrAdaBoost, are introduced, analyzed, and applied for object category recognition and specific object detection. The experiments demonstrate their improved performance by greatly reducing the negative transfer as the number of sources increases. TaskTrAdaBoost is a fast algorithm enabling rapid retraining over new targets.",
"title": ""
},
{
"docid": "eb34d154a1547db6e0a9612abc0adcf3",
"text": "Soft robots are challenging to model due to their nonlinear behavior. However, their soft bodies make it possible to safely observe their behavior under random control inputs, making them amenable to large-scale data collection and system identification. This paper implements and evaluates a system identification method based on Koopman operator theory. This theory offers a way to represent a nonlinear system as a linear system in the infinite-dimensional space of real-valued functions called observables, enabling models of nonlinear systems to be constructed via linear regression of observed data. The approach does not suffer from some of the shortcomings of other nonlinear system identification methods, which typically require the manual tuning of training parameters and have limited convergence guarantees. A dynamic model of a pneumatic soft robot arm is constructed via this method, and used to predict the behavior of the real system. The total normalized-root-mean-square error (NRMSE) of its predictions over twelve validation trials is lower than that of several other identified models including a neural network, NLARX, nonlinear Hammerstein-Wiener, and linear state space model.",
"title": ""
},
{
"docid": "9634245d2a71804083fa90a6555d13a8",
"text": "In far-field speech recognition systems, training acoustic models with alignments generated from parallel close-talk microphone data provides significant improvements. However it is not practical to assume the availability of large corpora of parallel close-talk microphone data, for training. In this paper we explore methods to reduce the performance gap between far-field ASR systems trained with alignments from distant microphone data and those trained with alignments from parallel close-talk microphone data. These methods include the use of a lattice-free sequence objective function which tolerates minor mis-alignment errors; and the use of data selection techniques to discard badly aligned data. We present results on single distant microphone and multiple distant microphone scenarios of the AMI LVCSR task. We identify prominent causes of alignment errors in AMI data.",
"title": ""
},
{
"docid": "05a35ab061a0d5ce18a3ceea8dde78f6",
"text": "A single feed grid array antenna for 24 GHz Doppler sensor is proposed in this paper. It is designed on 0.787 mm thick substrate made of Rogers Duroid 5880 (ε<sub>r</sub>= 2.2 and tan δ= 0.0009) with 0.017 mm copper claddings. Dimension of the antenna is 60 mm × 60 mm × 0.787 mm. This antenna exhibits 2.08% impedance bandwidth, 6.25% radiation bandwidth and 20.6 dBi gain at 24.2 GHz. The beamwidth is 14°and 16°in yoz and xoz planes, respectively.",
"title": ""
},
{
"docid": "ff18792f352429df42358d6b435ae813",
"text": "Recently, micro-expression recognition has seen an increase of interest from psychological and computer vision communities. As microexpressions are generated involuntarily on a person’s face, and are usually a manifestation of repressed feelings of the person. Most existing works pay attention to either the detection or spotting of micro-expression frames or the categorization of type of micro-expression present in a short video shot. In this paper, we introduced a novel automatic approach to micro-expression recognition from long video that combines both spotting and recognition mechanisms. To achieve this, the apex frame, which provides the instant when the highest intensity of facial movement occurs, is first spotted from the entire video sequence. An automatic eye masking technique is also presented to improve the robustness of apex frame spotting. With the single apex, we describe the spotted micro-expression instant using a state-of-the-art feature extractor before proceeding to classification. This is the first known work that recognizes micro-expressions from a long video sequence without the knowledge of onset and offset frames, which are typically used to determine a cropped sub-sequence containing the micro-expression. We evaluated the spotting and recognition tasks on four spontaneous micro-expression databases comprising only of raw long videos – CASME II-RAW, SMICE-HS, SMIC-E-VIS and SMIC-E-NIR. We obtained compelling results that show the effectiveness of the proposed approach, which outperform most methods that rely on human annotated sub-sequences.",
"title": ""
}
] |
scidocsrr
|
54090374cd70fa395b7b2a5607d937f3
|
A big data enabled load-balancing control for smart manufacturing of Industry 4.0
|
[
{
"docid": "e740e5ff2989ce414836c422c45570a9",
"text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.",
"title": ""
},
{
"docid": "dbab8fdd07b1180ba425badbd1616bb2",
"text": "The proliferation of cyber-physical systems introduces the fourth stage of industrialization, commonly known as Industry 4.0. The vertical integration of various components inside a factory to implement a flexible and reconfigurable manufacturing system, i.e., smart factory, is one of the key features of Industry 4.0. In this paper, we present a smart factory framework that incorporates industrial network, cloud, and supervisory control terminals with smart shop-floor objects such as machines, conveyers, and products. Then, we provide a classification of the smart objects into various types of agents and define a coordinator in the cloud. The autonomous decision and distributed cooperation between agents lead to high flexibility. Moreover, this kind of self-organized system leverages the feedback and coordination by the central coordinator in order to achieve high efficiency. Thus, the smart factory is characterized by a self-organized multi-agent system assisted with big data based feedback and coordination. Based on this model, we propose an intelligent negotiation mechanism for agents to cooperate with each other. Furthermore, the study illustrates that complementary strategies can be designed to prevent deadlocks by improving the agents’ decision making and the coordinator’s behavior. The simulation results assess the effectiveness of the proposed negotiation mechanism and deadlock prevention strategies. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "4b988535edefeb3ff7df89bcb900dd1c",
"text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. (5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the",
"title": ""
},
{
"docid": "bb201a87b4f81c9c4d2c8889d4bd3a6a",
"text": "Computers have difficulty learning how to play Texas Hold’em Poker. The game contains a high degree of stochasticity, hidden information, and opponents that are deliberately trying to mis-represent their current state. Poker has a much larger game space than classic parlour games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good results in large state spaces, and neural networks have been shown to be able to find solutions to non-linear search problems. In this paper, we present several algorithms for teaching agents how to play No-Limit Texas Hold’em Poker using a hybrid method known as evolving neural networks. Furthermore, we adapt heuristics such as halls of fame and co-evolution to be able to handle populations of Poker agents, which can sometimes contain several hundred opponents, instead of a single opponent. Our agents were evaluated against several benchmark agents. Experimental results show the overall best performance was obtained by an agent evolved from a single population (i.e., with no co-evolution) using a large hall of fame. These results demonstrate the effectiveness of our algorithms in creating competitive No-Limit Texas Hold’em Poker agents.",
"title": ""
},
{
"docid": "a99e30d406d5053d8345b36791899238",
"text": "Advances in sequencing technologies and increased access to sequencing services have led to renewed interest in sequence and genome assembly. Concurrently, new applications for sequencing have emerged, including gene expression analysis, discovery of genomic variants and metagenomics, and each of these has different needs and challenges in terms of assembly. We survey the theoretical foundations that underlie modern assembly and highlight the options and practical trade-offs that need to be considered, focusing on how individual features address the needs of specific applications. We also review key software and the interplay between experimental design and efficacy of assembly.",
"title": ""
},
{
"docid": "a30c2a8d3db81ae121e62af5994d3128",
"text": "Recent advances in the fields of robotics, cyborg development, moral psychology, trust, multi agent-based systems and socionics have raised the need for a better understanding of ethics, moral reasoning, judgment and decision-making within the system of man and machines. Here we seek to understand key research questions concerning the interplay of ethical trust at the individual level and the social moral norms at the collective end. We review salient works in the fields of trust and machine ethics research, underscore the importance and the need for a deeper understanding of ethical trust at the individual level and the development of collective social moral norms. Drawing upon the recent findings from neural sciences on mirror-neuron system (MNS) and social cognition, we present a bio-inspired Computational Model of Ethical Trust (CMET) to allow investigations of the interplay of ethical trust and social moral norms.",
"title": ""
},
{
"docid": "651a9d2c31748bda7de1b82bc5095f72",
"text": "In this paper, a fractional order PID controller is investigated for a position servomechanism control system considering actuator saturation and the shaft torsional flexibility. For actually implementation, we introduced a modified approximation method to realize the designed fractional order PID controller. Numerous simulation comparisons presented in this paper indicate that, the fractional order PID controller, if properly designed and implemented, will outperform the conventional integer order PID controller",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
{
"docid": "7d0bbf3a83881a97b0217b427b596b76",
"text": "This paper proposes a novel tracker which is controlled by sequentially pursuing actions learned by deep reinforcement learning. In contrast to the existing trackers using deep networks, the proposed tracker is designed to achieve a light computation as well as satisfactory tracking accuracy in both location and scale. The deep network to control actions is pre-trained using various training sequences and fine-tuned during tracking for online adaptation to target and background changes. The pre-training is done by utilizing deep reinforcement learning as well as supervised learning. The use of reinforcement learning enables even partially labeled data to be successfully utilized for semi-supervised learning. Through evaluation of the OTB dataset, the proposed tracker is validated to achieve a competitive performance that is three times faster than state-of-the-art, deep network–based trackers. The fast version of the proposed method, which operates in real-time on GPU, outperforms the state-of-the-art real-time trackers.",
"title": ""
},
{
"docid": "d0a6592c487f9963fb1ce9b78691257c",
"text": "In today's world, mobile phone penetration has reached a saturation point. As a result, subscriber churn has become an important issue for mobile operators as subscribers switch operators for a variety of reasons. Mobile operators typically employ churn prediction algorithms based on service usage metrics, network performance indicators, and traditional demographic information. A newly emerging technique is the use of social network analysis (SNA) to identify potential churners. Intuitively, a subscriber who is churning will have an impact on the churn propensity of his social circle. Call detail records are useful to understand the social connectivity of subscribers through call graphs but do not directly provide the strength of their relationship or have enough information to determine the diffusion of churn influence. In this paper, we present a way to address these challenges by developing a new churn prediction algorithm based on a social network analysis of the call graph. We provide a formulation that quantifies the strength of social ties between users based on multiple attributes and then apply an influence diffusion model over the call graph to determine the net accumulated influence from churners. We combine this influence and other social factors with more traditional metrics and apply machine-learning methods to compute the propensity to churn for individual users. We evaluate the performance of our algorithm over a real data set and quantify the benefit of using SNA in churn prediction.",
"title": ""
},
{
"docid": "fb83fca1b1ed1fca15542900bdb3748d",
"text": "Learning disease severity scores automatically from collected measurements may aid in the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both quantity and diversity of data measured and stored, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the direction of the problemwhere the dimensionality of measured variables is large. Learning the severity score in such cases brings the issue of which of measured features are relevant. We have proposed a novel approach by combining desirable properties of existing formulations, which compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function.The proposed formulation has a nonsmooth penalty that induces sparsity.This problem is solved by addressing a dual formulationwhich is smooth and allows an efficient optimization.The proposed approachmight be used as an effective and reliable tool for both scoring function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to influenza symptoms’ severity, which are enriched in immune-related processes.",
"title": ""
},
{
"docid": "7292ceb6718d0892a154d294f6434415",
"text": "This article illustrates the application of a nonlinear system identification technique to the problem of STLF. Five NARX models are estimated using fixed-size LS-SVM, and two of the models are later modified into AR-NARX structures following the exploration of the residuals. The forecasting performance, assessed for different load series, is satisfactory. The MSE levels on the test data are below 3% in most cases. The models estimated with fixed-size LS-SVM give better results than a linear model estimated with the same variables and also better than a standard LS-SVM in dual space estimated using only the last 1000 data points. Furthermore, the good performance of the fixed-size LS-SVM is obtained based on a subset of M = 1000 initial support vectors, representing a small fraction of the available sample. Further research on a more dedicated definition of the initial input variables (for example, incorporation of external variables to reflect industrial activity, use of explicit seasonal information) might lead to further improvements and the extension toward other types of load series.",
"title": ""
},
{
"docid": "3ce69e8f46fac6029c506445b4e7634e",
"text": "Resumen. En este art́ıculo se presenta el desarrollo de un sistema de reconocimiento de emociones basado en la voz. Se consideraron las siguientes emociones básicas: Enojo, Felicidad, Neutro y Tristeza. Para este propósito una base de datos de voz emocional fue creada con ocho usuarios Mexicanos con 640 frases (8 usuarios × 4 emociones × 20 frases por emoción). Los Modelos Ocultos de Markov (Hidden Markov Models, HMMs) fueron usados para construir el sistema de reconocimiento. Basado en el concepto de modelado acústico de vocales espećıficas emotivas un total de 20 fonemas de vocales (5 vocales × 4 emociones) y 22 fonemas de consonantes fueron considerados para el entrenamiento de los HMMs. Un Algoritmo Genético (Genetic Algorithm, GA) fue integrado dentro del proceso de reconocimiento para encontrar la arquitectura más adecuada para el HMM para cada vocal espećıfica emotiva. Una tasa de reconocimiento total aproximada del 90.00 % fue conseguida con el reconocedor de voz construido con los HMMs optimizados.",
"title": ""
},
{
"docid": "25f67b19daa65a8c7ade4cabe1153c60",
"text": "This paper deals with feedback controller synthesis for Timed Event Graphs in dioids. We discuss here the existence and the computation of a controller which leads to a closed-loop system whose behavior is as close as possible to the one of a given reference model and which delays as much as possible the input of tokens inside the (controlled) system. The synthesis presented here is mainly based on residuation theory results and some Kleene star properties.",
"title": ""
},
{
"docid": "bcee490d287e146ff1c4fe7f1dee2cbf",
"text": "Biometrics is a growing technology, which has been widely used in forensics, secured access and prison security. A biometric system is fundamentally a pattern recognition system that recognizes a person by determining the authentication by using his different biological features i.e. Fingerprint, retina-scan, iris scan, hand geometry, and face recognition are leading physiological biometrics and behavioral characteristic are Voice recognition, keystroke-scan, and signature-scan. In this paper different biometrics techniques such as Iris scan, retina scan and face recognition techniques are discussed. Keyword: Biometric, Biometric techniques, Eigenface, Face recognition.",
"title": ""
},
{
"docid": "f463ee2dd3a9243ed7536d88d8c2c568",
"text": "A new silicon controlled rectifier-based power-rail electrostatic discharge (ESD) clamp circuit was proposed with a novel trigger circuit that has very low leakage current in a small layout area for implementation. This circuit was successfully verified in a 40-nm CMOS process by using only low-voltage devices. The novel trigger circuit uses a diode-string based level-sensing ESD detection circuit, but not using MOS capacitor, which has very large leakage current. Moreover, the leakage current on the ESD detection circuit is further reduced, adding a diode in series with the trigger transistor. By combining these two techniques, the total silicon area of the power-rail ESD clamp circuit can be reduced three times, whereas the leakage current is three orders of magnitude smaller than that of the traditional design.",
"title": ""
},
{
"docid": "d1868eb5bb8d2995e7035058bee58d1e",
"text": "All power systems have some inherent level of flexibility— designed to balance supply and demand at all times. Variability and uncertainty are not new to power systems because loads change over time and in sometimes unpredictable ways, and conventional resources fail unexpectedly. Variable renewable energy supply, however, can make this balance harder to achieve. Both wind and solar generation output vary significantly over the course of hours to days, sometimes in a predictable fashion, but often imperfectly forecasted.",
"title": ""
},
{
"docid": "44f41d363390f6f079f2e67067ffa36d",
"text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢",
"title": ""
},
{
"docid": "6a3dc4c6bcf2a4133532c37dfa685f3b",
"text": "Feature selection can be de ned as a problem of nding a minimum set of M relevant at tributes that describes the dataset as well as the original N attributes do where M N After examining the problems with both the exhaustive and the heuristic approach to fea ture selection this paper proposes a proba bilistic approach The theoretic analysis and the experimental study show that the pro posed approach is simple to implement and guaranteed to nd the optimal if resources permit It is also fast in obtaining results and e ective in selecting features that im prove the performance of a learning algo rithm An on site application involving huge datasets has been conducted independently It proves the e ectiveness and scalability of the proposed algorithm Discussed also are various aspects and applications of this fea ture selection algorithm",
"title": ""
},
{
"docid": "a53f26ef068d11ea21b9ba8609db6ddf",
"text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fbcbf7d6a53299708ecf6a780cf0834c",
"text": "We present an approach for weakly supervised learning of human actions from video transcriptions. Our system is based on the idea that, given a sequence of input data and a transcript, i.e. a list of the order the actions occur in the video, it is possible to infer the actions within the video stream and to learn the related action models without the need for any frame-based annotation. Starting from the transcript information at hand, we split the given data sequences uniformly based on the number of expected actions. We then learn action models for each class by maximizing the probability that the training video sequences are generated by the action models given the sequence order as defined by the transcripts. The learned model can be used to temporally segment an unseen video with or without transcript. Additionally, the inferred segments can be used as a starting point to train high-level fully supervised models. We evaluate our approach on four distinct activity datasets, namely Hollywood Extended, MPII Cooking, Breakfast and CRIM13. It shows that the proposed system is able to align the scripted actions with the video data, that the learned models localize and classify actions in the datasets, and that they outperform any current state-of-the-art approach for aligning transcripts with video data.",
"title": ""
},
{
"docid": "22572c36ce1b816ee30ef422cb290dea",
"text": "Visual context is important in object recognition and it is still an open problem in computer vision. Along with the advent of deep convolutional neural networks (CNN), using contextual information with such systems starts to receive attention in the literature. At the same time, aerial imagery is gaining momentum. While advances in deep learning make good progress in aerial image analysis, this problem still poses many great challenges. Aerial images are often taken under poor lighting conditions and contain low resolution objects, many times occluded by trees or taller buildings. In this domain, in particular, visual context could be of great help, but there are still very few papers that consider context in aerial image understanding. Here we introduce context as a complementary way of recognizing objects. We propose a dual-stream deep neural network model that processes information along two independent pathways, one for local and another for global visual reasoning. The two are later combined in the final layers of processing. Our model learns to combine local object appearance as well as information from the larger scene at the same time and in a complementary way, such that together they form a powerful classifier. We test our dual-stream network on the task of segmentation of buildings and roads in aerial images and obtain state-of-the-art results on the Massachusetts Buildings Dataset. We also introduce two new datasets, for buildings and road segmentation, respectively, and study the relative importance of local appearance vs. the larger scene, as well as their performance in combination. While our local-global model could also be useful in general recognition tasks, we clearly demonstrate the effectiveness of visual context in conjunction with deep nets for aerial image",
"title": ""
}
] |
scidocsrr
|
ea9b364a78fc2387e1dad358f0192471
|
Advances in Clickstream Data Analysis in Marketing
|
[
{
"docid": "6db749b222a44764cf07bde527c230a3",
"text": "There have been many claims that the Internet represents a new “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products — books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments — presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we simply compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.",
"title": ""
},
{
"docid": "c02d207ed8606165e078de53a03bf608",
"text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: [email protected]), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: [email protected]), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*",
"title": ""
}
] |
[
{
"docid": "87be04b184d27c006bb06dd9906a9422",
"text": "With the significant growth of the markets for consumer electronics and various embedded systems, flash memory is now an economic solution for storage systems design. Because index structures require intensively fine-grained updates/modifications, block-oriented access over flash memory could introduce a significant number of redundant writes. This might not only severely degrade the overall performance, but also damage the reliability of flash memory. In this paper, we propose a very different approach, which can efficiently handle fine-grained updates/modifications caused by B-tree index access over flash memory. The implementation is done directly over the flash translation layer (FTL); hence, no modifications to existing application systems are needed. We demonstrate that when index structures are adopted over flash memory, the proposed methodology can significantly improve the system performance and, at the same time, reduce both the overhead of flash-memory management and the energy dissipation. The average response time of record insertions and deletions was also significantly reduced.",
"title": ""
},
{
"docid": "742dbd75ad995d5c51c4cbce0cc7f8cc",
"text": "Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used twofingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.",
"title": ""
},
{
"docid": "c02697087e8efd4c1ba9f9a26fa1115b",
"text": "OBJECTIVE\nTo estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.\n\n\nDESIGN\nEstimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.\n\n\nSETTING\nCommunity, nonfederal, short-term hospitals in the United States.\n\n\nPARTICIPANTS\nPersons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.\n\n\nRESULTS\nIn the year 2005, 1.6 million persons were living with the loss of a limb. Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.\n\n\nCONCLUSIONS\nOne in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050.",
"title": ""
},
{
"docid": "74ca823c5dfb41e3566a29549c8137ab",
"text": "\"Experimental realization of quantum algorithm for solving linear systems of equations\" (2014). Many important problems in science and engineering can be reduced to the problem of solving linear equations. The quantum algorithm discovered recently indicates that one can solve an N-dimensional linear equation in O(log N) time, which provides an exponential speedup over the classical counterpart. Here we report an experimental demonstration of the quantum algorithm when the scale of the linear equation is 2 × 2 using a nuclear magnetic resonance quantum information processor. For all sets of experiments, the fidelities of the final four-qubit states are all above 96%. This experiment gives the possibility of solving a series of practical problems related to linear systems of equations and can serve as the basis to realize many potential quantum algorithms.",
"title": ""
},
{
"docid": "c3b07d5c9a88c1f9430615d5e78675b6",
"text": "Two new algorithms and associated neuron-like network architectures are proposed for solving the eigenvalue problem in real-time. The first approach is based on the solution of a set of nonlinear algebraic equations by employing optimization techniques. The second approach employs a multilayer neural network with linear artificial neurons and it exploits the continuous-time error back-propagation learning algorithm. The second approach enables us to find all the eigenvalues and the associated eigenvectors simultaneously by training the network to match some desired patterns, while the first approach is suitable to find during one run only one particular eigenvalue (e.g. an extreme eigenvalue) and the corresponding eigenvector in realtime. In order to find all eigenpairs the optimization process must be repeated in this case many times for different initial conditions. The performance and convergence behaviour of the proposed neural network architectures are investigated by extensive computer simulations.",
"title": ""
},
{
"docid": "2b09ae15fe7756df3da71cfc948e9506",
"text": "Repair of the injured spinal cord by regeneration therapy remains an elusive goal. In contrast, progress in medical care and rehabilitation has resulted in improved health and function of persons with spinal cord injury (SCI). In the absence of a cure, raising the level of achievable function in mobility and self-care will first and foremost depend on creative use of the rapidly advancing technology that has been so widely applied in our society. Building on achievements in microelectronics, microprocessing and neuroscience, rehabilitation medicine scientists have succeeded in developing functional electrical stimulation (FES) systems that enable certain individuals with SCI to use their paralyzed hands, arms, trunk, legs and diaphragm for functional purposes and gain a degree of control over bladder and bowel evacuation. This review presents an overview of the progress made, describes the current challenges and suggests ways to improve further FES systems and make these more widely available.",
"title": ""
},
{
"docid": "79e2e4af34e8a2b89d9439ff83b9fd5a",
"text": "PROBLEM\nThe current nursing workforce is composed of multigenerational staff members creating challenges and at times conflict for managers.\n\n\nMETHODS\nGenerational cohorts are defined and two multigenerational scenarios are presented and discussed using the ACORN imperatives and Hahn's Five Managerial Strategies for effectively managing a multigenerational staff.\n\n\nFINDINGS\nCommunication and respect are the underlying key strategies to understanding and bridging the generational gap in the workplace.\n\n\nCONCLUSION\nEmbracing and respecting generational differences can bring strength and cohesiveness to nursing teams on the managerial or unit level.",
"title": ""
},
{
"docid": "6ad90319d07abce021eda6f3a1d3886e",
"text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "78744205cf17be3ee5a61d12e6a44180",
"text": "Modeling of photovoltaic (PV) systems is essential for the designers of solar generation plants to do a yield analysis that accurately predicts the expected power output under changing environmental conditions. This paper presents a comparative analysis of PV module modeling methods based on the single-diode model with series and shunt resistances. Parameter estimation techniques within a modeling method are used to estimate the five unknown parameters in the single diode model. Two sets of estimated parameters were used to plot the I-V characteristics of two PV modules, i.e., SQ80 and KC200GT, for the different sets of modeling equations, which are classified into models 1 to 5 in this study. Each model is based on the different combinations of diode saturation current and photogenerated current plotted under varying irradiance and temperature. Modeling was done using MATLAB/Simulink software, and the results from each model were first verified for correctness against the results produced by their respective authors. Then, a comparison was made among the different models (models 1 to 5) with respect to experimentally measured and datasheet I-V curves. The resultant plots were used to draw conclusions on which combination of parameter estimation technique and modeling method best emulates the manufacturer specified characteristics.",
"title": ""
},
{
"docid": "b266069e91c24120b1732c5576087a90",
"text": "Reactions of organic molecules on Montmorillonite c lay mineral have been investigated from various asp ects. These include catalytic reactions for organic synthesis, chemical evolution, the mechanism of humus-formatio n, and environmental problems. Catalysis by clay minerals has attracted much interest recently, and many repo rts including the catalysis by synthetic or modified cl ays have been published. In this review, we will li mit the review to organic reactions using Montmorillonite clay as cat alyst.",
"title": ""
},
{
"docid": "b9652cf6647d9c7c1f91a345021731db",
"text": "Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy make Web development projects be often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects. Objective: This paper presents a proposal for estimating, planning and managing Web projects, by combining some existing Agile techniques with Web Engineering principles, presenting them as an unified framework which uses the business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project. Conclusion: It is concluded that the framework can be useful in order to better manage Web-based projects, through a continuous value-based estimation and management process.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "85719d4bc86c7c8bbe5799a716d6533b",
"text": "We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations, they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparcity are all based on fully connected neural network models and create sparcity during training phase, instead we explicitly define a sparse architectures of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on ”expander-like” properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. 1 ar X iv :1 70 6. 05 68 3v 1 [ cs .L G ] 1 8 Ju n 20 17",
"title": ""
},
{
"docid": "e5a18d6df921ab96da8e106cdb4eeac7",
"text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.",
"title": ""
},
{
"docid": "7e9dbc7f1c3855972dbe014e2223424c",
"text": "Speech disfluencies (filled pauses, repe titions, repairs, and false starts) are pervasive in spontaneous speech. The ab ility to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn’t rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription. Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.",
"title": ""
},
{
"docid": "7340866fa3965558e1571bcc5294b896",
"text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.",
"title": ""
},
{
"docid": "ad2546a681a3b6bcef689f0bb71636b5",
"text": "Data and computation integrity and security are major concerns for users of cloud computing facilities. Many production-level clouds optimistically assume that all cloud nodes are equally trustworthy when dispatching jobs; jobs are dispatched based on node load, not reputation. This increases their vulnerability to attack, since compromising even one node suffices to corrupt the integrity of many distributed computations. This paper presents and evaluates Hatman: the first full-scale, data-centric, reputation-based trust management system for Hadoop clouds. Hatman dynamically assesses node integrity by comparing job replica outputs for consistency. This yields agreement feedback for a trust manager based on EigenTrust. Low overhead and high scalability is achieved by formulating both consistency-checking and trust management as secure cloud computations; thus, the cloud's distributed computing power is leveraged to strengthen its security. Experiments demonstrate that with feedback from only 100 jobs, Hatman attains over 90% accuracy when 25% of the Hadoop cloud is malicious.",
"title": ""
}
] |
scidocsrr
|