title | abstract
---|---
Deep Learning based Large Scale Visual Recommendation and Search for E-Commerce | In this paper, we present a unified end-to-end approach to build a large scale Visual Search and Recommendation system for e-commerce. Previous works have targeted these problems in isolation. We believe a more effective and elegant solution could be obtained by tackling them together. We propose a unified Deep Convolutional Neural Network architecture, called VisNet, to learn embeddings to capture the notion of visual similarity, across several semantic granularities. We demonstrate the superiority of our approach for the task of image retrieval, by comparing against the state-of-the-art on the Exact Street2Shop [14] dataset. We then share the design decisions and trade-offs made while deploying the model to power Visual Recommendations across a catalog of 50M products, supporting 2K queries a second at Flipkart, India’s largest e-commerce company. The deployment of our solution has yielded a significant business impact, as measured by the conversion-rate. |
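The abstract does not spell out the training objective, but a common way to learn such visual-similarity embeddings is a hinge-style triplet loss. A minimal numpy sketch, assuming the embeddings already come out of the CNN (the batch values are made up):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss often used to learn visual-similarity
    embeddings: pull the positive toward the anchor, push the negative
    at least `margin` further away."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

# Toy batch of 128-d embeddings; in a VisNet-like system these would be
# CNN outputs for (query image, matching product, non-matching product).
rng = np.random.default_rng(0)
anchor, positive, negative = (rng.normal(size=(4, 128)) for _ in range(3))
print(triplet_loss(anchor, positive, negative))
```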
Treatment of isolated posterior coronal fracture of the lateral tibial plateau through posterolateral approach for direct exposure and buttress plate fixation | To present a case series of patients with isolated posterior coronal fractures of the lateral tibial plateau treated by direct exposure and buttress plate fixation through a posterolateral approach. Between May 2007 and April 2008, eight middle-aged patients were identified who had isolated posterior coronal fractures of the lateral tibial plateau. All eight patients underwent direct fracture exposure, reduction under visualization, and buttress plate fixation through a posterolateral approach. There was one case of split fracture, two cases of pure depression, and five cases of split-depression fractures. Four were associated with fibular head split fractures without common peroneal nerve injuries. Five patients were injured in a simple fall while riding an electric bicycle with the knee relaxed in the 90° position. The articular displacement (8 cases) measured on CT scan averaged 10.5 mm (range 8–15 mm). The cortical split length (from the articular rim to the distal tip, 6 cases) averaged 2.8 cm (range 2.4–3.5 cm). The articular reduction was perfect in seven (absolutely no step-off) and imperfect in one (<2 mm step-off) as measured by X-ray. With a mean follow-up of 10 months (6 cases > 12 months), the average range-of-motion arc was 119°; four patients had a flexion lag of 10°–20°. The average SMFA dysfunction score was 15.8, and the average HSS score was 98. All eight patients stated they were highly satisfied. The direct posterolateral approach, dividing the lateral border of the soleus muscle, provides excellent fracture reduction under visualization and buttress plate fixation for posterior coronal fracture of the lateral tibial plateau. Good functional results and recovery can be expected. |
Determinants of ERP implementation knowledge transfer | Our study examined the determinants of ERP knowledge transfer from implementation consultants (ICs) to key users (KUs), and vice versa. An integrated model was developed, positing that knowledge transfer was influenced by knowledge-, source-, recipient-, and transfer context-related aspects. Data to test this model were collected from 85 ERP-implementation projects of firms that were mainly located in Zhejiang province, China. The results of the analysis demonstrated that all four aspects had a significant influence on ERP knowledge transfer. Furthermore, the results revealed the mediating role of transfer activities and of the arduous relationship between ICs and KUs. The influence on knowledge transfer from the source’s willingness to transfer and the recipient’s willingness to accept knowledge was fully mediated by transfer activities, whereas the influence on knowledge transfer from the recipient’s ability to absorb knowledge was only partially mediated by transfer activities. The influence on knowledge transfer from communication capability (including encoding and decoding competence) was fully mediated by the arduous relationship. |
Identifying Customer Preferences about Tourism Products Using an Aspect-based Opinion Mining Approach | In this study we extend Bing Liu’s aspect-based opinion mining technique to apply it to the tourism domain. Using this extension, we also offer an approach for considering a new alternative to discover consumer preferences about tourism products, particularly hotels and restaurants, using opinions available on the Web as reviews. An experiment is also conducted, using hotel and restaurant reviews obtained from TripAdvisor, to evaluate our proposals. Results showed that tourism product reviews available on web sites contain valuable information about customer preferences that can be extracted using an aspect-based opinion mining approach. The proposed approach proved to be very effective in determining the sentiment orientation of opinions, achieving a precision and recall of 90%. However, on average, the algorithms were only capable of extracting 35% of the explicit aspect expressions. |
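As an illustration of the aspect-based idea (not the authors' full pipeline, which also mines aspects from the reviews themselves), here is a toy sketch that pairs each known aspect with its nearest opinion word; both lexicons below are made-up assumptions:

```python
# Minimal aspect-based sentiment sketch: orient each known aspect by the
# nearest opinion word in the sentence. Toy lexicons; the full technique
# additionally extracts aspects and handles negation, comparatives, etc.
ASPECTS = {"room", "breakfast", "service", "location"}
OPINION = {"great": 1, "clean": 1, "friendly": 1, "noisy": -1, "cold": -1}

def aspect_sentiments(sentence):
    tokens = sentence.lower().replace(",", " ").split()
    opinions = [(j, OPINION[t]) for j, t in enumerate(tokens) if t in OPINION]
    pairs = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS and opinions:
            # score the aspect by the closest opinion word
            _, polarity = min(opinions, key=lambda jw: abs(jw[0] - i))
            pairs[tok] = polarity
    return pairs

print(aspect_sentiments("The room was clean but the breakfast was cold"))
# -> {'room': 1, 'breakfast': -1}
```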
Randomized comparison of a polymer-free sirolimus-eluting stent versus a polymer-based paclitaxel-eluting stent in patients with diabetes mellitus: the LIPSIA Yukon trial. | OBJECTIVES
The objective of the study was to assess noninferiority of the polymer-free sirolimus-eluting Yukon Choice stent (Translumina GmbH, Hechingen, Germany) compared with the polymer-based Taxus Liberté stent (Boston Scientific, Natick, Massachusetts) with regard to the primary endpoint, in-stent late lumen loss, at 9 months in patients with diabetes mellitus.
BACKGROUND
The Yukon Choice stent has been evaluated in several previous randomized controlled trials, although to date no trial has exclusively enrolled patients with diabetes mellitus.
METHODS
Patients with diabetes mellitus undergoing percutaneous coronary intervention for clinically significant de novo coronary artery stenosis were randomized 1:1 to receive either the polymer-free sirolimus-eluting Yukon Choice stent or the polymer-based paclitaxel-eluting Taxus Liberté stent.
RESULTS
A total of 240 patients were randomized. Quantitative coronary angiography was available for 79% of patients. Mean in-stent late lumen loss was 0.63 ± 0.62 mm for the Yukon Choice stent and 0.45 ± 0.60 mm for the Taxus Liberté stent. Based on the pre-specified margin, the Yukon Choice stent failed to show noninferiority for the primary endpoint. During follow-up, there were no significant differences between groups regarding death, myocardial infarction, stent thrombosis, target lesion revascularization, target vessel revascularization, or nontarget vessel revascularization.
CONCLUSIONS
Compared with the Taxus Liberté stent, the polymer-free sirolimus-eluting Yukon Choice stent failed to show noninferiority with regard to the primary endpoint, in-stent late lumen loss, in patients with diabetes mellitus after 9-month follow-up. Both stents showed comparable clinical efficacy and safety. (Yukon Choice Versus Taxus Liberté in Diabetes Mellitus; NCT00368953). |
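For intuition, a noninferiority comparison of late lumen loss checks the one-sided upper confidence bound of the mean difference against the prespecified margin. The means and SDs below are from the abstract; the margin and the per-arm sample sizes are illustrative assumptions, not the trial's actual values:

```python
from math import sqrt
from scipy.stats import norm

# Illustrative noninferiority check on in-stent late lumen loss.
# Means/SDs are from the abstract; the margin and per-arm n are
# assumptions for this sketch (QCA was available for 79% of 240 patients).
m_yukon, sd_yukon, n_yukon = 0.63, 0.62, 95
m_taxus, sd_taxus, n_taxus = 0.45, 0.60, 95
margin = 0.16  # assumed noninferiority margin in mm

diff = m_yukon - m_taxus
se = sqrt(sd_yukon**2 / n_yukon + sd_taxus**2 / n_taxus)
upper = diff + norm.ppf(0.95) * se  # one-sided 95% upper bound

print(f"diff={diff:.2f} mm, upper bound={upper:.2f} mm")
print("noninferior" if upper < margin else "noninferiority not shown")
```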
Validation of a Spanish version of the Revised Fibromyalgia Impact Questionnaire (FIQR) | BACKGROUND
The Revised version of the Fibromyalgia Impact Questionnaire (FIQR) was published in 2009. The aim of this study was to prepare a Spanish version, and to assess its psychometric properties in a sample of patients with fibromyalgia.
METHODS
The FIQR was translated into Spanish and administered, along with the FIQ, the Hospital Anxiety Depression Scale (HADS), the 36-Item Short-Form Health Survey (SF-36), and the Brief Pain Inventory (BPI), to 113 Spanish fibromyalgia patients. The administration of the Spanish FIQR was repeated a week later.
RESULTS
The Spanish FIQR had high internal consistency (Cronbach’s α was 0.91 and 0.95 at visits 1 and 2, respectively). The test-retest reliability was good for the FIQR total score and its function and symptoms domains (intraclass correlation coefficient (ICC) > 0.70), but modest for the overall impact domain (ICC = 0.51). Statistically significant correlations (p < 0.05) were also found between the FIQR and the FIQ scores, as well as between the FIQR scores and the remaining scales’ scores.
CONCLUSIONS
The Spanish version of the FIQR has good internal consistency and our findings support its validity for assessing fibromyalgia patients. It may be a valid instrument for use in both clinical and research settings. |
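For reference, the internal-consistency statistic reported above is computed directly from the item-score matrix. A minimal numpy sketch on simulated data (113 respondents as in the study; the 21 items and their values are assumptions):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# 113 simulated respondents answering 21 items that all load on one
# latent trait, so alpha should come out high (~0.99 here).
rng = np.random.default_rng(1)
latent = rng.normal(size=(113, 1))
scores = latent + 0.5 * rng.normal(size=(113, 21))
print(round(cronbach_alpha(scores), 2))
```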
ICWSM - A Great Catchy Name: Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews | Sarcasm is a sophisticated form of speech act widely used in online communities. Automatic recognition of sarcasm is, however, a novel task. Sarcasm recognition could contribute to the performance of review summarization and ranking systems. This paper presents SASI, a novel Semi-supervised Algorithm for Sarcasm Identification that recognizes sarcastic sentences in product reviews. SASI has two stages: semi-supervised pattern acquisition, and sarcasm classification. We experimented on a data set of about 66,000 Amazon reviews for various books and products. Using a gold standard in which each sentence was tagged by three annotators, we obtained precision of 77% and recall of 83.1% for identifying sarcastic sentences. We found some strong features that characterize sarcastic utterances. However, a combination of more subtle pattern-based features proved more promising in identifying the various facets of sarcasm. We also speculate on the motivation for using sarcasm in online communities and social networks. |
Dynamic and Static Prototype Vectors for Semantic Composition | Compositional Distributional Semantic methods model the distributional behavior of a compound word by exploiting the distributional behavior of its constituent words. In this setting, a constituent word is typically represented by a feature vector conflating all the senses of that word. However, not all the senses of a constituent word are relevant when composing the semantics of the compound. In this paper, we present two different methods for selecting the relevant senses of constituent words. The first one is based on Word Sense Induction and creates static multi-prototype vectors representing the senses of a constituent word. The second creates a single dynamic prototype vector for each constituent word based on the distributional properties of the other constituents in the compound. We use these prototype vectors for composing the semantics of noun-noun compounds and evaluate on a compositionality-based similarity task. Our results show that: (1) selecting relevant senses of the constituent words leads to a better semantic composition of the compound, and (2) dynamic prototypes perform better than static prototypes. |
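A toy sketch of the sense-selection idea: pick the sense vector of one constituent that best matches its co-constituent, then compose. The max-similarity selection and the vectors below are simplifying assumptions, not the paper's exact dynamic-prototype construction:

```python
import numpy as np

def compose(senses_w1, vec_w2):
    """Select the sense of word 1 most similar to its co-constituent,
    then compose the compound by vector addition (a simplification of
    the paper's dynamic-prototype construction)."""
    sims = senses_w1 @ vec_w2 / (
        np.linalg.norm(senses_w1, axis=1) * np.linalg.norm(vec_w2))
    relevant = senses_w1[np.argmax(sims)]
    return relevant + vec_w2

senses_bank = np.array([[0.9, 0.1, 0.0],   # "bank" as institution
                        [0.0, 0.2, 0.9]])  # "bank" as riverside
vec_account = np.array([0.8, 0.3, 0.1])
print(compose(senses_bank, vec_account))   # institution sense is selected
```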
Graph Matching: Theoretical Foundations, Algorithms, and Applications | Graphs are a powerful and versatile tool useful in various subfields of science and engineering. In many applications, for example, in pattern recognition and computer vision, it is required to measure the similarity of objects. When graphs are used for the representation of structured objects, then the problem of measuring object similarity turns into the problem of computing the similarity of graphs, which is also known as graph matching. In this paper, similarity measures on graphs and related algorithms will be reviewed. Applications of graph matching will be demonstrated, giving examples from the fields of pattern recognition and computer vision. Recent theoretical work showing various relations between different similarity measures will also be discussed. |
Backpropagation Through Time: What It Does and How to Do It | Backpropagation is now the most widely used tool in the field of artificial neural networks. At the core of backpropagation is a method for calculating derivatives exactly and efficiently in any large system made up of elementary subsystems or calculations which are represented by known, differentiable functions; thus, backpropagation has many applications which do not involve neural networks as such. This paper first reviews basic backpropagation, a simple method which is now being widely used in areas like pattern recognition and fault diagnosis. Next, it presents the basic equations for backpropagation through time, and discusses applications to areas like pattern recognition involving dynamic systems, system identification, and control. Finally, it describes further extensions of this method, to deal with systems other than neural networks, systems involving simultaneous equations or true recurrent networks, and other practical issues which arise with this method. Pseudocode is provided to clarify the algorithms. The chain rule for ordered derivatives, the theorem which underlies backpropagation, is briefly discussed. |
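The paper provides pseudocode; as a concrete companion, here is a minimal numpy sketch of backpropagation through time on a vanilla tanh RNN with a squared-error loss at the final step (an illustrative reduction, not the paper's general ordered-derivative formulation):

```python
import numpy as np

def bptt_step(xs, y, Wx, Wh, Wy, lr=0.01):
    """One gradient step of backpropagation through time on a vanilla RNN
    with tanh hidden units and squared error at the final step."""
    T, h_dim = len(xs), Wh.shape[0]
    hs = [np.zeros(h_dim)]
    for t in range(T):                      # forward pass, storing states
        hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1]))
    y_hat = Wy @ hs[-1]
    dWx, dWh, dWy = (np.zeros_like(W) for W in (Wx, Wh, Wy))
    dy = y_hat - y                          # dL/dy for L = 0.5*||y_hat-y||^2
    dWy += np.outer(dy, hs[-1])
    dh = Wy.T @ dy                          # gradient flowing into h_T
    for t in reversed(range(T)):            # backward pass through time
        dz = dh * (1.0 - hs[t + 1] ** 2)    # through the tanh nonlinearity
        dWx += np.outer(dz, xs[t])
        dWh += np.outer(dz, hs[t])
        dh = Wh.T @ dz                      # pass gradient to h_{t-1}
    for W, dW in ((Wx, dWx), (Wh, dWh), (Wy, dWy)):
        W -= lr * dW                        # in-place gradient step
    return 0.5 * float(dy @ dy)

rng = np.random.default_rng(0)
Wx, Wh, Wy = (0.1 * rng.normal(size=s) for s in ((8, 3), (8, 8), (2, 8)))
xs, y = rng.normal(size=(5, 3)), np.array([1.0, -1.0])
for epoch in range(200):
    loss = bptt_step(xs, y, Wx, Wh, Wy)
print(f"final loss: {loss:.4f}")  # should decrease toward 0
```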
An Analysis of the Privacy and Security Risks of Android VPN Permission-enabled Apps | Millions of users worldwide resort to mobile VPN clients to either circumvent censorship or to access geo-blocked content, and more generally for privacy and security purposes. In practice, however, users have little if any guarantees about the corresponding security and privacy settings, and perhaps no practical knowledge about the entities accessing their mobile traffic.
In this paper we provide a first comprehensive analysis of 283 Android apps that use the Android VPN permission, which we extracted from a corpus of more than 1.4 million apps on the Google Play store. We perform a number of passive and active measurements designed to investigate a wide range of security and privacy features and to study the behavior of each VPN-based app. Our analysis includes investigation of possible malware presence, third-party library embedding, and traffic manipulation, as well as gauging user perception of the security and privacy of such apps. Our experiments reveal several instances of VPN apps that expose users to serious privacy and security vulnerabilities, such as use of insecure VPN tunneling protocols, as well as IPv6 and DNS traffic leakage. We also report on a number of apps actively performing TLS interception. Of particular concern are instances of apps that inject JavaScript programs for tracking, advertising, and for redirecting e-commerce traffic to external partners. |
Deeply Improved Sparse Coding for Image Super-Resolution | Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality. |
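The sparse-coding inference that such networks unfold is the classical ISTA recurrence; the sketch below shows the iteration itself, with fixed operators (whereas a learned, cascaded network would train them). The dictionary and signal are synthetic:

```python
import numpy as np

def ista(D, x, lam=0.05, n_iter=200):
    """Plain ISTA for min_z 0.5*||x - D z||^2 + lam*||z||_1. Networks in
    the LISTA family unfold exactly this recurrence into a fixed number
    of layers; here the operators stay fixed."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of D^T D
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        r = z - (D.T @ (D @ z - x)) / L       # gradient step
        z = np.sign(r) * np.maximum(np.abs(r) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 64)); D /= np.linalg.norm(D, axis=0)
z_true = np.zeros(64); z_true[[3, 17, 40]] = [1.0, -0.8, 0.5]
z_hat = ista(D, D @ z_true)
print(np.nonzero(np.round(z_hat, 2))[0])      # mostly the true support
```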
Got Flow?: Using Machine Learning on Physiological Data to Classify Flow | As information technologies (IT) are both drivers of highly engaging experiences and sources of disruptions at work, the phenomenon of flow - defined as "the holistic sensation that people feel when they act with total involvement" [5, p. 36] - has been suggested as a promising vehicle to understand and enhance user behavior. Despite the growing relevance of flow at work, contemporary measurement approaches to flow are subjective and retrospective in nature, limiting our possibilities to investigate and support flow in a reliable and timely manner. Hence, we require objective and real-time classification of flow. To address this issue, this article combines recent theoretical considerations from psychology and experimental research on the physiology of flow with machine learning (ML). The overall aim is to build classifiers to distinguish flow states (i.e., low and high flow). Our results indicate that flow classifiers can be derived from physiological signals. Cardiac features seem to play an important role in this process, resulting in an accuracy of 72.3%. Our findings may serve as a foundation for future work aiming to build flow-aware IT systems. |
Graph Matching With a Dual-Step EM Algorithm | This paper describes a new approach to matching geometric structure in 2D point-sets. The novel feature is to unify the tasks of estimating transformation geometry and identifying point-correspondence matches. Unification is realized by constructing a mixture model over the bipartite graph representing the correspondence match and by effecting optimization using the EM algorithm. According to our EM framework, the probabilities of structural correspondence gate contributions to the expected likelihood function used to estimate maximum likelihood transformation parameters. These gating probabilities measure the consistency of the matched neighborhoods in the graphs. The recovery of transformational geometry and hard correspondence matches are interleaved and are realized by applying coupled update operations to the expected log-likelihood function. In this way, the two processes bootstrap one another. This provides a means of rejecting structural outliers. We evaluate the technique on two real-world problems. The first involves the matching of different perspective views of 3.5-inch floppy discs. The second example is furnished by the matching of a digital map against aerial images that are subject to severe barrel distortion due to a line-scan sampling process. We complement these experiments with a sensitivity study based on synthetic data. |
UWB Bowtie Slot Antenna for Breast Cancer Detection | UWB is a very attractive technology for many applications. It provides many advantages such as fine resolution and high power efficiency. Our interest in the current study is the use of the UWB radar technique in microwave medical imaging systems, especially for early breast cancer detection. The Federal Communications Commission (FCC) has allocated the frequency band of 3.1 to 10.6 GHz for this purpose. In this paper we propose a UWB Bowtie slot antenna with enhanced bandwidth. Effects of varying the geometry of the antenna on its performance and bandwidth are studied. The proposed antenna is simulated in CST Microwave Studio. Details of the antenna design and simulation results, such as return loss and radiation patterns, are discussed in this paper. The final antenna structure exhibits good UWB characteristics and surpasses the bandwidth requirements. Keywords—Ultra Wide Band (UWB), microwave imaging system, Bowtie antenna, return loss, impedance bandwidth enhancement. |
Lycopene: chemistry, biology, and implications for human health and disease. | A diet rich in carotenoid-containing foods is associated with a number of health benefits. Lycopene provides the familiar red color to tomato products and is one of the major carotenoids in the diet of North Americans and Europeans. Interest in lycopene is growing rapidly following the recent publication of epidemiologic studies implicating lycopene in the prevention of cardiovascular disease and cancers of the prostate or gastrointestinal tract. Lycopene has unique structural and chemical features that may contribute to specific biological properties. Data concerning lycopene bioavailability, tissue distribution, metabolism, excretion, and biological actions in experimental animals and humans are beginning to accumulate although much additional research is necessary. This review will summarize our knowledge in these areas as well as the associations between lycopene consumption and human health. |
Annual report: surveillance of adverse events following immunisation in Australia, 2008. | This report summarises Australian passive surveillance data for adverse events following immunisation (AEFI) reported to the Therapeutic Goods Administration (TGA) for 2008, and describes reporting trends over the 9-year period 2000 to 2008. There were 1,542 AEFI records for vaccines administered in 2008. This was an annual AEFI reporting rate of 7.2 per 100,000 population, a 5% decrease compared with 2007. The majority of AEFI reports described non-serious events while 10% (n = 152) were classified as serious. Two deaths temporally associated with immunisation were reported; there was no evidence to suggest a causal association. The most commonly reported reactions were injection site reaction, allergic reaction, fever and headache. AEFI reporting rates in 2008 were 2.7 events per 100,000 administered doses of influenza vaccine for adults aged ≥18 years, 18.9 per 100,000 administered doses of pneumococcal polysaccharide vaccine for those aged ≥65 years, and 17.2 per 100,000 administered doses of scheduled vaccines for children aged <7 years. Reports for infants increased in 2008, mainly related to gastrointestinal system events temporally associated with receipt of rotavirus vaccine in the 1st full year of the rotavirus immunisation program, while there was a substantial decrease in AEFI reports for human papilloma virus vaccine in adolescents compared with 2007 when the program commenced. Increases in reports in children and adults were also partly attributed to the implementation of enhanced passive surveillance in Victoria. The consistently low reporting rate of serious AEFI highlights the safety of vaccines in Australia and illustrates the value of the national TGA database as a surveillance tool for monitoring AEFIs nationally. |
Emotion Classification Using Massive Examples Extracted from the Web | In this paper, we propose a data-oriented method for inferring the emotion of a speaker conversing with a dialog system from the semantic content of an utterance. We first fully automatically obtain a huge collection of emotion-provoking event instances from the Web. With Japanese chosen as a target language, about 1.3 million emotion-provoking event instances are extracted using an emotion lexicon and lexical patterns. We then decompose the emotion classification task into two sub-steps: sentiment polarity classification (coarse-grained emotion classification), and emotion classification (fine-grained emotion classification). For each subtask, the collection of emotion-provoking event instances is used as labelled examples to train a classifier. The results of our experiments indicate that our method significantly outperforms the baseline method. We also find that compared with the single-step model, which applies the emotion classifier directly to inputs, our two-step model significantly reduces sentiment polarity errors, which are considered fatal errors in real dialog applications. |
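A minimal sketch of the two-step scheme with scikit-learn, using toy English stand-ins for the Web-mined Japanese event instances (the texts, labels, and classifier choice are all illustrative assumptions):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1 predicts coarse polarity; step 2 predicts a fine-grained emotion
# using a classifier trained only on examples of that polarity.
texts = ["passed the exam", "won the lottery", "missed the train",
         "lost my wallet", "met an old friend", "broke my phone"]
polarity = ["pos", "pos", "neg", "neg", "pos", "neg"]
emotion = ["joy", "joy", "frustration", "sadness", "relief", "frustration"]

step1 = make_pipeline(CountVectorizer(), LogisticRegression()).fit(texts, polarity)
step2 = {p: make_pipeline(CountVectorizer(), LogisticRegression()).fit(
            [t for t, q in zip(texts, polarity) if q == p],
            [e for e, q in zip(emotion, polarity) if q == p])
         for p in ("pos", "neg")}

utterance = ["missed my flight"]
coarse = step1.predict(utterance)[0]           # polarity first
print(coarse, step2[coarse].predict(utterance))  # then fine-grained emotion
```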
Difficulty Adjustable and Scalable Constrained Multi-objective Test Problem Toolkit | Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems contain a number of constraints. To promote research on constrained multi-objective optimization problems (CMOPs), we first propose three primary types of difficulty, reflecting the challenges in real-world optimization problems, to characterize the constraint functions in CMOPs: feasibility-hardness, convergence-hardness, and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable CMOPs with three types of parameterized constraint functions according to the proposed difficulty types. Combining the three primary constraint functions with different parameters yields a large variety of CMOPs, whose difficulty is uniquely defined by a triplet, with each parameter specifying the level of one primary difficulty type. Furthermore, the number of objectives in this toolkit is able to scale beyond two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs, MOEA/D-CDP and NSGA-II-CDP, are adopted to test their performance on DAS-CMOP1-9 with different difficulty triplets. The experimental results demonstrate that neither of them can solve these problems efficiently, which stimulates the development of new constrained MOEAs to solve the suggested DAS-CMOPs. |
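The exact DAS-CMOP1-9 definitions are given in the paper; the sketch below only mimics the idea of a difficulty triplet, with each parameter dialing one primary difficulty type of an illustrative two-objective CMOP:

```python
import numpy as np

def dascmop_like(x, eta=0.1, zeta=0.1, gamma=0.5):
    """Illustrative two-objective CMOP with a difficulty triplet
    (eta, zeta, gamma): eta shrinks the feasible region
    (feasibility-hardness), zeta pushes feasibility away from the
    unconstrained front (convergence-hardness), and gamma carves the
    front into disconnected feasible segments (diversity-hardness).
    A point is feasible iff every c_i <= 0."""
    f1 = float(x[0])
    f2 = 1.0 - np.sqrt(f1) + float(np.sum(x[1:] ** 2))
    c1 = np.sin(20 * np.pi * f1) - gamma      # diversity: feasible bands
    c2 = (f1 + f2) - (1.0 + zeta)             # convergence: offset front
    c3 = eta - f1 * f2                        # feasibility: shrink region
    return (f1, f2), (c1, c2, c3)

x = np.array([0.3, 0.05, 0.02])
objs, cons = dascmop_like(x)
print(objs, cons, "feasible:", all(c <= 0 for c in cons))
```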
LibGuides: Science - GeoMaths Library - Environmental Studies: Home | Environmental science is the study of how the different elements of the environment interact: the biological, the chemical, and the physical. |
Growth at puberty. | Somatic growth and maturation are influenced by a number of factors that act independently or in concert to modify an individual's genetic potential. The secular trend in height and adolescent development is further evidence for the significant influence of environmental factors on an individual's genetic potential for linear growth. Nutrition, including energy and specific nutrient intake, is a major determinant of growth. Paramount to normal growth is the general health and well-being of an individual; in fact, normal growth is a strong testament to the overall good health of a child. More recently the effect of physical activity and fitness on linear growth, especially among teenage athletes, has become a topic of interest. Puberty is a dynamic period of development marked by rapid changes in body size, shape, and composition, all of which are sexually dimorphic. One of the hallmarks of puberty is the adolescent growth spurt. Body compositional changes, including the regional distribution of body fat, are especially large during the pubertal transition and markedly sexually dimorphic. The hormonal regulation of the growth spurt and the alterations in body composition depend on the release of the gonadotropins, leptin, the sex-steroids, and growth hormone. It is very likely that interactions among these hormonal axes are more important than their main effects, and that alterations in body composition and the regional distribution of body fat actually are signals to alter the neuroendocrine and peripheral hormone axes. These processes are merely magnified during pubertal development but likely are pivotal all along the way from fetal growth to the aging process. |
Escitalopram in the acute treatment of depressed patients aged 60 years or older. | OBJECTIVE
The present study examined the efficacy and tolerability of acute escitalopram treatment in depressed patients aged 60 years or older.
METHODS
Patients aged ≥60 years with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition major depressive disorder were randomized to 12 weeks of double-blind, flexible-dose treatment with escitalopram (10-20 mg/day; N = 130) or placebo (N = 134). The prospectively defined primary efficacy end point was change from baseline to week 12 in Montgomery-Asberg Depression Rating Scale (MADRS) total score using the last observation carried forward approach.
RESULTS
A total of 109 (81%) patients in the placebo group and 96 (74%) patients in the escitalopram group completed treatment. Mean age in both groups was approximately 68 years. Mean baseline MADRS scores were 28.4 and 29.4 for the placebo and escitalopram treatment groups, respectively. Escitalopram did not achieve statistical significance compared with placebo in change from baseline on the MADRS (least square mean difference: -1.34; last observation carried forward). Discontinuation rates resulting from adverse events were 6% for placebo and 11% for escitalopram. Treatment-emergent adverse events reported by >10% of patients in the escitalopram group were headache, nausea, diarrhea, and dry mouth.
CONCLUSIONS
Escitalopram treatment was not significantly different from placebo treatment on the primary efficacy measure, change from baseline to week 12 in MADRS. In patients aged 60 years or older with major depression, acute escitalopram treatment appeared to be well tolerated. |
To offload or not to offload: An efficient code partition algorithm for mobile cloud computing | A new class of cognition-augmenting applications such as face recognition or natural language processing is emerging for mobile devices. This kind of application is computation- and power-intensive, and a cloud infrastructure offers great potential to facilitate code execution. Since these applications usually consist of many composable components, finding the optimal execution layout in real time is difficult. In this paper, we propose an efficient code partition algorithm for mobile code offloading. Our algorithm is based on the observation that when a method is offloaded, the subsequent invocations will be offloaded with high probability. Unlike current approaches, which make an individual decision for each component, our algorithm finds the offloading and integration points on a sequence of calls by depth-first search and a linear-time searching scheme. Experimental results show that, compared with the 0-1 Integer Linear Programming solver, our algorithm runs 2 orders of magnitude faster with more than 90% partition accuracy. |
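To make the sequence intuition concrete, here is a linear-time dynamic program over a linear call sequence in which switching sides costs a fixed transfer penalty, so consecutive calls tend to stay together. An illustrative sketch with made-up costs, not the authors' exact DFS-based algorithm:

```python
def partition(local, remote, transfer):
    """Minimum-cost placement of a linear call sequence. local[i]/remote[i]
    are the costs of running call i on the device/in the cloud; `transfer`
    is paid whenever execution switches sides between consecutive calls."""
    best = {"L": (local[0], "L"), "R": (remote[0], "R")}
    for i in range(1, len(local)):
        new = {}
        for side, run_cost in (("L", local[i]), ("R", remote[i])):
            other = "R" if side == "L" else "L"
            stay, switch = best[side][0], best[other][0] + transfer
            prev = side if stay <= switch else other
            new[side] = (min(stay, switch) + run_cost, best[prev][1] + side)
        best = new
    return min(best.values())

# Four consecutive calls; the middle two are much cheaper in the cloud.
print(partition(local=[1, 9, 8, 2], remote=[4, 2, 1, 5], transfer=3))
# -> (12, 'RRRL'): offload a contiguous run, then integrate back locally
```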
Learning from the Web: Extracting General World Knowledge from Noisy Text | The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources. |
Hardware Support for Non-photorealistic Rendering | Special features, such as ridges, valleys and silhouettes, of a polygonal scene are usually displayed by explicitly identifying and then rendering 'edges' for the corresponding geometry. The candidate edges are identified using the connectivity information, which requires preprocessing of the data. We present a non-obvious but surprisingly simple-to-implement technique to render such features without connectivity information or preprocessing. At the hardware level, based only on the vertices of a given flat polygon, we introduce new polygons, with appropriate color, shape and orientation, so that they eventually appear as special features. |
Keep it Unreal: Bridging the Realism Gap for 2.5D Recognition with Geometry Priors Only | With the increasing availability of large databases of 3D CAD models, methods for depth-based recognition of localized objects can be trained on an uncountable number of synthetically rendered images. However, discrepancies with the real data acquired from various depth sensors still noticeably impede progress. Previous works adopted unsupervised approaches to generate more realistic depth data, but they all require real scans for training, even if unlabeled. This still represents a strong requirement, especially when considering real-life/industrial settings where real training images are hard or impossible to acquire, but texture-less 3D models are available. We thus propose a novel approach leveraging only CAD models to bridge the realism gap. Purely trained on synthetic data, playing against an extensive augmentation pipeline in an unsupervised manner, our generative adversarial network learns to effectively segment depth images and recover the clean synthetic-looking depth information even from partial occlusions. As our solution is not only fully decoupled from the real domains but also from the task-specific analytics, the pre-processed scans can be handed to any kind and number of recognition methods also trained on synthetic data. Through various experiments, we demonstrate how this simplifies their training and consistently enhances their performance, with results on par with the same methods trained on real data, and better than usual approaches doing the reverse mapping. |
Short- and long-term outcomes of endoscopic submucosal dissection for early gastric cancer in elderly patients aged 75 years and older | Only a few studies have reported long-term outcomes for endoscopic submucosal dissection (ESD) of early gastric cancer (EGC) in elderly patients. The aim of this study was to evaluate the efficacy of ESD for EGC in elderly patients ≥75 years with respect to both short- and long-term outcomes. We reviewed the clinical data of elderly patients ≥75 years who had undergone ESD for EGC at Tonan Hospital from January 2003 to May 2010. A total of 177 consecutive patients, including 145 with curative resection (CR) and 32 with noncurative resection (non-CR), were examined. Of the 32 patients with non-CR, 15 underwent additional surgery, and lymph node metastases were found in 3 patients. The remaining 17 patients were followed without additional surgery because of advanced age or poor general condition. Procedure-related complications, such as post-ESD bleeding, perforation and pneumonia, were within the acceptable range. The 5-year survival rates of patients with CR, those with additional surgery after non-CR, and those without additional surgery after non-CR were 84.6, 73.3, and 58.8 %, respectively. No deaths were attributable to the original gastric cancer; patients succumbed to other illnesses, including malignancy and respiratory disease. In elderly patients, ESD is an acceptable treatment for EGC in terms of both short- and long-term outcomes. Careful clinical assessment of elderly patients is necessary before ESD. After ESD, medical follow-up is important so that other malignancies and diseases that affect the elderly are not overlooked. |
Hybrid crossbar architecture for a memristor based memory | This paper describes a new memristor crossbar architecture that is proposed for use in a high density cache design. This design has less than 10% of the write energy consumption of a simple memristor crossbar. Also, it has up to 4 times the bit density of an STT-MRAM system and up to 11 times the bit density of an SRAM architecture. The proposed architecture is analyzed using a detailed SPICE analysis that accounts for the resistance of the wires in the memristor structure. Additionally, the memristor model used in this work has been matched to specific device characterization data to provide accurate results in terms of energy, area, and timing. |
Dance motion pattern planning for K. Mei as dancing humanoid robot | As a continuation of our previous research on a dancing humanoid robot with 33 degrees of freedom, in this paper we describe how to build dance motion pattern planning for Indonesian traditional dance movements across the full set of degrees of freedom. We discuss the primitive poses that commonly occur in traditional dance and build the transition patterns between these poses. The motion pattern method between poses is based on the robot's ability to reach the zero moment point position, and a system to synchronize timing for the dance motion is also built. In this research the zero moment point is our central concern, because the worst outcome in humanoid research is a robot that cannot maintain its balance. The computation ensures that the zero moment point stays within the support polygon area. |
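The balance criterion at the heart of this planning is the zero moment point. Under the common multi-mass approximation (angular-momentum terms neglected), its sagittal position is computed as below; the link masses and accelerations in the example are made up:

```python
import numpy as np

def zmp_x(m, x, z, ddx, ddz, g=9.81):
    """Sagittal zero-moment-point of a multi-mass model, the quantity a
    dance planner must keep inside the support polygon. m: link masses;
    x, z: link positions; ddx, ddz: link accelerations."""
    num = np.sum(m * ((ddz + g) * x - ddx * z))
    den = np.sum(m * (ddz + g))
    return num / den

# Two-link toy example: accelerating the upper link forward shifts the ZMP.
m = np.array([30.0, 10.0])
x, z = np.array([0.00, 0.10]), np.array([0.8, 1.2])
ddx, ddz = np.array([0.0, 0.5]), np.array([0.0, 0.0])
print(round(zmp_x(m, x, z, ddx, ddz), 3))  # ZMP x-position in metres
```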
Push and rotate: cooperative multi-agent path planning | In cooperative multi-agent path planning, agents must move between start and destination locations and avoid collisions with each other. Many recent algorithms require some sort of restriction in order to be complete, except for the Push and Swap algorithm [7], which claims only to require two unoccupied locations in a connected graph. Our analysis shows, however, that for certain types of instances Push and Swap may fail to find a solution. We present the Push and Rotate algorithm, an adaptation of the Push and Swap algorithm, and prove that by fixing the latter’s shortcomings, we obtain an algorithm that is complete for the class of instances with two unoccupied locations in a connected graph. In addition, we provide experimental results that show our algorithm to perform competitively on a set of benchmark problems from the video game industry. |
Single View Stereo Matching | Previous monocular depth estimation methods take a single view and directly regress the expected results. Though recent advances have been made by applying geometrically inspired loss functions during training, the inference procedure does not explicitly impose any geometric constraint. Therefore these models rely purely on the quality of the data and the effectiveness of learning to generalize. This either leads to suboptimal results or demands a huge amount of expensive ground-truth labelled data to generate reasonable results. In this paper, we show for the first time that the monocular depth estimation problem can be reformulated as two sub-problems, a view synthesis procedure followed by stereo matching, with two intriguing properties: i) geometric constraints can be explicitly imposed during inference; ii) the demand for labelled depth data can be greatly alleviated. We show that the whole pipeline can still be trained in an end-to-end fashion and that this new formulation plays a critical role in advancing the performance. The resulting model outperforms all the previous monocular depth estimation methods as well as the stereo block matching method on the challenging KITTI dataset while using only a small number of real training samples. The model also generalizes well to other monocular depth estimation benchmarks. We also discuss the implications and the advantages of solving monocular depth estimation using stereo methods. |
Do Neighborhoods Generate Fear of Crime? An Empirical Test Using the British Crime Survey | Criminologists have long contended that neighborhoods are important determinants of how individuals perceive their risk of criminal victimization. Yet, despite the theoretical importance and policy-relevance of these claims, the empirical evidence-base is surprisingly thin and inconsistent. Drawing on data from a national probability sample of individuals, linked to independent measures of neighborhood demographic characteristics, visual signs of physical disorder, and reported crime, we test four hypotheses about the mechanisms through which neighborhoods influence fear of crime. Our large sample size, analytical approach and the independence of our empirical measures enable us to overcome some of the limitations that have hampered much previous research into this question. We find that neighborhood structural characteristics, visual signs of disorder, and recorded crime all have direct and independent effects on individual-level fear of crime. Additionally, we demonstrate that individual differences in fear of crime are strongly moderated by neighborhood socioeconomic characteristics; between-group differences in expressed fear of crime are both exacerbated and ameliorated by the characteristics of the areas in which people live. |
Special issue on scheduling and timing analysis for advanced real-time systems | Real-time embedded systems such as those found in automotive, aerospace, and other domains are characterised not only by the need for functional correctness, but also the need for temporal or timing correctness. Typically they continually monitor and respond to stimuli from the environment and the physical systems that they control. In order for such systems to behave correctly, they must not only execute the correct computations, but also do so within predefined time constraints or deadlines on the elapsed time between a stimulus and the corresponding response. Advances in software development methods, hardware technology, and the need to focus on energy efficiency pose significant challenges to the analysis of the overall timing behaviour of real-time systems. With hardware technology continuing to scale through the sub-micron to the deep sub-micron domain (e.g., 35–10 nm), factors such as the increased probability of circuit failure causing permanent faults, increased power consumption, and problems of heat dissipation come to the fore. Further, increasing hardware capability has enabled significant increases in the volume and complexity of the software deployed in real-time applications; a trend that is set to continue. As a way of managing this complexity, software is often now produced via Model Based Design methods complemented by automatic code generation. Such an approach poses a severe challenge to the effectiveness of current worst-case execution time analysis techniques, but also opens up opportunities for tracking software |
Joint Dictionary Learning for Example-based Image Super-resolution | In this paper, we propose a new joint dictionary learning method for example-based image super-resolution (SR), using sparse representation. The low-resolution (LR) dictionary is trained from a set of LR sample image patches. Using the sparse representation coefficients of these LR patches over the LR dictionary, the high-resolution (HR) dictionary is trained by minimizing the reconstruction error of the HR sample patches. The error criterion used here is the mean square error. In this way we guarantee that the HR patches have the same sparse representation over the HR dictionary as the LR patches over the LR dictionary, and at the same time, these sparse representations can well reconstruct the HR patches. Simulation results show the effectiveness of our method compared to state-of-the-art SR algorithms. |
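A compact sketch of this training recipe: learn the LR dictionary and sparse codes, then fit the HR dictionary by least squares so the same codes reconstruct the HR patches. Random patches stand in for real image pairs, and scikit-learn's DictionaryLearning stands in for whatever sparse coder is actually used:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X_lr = rng.normal(size=(200, 25))    # 200 LR patches (5x5, flattened)
X_hr = rng.normal(size=(200, 100))   # corresponding HR patches (10x10)

# Stage 1: sparse-code the LR patches over a learned LR dictionary.
dl = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20,
                        random_state=0).fit(X_lr)
A = dl.transform(X_lr)               # sparse codes, shape (200, 64)

# Stage 2: HR dictionary minimising ||X_hr - A D_hr||_F^2 in closed form.
D_hr = np.linalg.pinv(A) @ X_hr
print("HR reconstruction MSE:", np.mean((A @ D_hr - X_hr) ** 2))
```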
An FPGA-Based Fully Synchronized Design of a Bilateral Filter for Real-Time Image Denoising | In this paper, a detailed description of a synchronous field-programmable gate array implementation of a bilateral filter for image processing is given. The bilateral filter is chosen for one unique reason: it reduces noise while preserving details. The design is described at register-transfer level. The distinctive feature of our design concept consists of changing the clock domain in such a manner that kernel-based processing is possible, which means the processing of the entire filter window in one pixel clock cycle. This feature of the kernel-based design is supported by the arrangement of the input data into groups so that the internal clock of the design is a multiple of the pixel clock given by a targeted system. Additionally, by exploiting the separability and the symmetry of one filter component, the complexity of the design is widely reduced. Combining these features, the bilateral filter is implemented as a highly parallelized pipeline structure with very economical and effective utilization of dedicated resources. Due to the modularity of the filter design, kernels of different sizes can be implemented with low effort using our design and the given instructions for scaling. As the original form of the bilateral filter with no approximations or modifications is implemented, the resulting image quality depends on the chosen filter parameters only. Due to the quantization of the filter coefficients, only negligible quality loss is introduced. |
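For reference, the mathematics being implemented: each output pixel is a normalized sum over its window, weighted by spatial closeness and intensity similarity (the edge-preserving part). A straightforward software sketch, as opposed to the paper's kernel-per-pixel-clock hardware pipeline:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Reference bilateral filter: weights combine a fixed spatial Gaussian
    with a per-pixel intensity (range) Gaussian, so strong edges are kept."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed kernel
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            wts = spatial * rng_w
            out[i, j] = np.sum(wts * win) / np.sum(wts)
    return out

noisy = np.clip(np.eye(8) + 0.1 * np.random.default_rng(0).normal(size=(8, 8)), 0, 1)
print(bilateral(noisy).round(2))  # diagonal edge survives, noise is smoothed
```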
Metaheuristic Algorithms for Convolution Neural Network | A typical modern optimization technique is usually either heuristic or metaheuristic. These techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolutional neural networks (CNNs), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performance of these metaheuristic methods in optimizing CNNs on classifying the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (by up to 7.14 percent). |
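Of the three metaheuristics, simulated annealing is the easiest to sketch: perturb the parameter vector, always accept improvements, accept worsenings with probability exp(-Δ/T), and cool T. The quadratic stand-in loss below replaces an actual CNN training loss:

```python
import numpy as np

def simulated_annealing(loss, w0, t0=1.0, cooling=0.99, steps=2000, seed=0):
    """Generic SA loop over a parameter vector: downhill moves are always
    accepted, uphill moves with probability exp(-delta/T)."""
    rng = np.random.default_rng(seed)
    w, cur, t = w0.copy(), loss(w0), t0
    for _ in range(steps):
        cand = w + 0.1 * rng.normal(size=w.shape)   # neighbour solution
        delta = loss(cand) - cur
        if delta < 0 or rng.random() < np.exp(-delta / t):
            w, cur = cand, cur + delta              # accept the move
        t *= cooling                                # cooling schedule
    return w, cur

# Stand-in quadratic loss; in the paper this would be CNN training error.
target = np.array([1.0, -2.0, 0.5])
w, final_loss = simulated_annealing(lambda w: np.sum((w - target) ** 2),
                                    np.zeros(3))
print(w.round(2), round(final_loss, 4))  # approaches the target weights
```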
Incorporating appraisal expression patterns into topic modeling for aspect and sentiment word identification | http://dx.doi.org/10.1016/j.knosys.2014.02.003 0950-7051/ 2014 Elsevier B.V. All rights reserved. ⇑ Corresponding author. Address: Zhejiang University, Hangzhou 310027, China. Tel.: +86 571 87951453. E-mail addresses: [email protected] (X. Zheng), [email protected] (Z. Lin), [email protected] (X. Wang), [email protected] (K.-J. Lin), [email protected] (M. Song). 1 http://www.yelp.com/. Xiaolin Zheng a,b,⇑, Zhen Lin , Xiaowei Wang , Kwei-Jay Lin , Meina Song e |
Urban planning and building smart cities based on the Internet of Things using Big Data analytics | The rapid growth of population density in urban cities demands that services and infrastructure be provided to meet the needs of city inhabitants. This has increased the demand for embedded devices, such as sensors, actuators, and smartphones, creating great business potential for the new era of the Internet of Things (IoT), in which all devices are capable of interconnecting and communicating with each other over the Internet, with Internet technologies providing a shared communication medium. With this in mind, we propose a combined IoT-based system for smart city development and urban planning using Big Data analytics. The proposed system consists of various types of sensor deployments, including smart home sensors, vehicular networking, weather and water sensors, smart parking sensors, and surveillance objects. A four-tier architecture is proposed: 1) the bottom tier, responsible for IoT sources and data generation and collection; 2) the first intermediate tier, responsible for all communication between sensors, relays, base stations, and the Internet; 3) the second intermediate tier, responsible for data management and processing using the Hadoop framework; and 4) the top tier, responsible for the application and usage of the data analysis and the results generated. The system implementation consists of steps that run from data generation and collection through aggregation, filtration, classification, preprocessing, and computing to decision making. The proposed system is implemented using Hadoop with Spark, VoltDB, Storm, or S4 for real-time processing of the IoT data to generate results for establishing the smart city. For urban planning and future city development, the offline historical data is analyzed on Hadoop using MapReduce programming. IoT datasets generated by smart homes, smart parking, weather, pollution, and vehicle sensors are used for analysis and evaluation. A system of this type with full functionality does not yet exist. The results show that the proposed system is more scalable and efficient than existing systems, with efficiency measured in terms of throughput and processing time. |
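A toy end-to-end flow mirroring the four tiers (generation, communication, processing, decision); plain Python here only illustrates the data flow, which the paper implements on Hadoop/Spark, and the sensor, thresholds, and readings are made up:

```python
import random
import statistics

def tier1_generate(n=1000):
    """Bottom tier: a smart-home temperature sensor emitting readings."""
    for _ in range(n):
        yield {"sensor": "home-42", "temp": random.gauss(24.0, 3.0)}

def tier3_filter(readings, lo=-10.0, hi=60.0):
    """Processing tier, step 1: drop implausible samples."""
    return [r for r in readings if lo <= r["temp"] <= hi]

def tier3_aggregate(readings):
    """Processing tier, step 2: aggregate the filtered stream."""
    temps = [r["temp"] for r in readings]
    return {"mean": statistics.mean(temps), "max": max(temps), "n": len(temps)}

def top_tier_decide(agg, threshold=30.0):
    """Top tier: turn aggregates into an action."""
    return "raise cooling alert" if agg["mean"] > threshold else "normal"

agg = tier3_aggregate(tier3_filter(tier1_generate()))
print(agg, "->", top_tier_decide(agg))
```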
Evaluating biochemical methane production from brewer’s spent yeast | Anaerobic digestion treatment of brewer’s spent yeast (SY) is a viable option for bioenergy capture. The biochemical methane potential (BMP) assay was performed with three different samples (SY1, SY2, and SY3) and SY1 dilutions (75, 50, and 25 % on a v/v basis). Gompertz-equation parameters indicated slow degradability of SY1, with methane production rates of 14.59–4.63 mL/day and lag phases of 10.72–19.7 days. Performance and kinetic parameters were obtained with the Gompertz equation and the first-order hydrolysis model for SY2 and SY3 diluted 25 % and SY1 diluted 50 %. SY2 at 25 % gave 17 % conversion of TCOD to methane as well as a shorter lag phase (<1 day). The average estimated hydrolysis constant for SY was 0.0141 (±0.003) day−1, and SY2 at 25 % was more appropriate for faster methane production. Methane capture and biogas composition were dependent upon the SY source, and co-digestion (or dilution) can be advantageous. |
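The kinetic parameters quoted above come from fitting the modified Gompertz model to the cumulative methane curve. A sketch with scipy on synthetic data; Rm and the lag are set to the SY1 values from the abstract, while the methane potential P and the noise level are assumed:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    """Modified Gompertz model used in BMP assays: P = methane potential,
    Rm = maximum production rate, lam = lag phase."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Synthetic cumulative-methane curve with SY1-like kinetics
# (Rm ~ 14.59 mL/day, lag ~ 10.72 days); P = 300 mL is an assumed value.
t = np.linspace(0, 60, 31)
y = gompertz(t, 300.0, 14.59, 10.72) \
    + np.random.default_rng(0).normal(0, 3, t.size)

(P, Rm, lam), _ = curve_fit(gompertz, t, y, p0=(250.0, 10.0, 5.0))
print(f"P={P:.1f} mL, Rm={Rm:.2f} mL/day, lag={lam:.2f} days")
```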
Consciousness in the universe: a review of the 'Orch OR' theory. | The nature of consciousness, the mechanism by which it occurs in the brain, and its ultimate place in the universe are unknown. We proposed in the mid 1990's that consciousness depends on biologically 'orchestrated' coherent quantum processes in collections of microtubules within brain neurons, that these quantum processes correlate with, and regulate, neuronal synaptic and membrane activity, and that the continuous Schrödinger evolution of each such process terminates in accordance with the specific Diósi-Penrose (DP) scheme of 'objective reduction' ('OR') of the quantum state. This orchestrated OR activity ('Orch OR') is taken to result in moments of conscious awareness and/or choice. The DP form of OR is related to the fundamentals of quantum mechanics and space-time geometry, so Orch OR suggests that there is a connection between the brain's biomolecular processes and the basic structure of the universe. Here we review Orch OR in light of criticisms and developments in quantum biology, neuroscience, physics and cosmology. We also introduce a novel suggestion of 'beat frequencies' of faster microtubule vibrations as a possible source of the observed electro-encephalographic ('EEG') correlates of consciousness. We conclude that consciousness plays an intrinsic role in the universe. |
3D Pictorial Structures for Multiple Human Pose Estimation | In this work, we address the problem of 3D pose estimation of multiple humans from multiple views. This is a more challenging problem than single human 3D pose estimation due to the much larger state space, partial occlusions as well as across-view ambiguities when not knowing the identity of the humans in advance. To address these problems, we first create a reduced state space by triangulation of corresponding body joints obtained from part detectors in pairs of camera views. In order to resolve the ambiguities of wrong and mixed body parts of multiple humans after triangulation and also those coming from false positive body part detections, we introduce a novel 3D pictorial structures (3DPS) model. Our model infers 3D human body configurations from our reduced state space. The 3DPS model is generic and applicable to both single and multiple human pose estimation. In order to compare to the state-of-the-art, we first evaluate our method on single human 3D pose estimation on the HumanEva-I [22] and KTH Multiview Football Dataset II [8] datasets. Then, we introduce and evaluate our method on two datasets for multiple human 3D pose estimation. |
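The reduced state space starts from linear triangulation of corresponding 2D joint detections. A minimal DLT sketch with two toy cameras (the projection matrices and the 3D point are made up):

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one body joint from two views.
    P1, P2: 3x4 projection matrices; u1, u2: 2D joint detections."""
    A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                       # dehomogenise

# Two toy cameras observing the point (0.1, 0.2, 3.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 m
X_true = np.array([0.1, 0.2, 3.0, 1.0])
x1, x2 = P1 @ X_true, P2 @ X_true
u1, u2 = x1[:2] / x1[2], x2[:2] / x2[2]
print(triangulate(P1, P2, u1, u2))            # ~ [0.1, 0.2, 3.0]
```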
Text2Sketch: Learning Face Sketch from Facial Attribute Text | Face sketches are the main approach to finding suspects in law enforcement, especially in the many cases when facial attribute descriptions of suspects by witnesses are available. A face sketch synthesized from facial attribute text can also be used in sketch-based face recognition. While most previous work focuses on face photo to sketch synthesis, the problem of sketch synthesis from facial attribute text has not been explored yet. The problem is challenging due to two facts: firstly, no database of face attribute text to sketch is available; secondly, it is hard to synthesize high-quality face sketches due to the ambiguity and complexity of text descriptions. In this paper, we propose a face sketch synthesis approach from text using a Stagewise-GAN. Our contributions lie in two aspects: 1) we construct the first text to face sketch database. The database, namely the Text2Sketch dataset, is built by annotating the CUFSF dataset of 1,194 sketches; for each sketch, an attribute description is labelled; 2) we synthesize vivid face sketches using Stagewise-GAN. We use a user study, face retrieval performance with the synthesized sketches, and quantitative results for evaluation. Experimental results show the effectiveness of our approach. |
Ultrastructure and organisation of the retina and pigment epithelium in the cutlips minnow, Exoglossum maxillingua (Cyprinidae, Teleostei). | The structure of the light- and dark-adapted retina, pigment epithelium and choriocapillaris of the cutlips minnow, Exoglossum maxillingua (Cyprinidae, Teleostei) is examined by light and electron microscopy. A pronounced vitreal vascularisation overlies the inner retina where the blood vessel walls, the inner limiting membrane and the Müller cell endfeet are all closely apposed. The thick Müller cell processes divide the inner plexiform layer and nerve fibre layer into discrete compartments. The ganglion cells do not form fascicles and lie within both the ganglion cell and inner plexiform layers. The inner nuclear layer consists of amacrine, bipolar, Müller cell somata and two rows of horizontal cells. The photoreceptor terminals comprise either multiple (3-5 in cone pedicles) or single (rod spherules) synaptic ribbons. These photoreceptor terminals form either a triad (rods and cones) or a quadrad (cones) arrangement of contact with the invaginating processes of the inner nuclear layer cells. The horizontal cell processes of the cone photoreceptor terminals reveal spinule formation in the light-adapted condition. Five photoreceptor types are classified using morphological criteria; triple cones, unequal double cones, large single cones, small single cones and rods. The ratio of rods to cones is approximately 7:1. All photoreceptor types show retinomotor responses. Only the cones possess accessory outer segments but both rods (8-11) and cones (15-19) possess calycal processes. The retinal pigment epithelium displays retinomotor responses where pigment granules within fine apical processes move vitread to mask the rods in the light. The cells of the retinal pigment epithelium are joined by various types of junctions and contain numerous phagosomes, mitochondria and polysomes. Bruch's membrane or the complexus basalis is trilaminate with two types of collagen fibrils comprising the central layer. The endothelia of the blood vessels of the choriocapillaris, facing Bruch's membrane, are fenestrated. Two to three layers of melanocytes interspersed between large thin-walled capillaries and several layers of collagen fibrils comprise the choriocapillaris. |
Hierarchical annotation of medical images | In this paper, we describe an approach for the automatic medical annotation task of the 2008 CLEF cross-language image retrieval campaign (ImageCLEF). The data comprise 12076 fully annotated images according to the IRMA code. This work is focused on the process of feature extraction from images and hierarchical multi-label classification. To extract features from the images we used a technique called local distribution of edges. With this technique, each image was described by 80 variables. The goal of the classification task was to classify an image according to the IRMA code. The IRMA code is organized hierarchically. Hence, as classifier we selected an extension of the predictive clustering trees (PCTs) that is able to handle this type of data. Furthermore, we constructed ensembles (Bagging and Random Forests) that use PCTs as base classifiers. |
Deep Learners Benefit More from Out-of-Distribution Examples | Recent theoretical and empirical work in statistical machine learning has demonstrated the potential of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple levels of representation. The hypothesis evaluated here is that intermediate levels of representation, because they can be shared across tasks and examples from different but related distributions, can yield even more benefits. Comparative experiments were performed on a large-scale handwritten character recognition setting with 62 classes (upper case, lower case, digits), using both a multi-task setting and perturbed examples in order to obtain out-of-distribution examples. The results agree with the hypothesis and show that a deep learner beat previously published results and reached human-level performance.
Deep Gaussian Embedding of Attributed Graphs: Unsupervised Inductive Learning via Ranking | Methods that learn representations of graph nodes play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss – an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as (point) vectors in a lower-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, in contrast to previous approaches we propose a completely unsupervised method that is also able to handle inductive learning scenarios and is applicable to different types of graphs (plain, attributed, directed, undirected). By leveraging both the topological network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering between the nodes imposed by the network structure. Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks. |
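The closed-form KL divergence between diagonal-covariance Gaussians, on which a Graph2Gauss-style energy can be built, is short enough to state in code. A minimal numpy sketch follows; the square-exponential triplet penalty shown is one way to encode the hop-distance ranking, and the exact parameterization may differ from the paper's.

```python
import numpy as np

def kl_gauss_diag(mu1, var1, mu2, var2):
    """KL(N1 || N2) for diagonal-covariance Gaussians (closed form)."""
    return 0.5 * np.sum(
        var1 / var2
        + (mu2 - mu1) ** 2 / var2
        - 1.0
        + np.log(var2 / var1)
    )

def triplet_loss(mu, var, i, j, k):
    """Square-exponential ranking penalty: node j is fewer hops from i
    than node k is, so the (i, j) energy is pushed below the (i, k)
    energy. mu and var hold the per-node embedding parameters."""
    e_near = kl_gauss_diag(mu[j], var[j], mu[i], var[i])
    e_far = kl_gauss_diag(mu[k], var[k], mu[i], var[i])
    return e_near ** 2 + np.exp(-e_far)
```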
Distributed Auction Algorithms for the Assignment Problem with Partial Information | operations centers (MOC), in which multiple DMs with partial information and partial control over assets are involved in the development of operational level plans. The MOC emphasizes standardized processes and methods, centralized assessment and guidance, networked distributed planning capabilities, and decentralized execution for assessing, planning and executing missions across a range of military operations Abstract—Task-asset assignment is a fundamental problem paradigm in a wide variety of applications. Typical problem setting involves a single decision maker (DM) who has complete knowledge of the weight (reward, benefit, accuracy) matrix and who can control any of the assets to execute the tasks. Motivated by planning problems arising in distributed organizations, this paper introduces a novel variation of the assignment problem, wherein there are multiple DMs and each DM knows only a part of the weight matrix and/or controls a subset of the assets. We extend the auction algorithm to such realistic settings with various partial information structures using a blackboard coordination structure. We show that by communicating the bid, the best and the second best profits among DMs and with a coordinator, the DMs can reconstruct the centralized assignment solution. The auction setup provides a nice analytical framework for formalizing how team members build internal models of other DMs and achieve team cohesiveness over time. [1]. In this vein, we are developing analytical and computational models for multi-level coordinated mission planning and monitoring processes associated with MOCs, so that they can function effectively in highly dynamic, asymmetric, and unpredictable mission environments. Two key problem areas are: 1) realistic modeling of multi-level coordination structures that link tactical, operational and strategic levels of decision making; and 2) collaborative planning with partial information and partial control of assets. In the collaborating planning problem, each DM “owns” a set of assets and is responsible for planning certain tasks. Each task is characterized by a vector of resource requirements, while each asset is characterized by a vector of resource capabilities (see Fig. 1). Multiple assets (from the same DM or multiple DMs) may be required to process a task. The degree of match between the task-resource requirement vector and asset-resource capability vector determines the accuracy of task execution. In addition, the elements of |
Creating user-mode device drivers with a proxy | Writing Windows NT device drivers can be a daunting task. Device drivers must be fully re-entrant, must use only limited resources and must be created with special development environments. Executing device drivers in user-mode offers significant coding advantages. User-mode device drivers have access to all user-mode libraries and applications. They can be developed using standard development tools and debugged on a single machine. Using the Proxy Driver to retrieve I/O requests from the kernel, user-mode drivers can export full device services to the kernel and applications. User-mode device drivers offer enormous flexibility for emulating devices and experimenting with new file systems. Experimental results show that in many cases, the overhead of moving to user-mode for processing I/O can be masked by the inherent costs of accessing physical devices. |
A Combined Wye-Delta Connection to Increase the Performance of Axial-Flux PM Machines With Concentrated Windings | In this paper, a combined wye-delta connection is introduced and compared with a conventional wye-connection of a concentrated winding. Because the combined wye-delta connection has a higher fundamental winding factor, the output torque is higher for the same current density when a sinusoidal current is imposed. As the combined wye-delta connection has only a minor influence on the losses in the machine, the efficiency of the machine is also increased. The combined wye-delta connection is illustrated in detail for an axial-flux permanent-magnet synchronous machine with a rated power of 4 kW at a fixed speed of 2500 r/min, using finite element computation and measurements on a prototype machine. |
A machine learning approach for medication adherence monitoring using body-worn sensors | One of the most important challenges in chronic disease self-management is medication non-adherence, which can have irreversible outcomes. Although many technologies have been developed for medication adherence monitoring, the reliability and cost-effectiveness of these approaches are not well understood to date. This paper presents a medication adherence monitoring system based on user-activity tracking with wrist-band wearable sensors. We develop machine learning algorithms that track wrist motions in real time and identify medication intake activities. We propose a novel data analysis pipeline to reliably detect medication adherence by examining single-wrist motions. Our system achieves an accuracy of 78.3% in adherence detection without the need for medication pillboxes and with only one sensor worn on either wrist. The accuracy of our algorithm is only 7.9% lower than that of a system with two sensors tracking motions of both wrists.
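The abstract does not spell out the features or classifier, so the following is only a generic single-wrist baseline under assumed names and parameters: fixed-length windows over a 3-axis accelerometer stream, simple per-axis statistics, and a random-forest classifier. The data and labels in the example are placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, fs=50, win_s=2):
    """Slice a (samples, 3) wrist-accelerometer stream into fixed
    windows and compute per-axis mean, std and mean absolute delta."""
    win = fs * win_s
    n = len(signal) // win
    feats = []
    for w in np.split(signal[: n * win], n):
        feats.append(np.concatenate([w.mean(0), w.std(0),
                                     np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.array(feats)

# Placeholder data: 1 minute of fake 50 Hz samples and random labels
# (1 = medication-intake gesture, 0 = other activity).
rng = np.random.default_rng(0)
stream = rng.standard_normal((60 * 50, 3))
X = window_features(stream)
y = rng.integers(0, 2, len(X))
print(cross_val_score(RandomForestClassifier(), X, y, cv=3).mean())
```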
Association between family members of dialysis patients and chronic kidney disease: a multicenter study in China | BACKGROUND
Family members of patients with end-stage renal disease have been reported to have an increased prevalence of chronic kidney disease (CKD). However, studies differentiating genetic from non-genetic family members are limited. We sought to investigate the prevalence of CKD among first-degree relatives and spouses of dialysis patients in China.
METHODS
Seventeen dialysis facilities from 4 cities of China, including 1062 first-degree relatives and 450 spouses of dialysis patients, were enrolled. Sex- and age-matched controls were randomly selected from a representative sample of the general population in Beijing. CKD was defined as decreased estimated glomerular filtration rate (eGFR < 60 mL/min/1.73 m2) or albuminuria.
RESULTS
The prevalence of eGFR less than 60 mL/min/1.73 m2, the prevalence of albuminuria and the overall prevalence of CKD in dialysis spouses were compared with their counterpart controls: 3.8% vs. 7.8% (P<0.01), 16.8% vs. 14.6% (P=0.29) and 18.4% vs. 19.8% (P=0.61), respectively. The corresponding prevalences in dialysis relatives were also compared with their counterpart controls: 1.5% vs. 2.4% (P=0.12), 14.4% vs. 8.4% (P<0.01) and 14.6% vs. 10.5% (P<0.01), respectively. Multivariable logistic regression analysis indicated that being a spouse of a dialysis patient was negatively associated with the presence of low eGFR, whereas being a relative of a dialysis patient was positively associated with the presence of albuminuria.
CONCLUSIONS
The association between being family members of dialysis patients and presence of CKD is different between first-degree relatives and spouses. The underlying mechanisms deserve further investigation. |
A PSO and Pattern Search based Memetic Algorithm for SVMs Parameters Optimization | Addressing the issue of SVM parameter optimization, this study proposes an efficient memetic algorithm based on the Particle Swarm Optimization algorithm (PSO) and Pattern Search (PS). In the proposed memetic algorithm, PSO is responsible for exploration of the search space and the detection of the potential regions with optimum solutions, while pattern search (PS) is used to perform an effective exploitation of the potential regions obtained by PSO. Moreover, a novel probabilistic selection strategy is proposed to select the appropriate individuals among the current population to undergo local refinement, keeping a good balance between exploration and exploitation. Experimental results confirm that the local refinement with PS and our proposed selection strategy are effective, and finally demonstrate the effectiveness and robustness of the proposed PSO-PS based MA for SVM parameter optimization.
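A runnable miniature of the PSO-plus-pattern-search loop for the two RBF-SVM hyperparameters (log2 C, log2 gamma) is sketched below, with cross-validated accuracy as fitness. The dataset, search box, swarm constants, and the choice to refine only the global best (the paper selects refinement candidates probabilistically) are all simplifying assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in benchmark dataset

def fitness(p):
    """Cross-validated accuracy of an RBF SVM at (log2 C, log2 gamma)."""
    return cross_val_score(SVC(C=2.0 ** p[0], gamma=2.0 ** p[1]),
                           X, y, cv=3).mean()

def pattern_search(p, step=1.0, tol=0.1):
    """Local exploitation: probe +/- step along each axis, halve the
    step when no probe improves the incumbent."""
    best, f_best = p.copy(), fitness(p)
    while step > tol:
        improved = False
        for d in range(len(p)):
            for s in (step, -step):
                q = best.copy(); q[d] += s
                fq = fitness(q)
                if fq > f_best:
                    best, f_best, improved = q, fq, True
        if not improved:
            step /= 2.0
    return best, f_best

rng = np.random.default_rng(0)
n, iters, w, c1, c2 = 8, 5, 0.7, 1.5, 1.5
pos = rng.uniform([-5, -15], [15, 3], size=(n, 2))  # (log2 C, log2 gamma) box
vel = np.zeros_like(pos)
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pfit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel                                   # PSO exploration step
    fit = np.array([fitness(p) for p in pos])
    better = fit > pfit
    pbest[better], pfit[better] = pos[better], fit[better]
    gbest = pbest[pfit.argmax()].copy()
    gbest, _ = pattern_search(gbest)             # memetic refinement step

print("best log2(C), log2(gamma):", gbest, "CV accuracy:", fitness(gbest))
```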
Applying Graph theory to the Internet of Things | In the Internet of Things (IoT), we are all "things". Graph theory, a branch of discrete mathematics, has proven useful and powerful for understanding complex networks throughout history. By means of graph theory, we define new concepts and terminology, explore the definition of IoT, and then show that IoT is the union of a topological network, a data-functional network and a domi-functional network.
The rise of "big data" on cloud computing: Review and open research issues | Cloud computing is a powerful technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data, or big data, generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data, along with some discussion of cloud computing, are introduced. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.
Optimal Sizing of a Hybrid Grid-Connected Photovoltaic and Wind Power System | Hybrid renewable energy systems (HRES) have been widely identified as an efficient mechanism to generate electrical power based on renewable energy sources (RES). This kind of energy generation system is based on the combination of one or more RES, allowing the weaknesses of one to be complemented by the strengths of another and, therefore, reducing installation costs with an optimized installation. To this end, optimization methodologies are a popular mechanism because they allow attaining optimal solutions given a certain set of input parameters and variables. This work is focused on the optimal sizing of hybrid grid-connected photovoltaic and wind power systems from real hourly wind and solar irradiation data and electricity demand from a certain location. The proposed methodology is capable of finding the sizing that leads to a minimum life cycle cost of the system while matching the electricity supply with the local demand. In the present article, the methodology is tested by means of a case study in which the actual hourly electricity retail and market prices have been implemented to obtain realistic estimations of life cycle costs and benefits. A sensitivity analysis that allows detecting to which variables the system is more sensitive has also been performed. Results presented show that the model responds well to changes in the input parameters and variables while providing trustworthy sizing solutions. According to these results, a grid-connected HRES consisting of photovoltaic (PV) and wind power technologies would be economically profitable in the studied rural township in the Mediterranean climate region of central Catalonia (Spain), with the system paid off after 18 years of operation out of 25 years of system lifetime. Although the annual costs of the system are notably lower compared with the cost of electricity purchase, which is the current alternative, a significant upfront investment of over $10M, roughly two thirds of total system lifetime cost, would be required to install such a system. Keywords: grid-connected hybrid renewable energy system, life-cycle cost, sizing optimization, solar photovoltaic power, wind power
Wasserstein GAN | The problem this paper is concerned with is that of unsupervised learning. Mainly, what does it mean to learn a probability distribution? The classical answer to this is to learn a probability density. This is often done by defining a parametric family of densities $(P_\theta)_{\theta\in\mathbb{R}^d}$ and finding the one that maximizes the likelihood on our data: if we have real data examples $\{x^{(i)}\}_{i=1}^m$, we would solve the problem $\max_{\theta\in\mathbb{R}^d} \frac{1}{m}\sum_{i=1}^m \log P_\theta(x^{(i)})$.
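The critic update that replaces the likelihood objective above is short: maximize the mean critic gap between real and generated samples, then clip weights to keep the critic approximately 1-Lipschitz, as in the original weight-clipping training procedure. A minimal PyTorch sketch, with function and argument names assumed:

```python
import torch

def critic_step(critic, gen, real, opt_c, clip=0.01, z_dim=100):
    """One WGAN critic update: maximize E[f(x)] - E[f(g(z))] over the
    critic f, then clip weights so f stays (approximately) Lipschitz."""
    opt_c.zero_grad()
    z = torch.randn(real.size(0), z_dim)
    fake = gen(z).detach()                # do not update the generator here
    loss = -(critic(real).mean() - critic(fake).mean())
    loss.backward()
    opt_c.step()
    for p in critic.parameters():         # enforce the Lipschitz box
        p.data.clamp_(-clip, clip)
    return -loss.item()                   # running estimate of the W1 gap
```

In the paper's recipe the critic takes several such steps per generator step and is optimized with RMSprop; both details are omitted above.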
Mechatronic Systems for Machine Tools | This paper reviews current developments in mechatronic systems for metal cutting and forming machine tools. The integration of mechatronic modules to the machine tool and their interaction with manufacturing processes are presented. Sample mechatronic components for precision positioning and compensation of static, dynamic and thermal errors are presented as examples. The effect of modular integration of mechatronic system on the reconfigurability and reliability of the machine tools is discussed along with intervention strategies during machine tool operation. The performance and functionality aspects are discussed through active and passive intervention methods. A special emphasis was placed on active and passive damping of vibrations through piezo, magnetic and electro-hydraulic actuators. The modular integration of mechatronic components to the machine tool structure, electronic unit and CNC software system is presented. The paper concludes with the current research challenges required to expand the application of mechatronics in machine tools and manufacturing systems. |
Anti-inflammatory, antioxidant and antimicrobial activity of Ophthacare brand, an herbal eye drops. | In the present study, the herbal preparation of Ophthacare brand eye drops was investigated for its anti-inflammatory, antioxidant and antimicrobial activity, using in vivo and in vitro experimental models. Ophthacare brand eye drops exhibited significant anti-inflammatory activity in turpentine liniment-induced ocular inflammation in rabbits. The preparation dose-dependently inhibited ferric chloride-induced lipid peroxidation in vitro and also showed significant antibacterial activity against Escherichia coli and Staphylococcus aureus and antifungal activity against Candida albicans. All these findings suggest that Ophthacare brand eye drops can be used in the treatment of various ophthalmic disorders. |
Self-driving and driver relaxing vehicle | In the modern era, vehicles are increasingly being automated to give the human driver a relaxed driving experience. In the automobile field, various aspects that make a vehicle automated have been considered. Google, one of the biggest technology companies, has been working on self-driving cars since 2010 and is still developing new changes to take automated vehicles to a whole new level. In this paper we focus on two applications of an automated car. In the first, two vehicles have the same destination, and one knows the route while the other does not: the following vehicle automatically follows the target (i.e., front) vehicle. The second application is automated driving during a heavy traffic jam, relieving the driver from continuously operating the brake, accelerator, or clutch. The idea described in this paper is derived from the Google car; the aspect under consideration here is making the destination dynamic, which can be done by one vehicle automatically following the destination of another vehicle. Since making intelligent decisions in traffic is also an issue for an automated vehicle, this aspect is considered in this paper as well.
Betalain producing cell cultures of Beta vulgaris L. var. bikores monogerm (red beet) | The betalains are a class of natural pigments comprising the yellow betaxanthins and the violet betacyanins. Callus lines developed from Beta vulgaris L. var. bikores monogerm exhibited cell colors ranging from white/green (nonpigmented) through yellow, orange, red, and violet, and were representative of all betalain pigments found in the whole plant. The betalains have gained particular interest from the food industry as potential natural alternatives to synthetic food colorants in use today. Red beet extracts (E162), which contain significant amounts of the betacyanins, are currently used in products such as yogurts and ice creams. We describe here the characteristics of culture growth and betalain production for cell suspensions derived from the orange (predominantly betaxanthin-producing) and violet (betacyanin-producing) callus lines. The major factors affecting betalain biosynthesis in both cultured and whole plant tissues are reviewed.
Cognitive function and neurodevelopmental outcomes in HIV-infected children older than 1 year of age randomized to early versus deferred antiretroviral therapy: the PREDICT neurodevelopmental study. | BACKGROUND
We previously reported similar AIDS-free survival at 3 years in children who were >1 year old initiating antiretroviral therapy (ART) and randomized to early versus deferred ART in the Pediatric Randomized to Early versus Deferred Initiation in Cambodia and Thailand (PREDICT) study. We now report neurodevelopmental outcomes.
METHODS
Two hundred eighty-four HIV-infected Thai and Cambodian children aged 1-12 years with CD4 counts between 15% and 24% and no AIDS-defining illness were randomized to initiate ART at enrollment ("early," n = 139) or when CD4 count became <15% or a Centers for Disease Control (CDC) category C event developed ("deferred," n = 145). All underwent age-appropriate neurodevelopment testing including Beery Visual Motor Integration, Purdue Pegboard, Color Trails and Child Behavioral Checklist. Thai children (n = 170) also completed Wechsler Intelligence Scale (intelligence quotient) and Stanford Binet Memory test. We compared week 144 measures by randomized group and to HIV-uninfected children (n = 319).
RESULTS
At week 144, the median age was 9 years and 69 (48%) of the deferred arm children had initiated ART. The early arm had a higher CD4 (33% versus 24%, P < 0.001) and a greater percentage of children with viral suppression (91% versus 40%, P < 0.001). Neurodevelopmental scores did not differ by arm, and there were no differences in changes between arms across repeated assessments in time-varying multivariate models. HIV-infected children performed worse than uninfected children on intelligence quotient, Beery Visual Motor Integration, Binet memory and Child Behavioral Checklist.
CONCLUSIONS
In HIV-infected children surviving beyond 1 year of age without ART, neurodevelopmental outcomes were similar with ART initiation at CD4 15%-24% versus <15%, but both groups performed worse than HIV-uninfected children. The window of opportunity for a positive effect of ART initiation on neurodevelopment may remain in infancy. |
A Spreadsheet Auditing Tool Evaluated in an Industrial Context | Amongst the large number of write-and-throw-away spreadsheets developed for one-time use, there is a rather neglected proportion of spreadsheets that are huge, periodically used, and submitted to regular update cycles like any conventionally evolving valuable legacy application software. However, due to the very nature of spreadsheets, their evolution is particularly tricky and therefore error-prone. In our effort to develop tools and methodologies to improve spreadsheet quality, we analysed consolidation spreadsheets of an internationally operating company for the errors they contain. The paper presents the results of the field audit, involving 78 spreadsheets with 60,446 non-empty cells. As a by-product, the study also served to validate our analysis tools in an industrial context. The evaluated auditing tool offers the auditor a new view on the formula structure of the spreadsheet by grouping similar formulas into equivalence classes. Our auditing approach defines three similarity criteria between formulae, namely copy, logical and structural equivalence. To improve the visualization of large spreadsheets, equivalences and data dependencies are displayed in separate windows that are interlinked with the spreadsheet. The auditing approach helps to find irregularities in the geometrical pattern of similar formulas.
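Equivalence-class grouping of formulas can be illustrated with a toy normalizer: rewriting A1-style references relative to the formula's own cell approximates copy equivalence, and collapsing operands approximates structural equivalence. The sketch below is a deliberately simplified stand-in (single-letter columns only), not the tool's parser.

```python
import re
from collections import defaultdict

def copy_signature(formula, row, col):
    """Rewrite A1-style references relative to the formula's own cell,
    so formulas that are copies of each other share a signature."""
    def rel(m):
        c = ord(m.group(1)) - ord("A") - col
        r = int(m.group(2)) - row
        return f"R[{r}]C[{c}]"
    return re.sub(r"([A-Z])(\d+)", rel, formula)

def structural_signature(formula):
    """Collapse all operands to a placeholder, keeping only operators
    and structure (a rough 'structural equivalence')."""
    return re.sub(r"([A-Z]\d+|\d+(\.\d+)?)", "?", formula)

def group(cells):
    """cells maps (row, col) -> formula text. Irregular formulas show
    up as singleton classes that break an otherwise uniform region."""
    classes = defaultdict(list)
    for (row, col), f in cells.items():
        classes[copy_signature(f, row, col)].append((row, col))
    return dict(classes)

cells = {(1, 2): "A1+B1", (2, 2): "A2+B2", (3, 2): "A3+C3"}
print(group(cells))  # the (3, 2) formula falls outside the copy class
```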
American ginseng (Panax quinquefolius) administration does not affect performance of the Roche COBAS Ampliprep/Taqman HIV-1 RNA assay | BACKGROUND
Previous data indicate that purified components of ginseng can inhibit HIV reverse transcriptase in vitro, suggesting that ginseng components in plasma may interfere with HIV-1 RNA detection assays.
METHODS
Pre- and post-dose plasma from three volunteers dosed with 3000 mg American ginseng was spiked with HIV and analyzed by the Roche COBAS Ampliprep/Taqman v2.0 HIV-1 RNA assay.
RESULTS
The presence of American ginseng had no significant effect on the measured HIV-1 RNA concentration. Variation within pre- and post-dose plasma pairs was insignificant and within assay performance limits.
CONCLUSION
Plasma from subjects dosed with 3000 mg American ginseng does not interfere with the Roche COBAS Ampliprep/Taqman v2.0 HIV-1 RNA assay. This implies that in vitro inhibition of HIV reverse transcriptase by American ginseng components is unlikely to be clinically relevant. |
Mining Optimized Association Rules with Categorical and Numeric Attributes | Mining association rules on large data sets has received considerable attention in recent years. Association rules are useful for determining correlations between attributes of a relation and have applications in marketing, financial, and retail sectors. Furthermore, optimized association rules are an effective way to focus on the most interesting characteristics involving certain attributes. Optimized association rules are permitted to contain uninstantiated attributes and the problem is to determine instantiations such that either the support or confidence of the rule is maximized. In this paper, we generalize the optimized association rules problem in three ways: 1) association rules are allowed to contain disjunctions over uninstantiated attributes, 2) association rules are permitted to contain an arbitrary number of uninstantiated attributes, and 3) uninstantiated attributes can be either categorical or numeric. Our generalized association rules enable us to extract more useful information about seasonal and local patterns involving multiple attributes. We present effective techniques for pruning the search space when computing optimized association rules for both categorical and numeric attributes. Finally, we report the results of our experiments that indicate that our pruning algorithms are efficient for a large number of uninstantiated attributes, disjunctions, and values in the domain of the attributes.
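For a single uninstantiated numeric attribute, the optimized-confidence problem reduces to searching intervals over observed values. The naive O(n^2) baseline below (attribute and rule names invented for illustration) is the kind of search the paper's pruning techniques accelerate.

```python
def best_interval(records, min_support=0.1):
    """Instantiate a numeric attribute: find the interval [lo, hi]
    maximizing the confidence of (age in [lo, hi]) => target, subject
    to a support threshold, by brute force over observed values."""
    values = sorted({age for age, _ in records})
    n = len(records)
    best = (None, None, 0.0)
    for i, lo in enumerate(values):
        for hi in values[i:]:
            inside = [t for age, t in records if lo <= age <= hi]
            if len(inside) / n < min_support:
                continue
            confidence = sum(inside) / len(inside)
            if confidence > best[2]:
                best = (lo, hi, confidence)
    return best

# records: (age, bought_product) pairs with a 0/1 target
records = [(22, 0), (25, 1), (31, 1), (33, 1), (40, 0), (52, 0)]
print(best_interval(records))  # -> (25, 33, 1.0)
```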
Mind Your Language: Learning Visually Grounded Dialog in a Multi-Agent Setting | The task of visually grounded dialog involves learning goal-oriented cooperative dialog between autonomous agents who exchange information about a scene through several rounds of questions and answers. We posit that requiring agents to adhere to the rules of human language while also maximizing information exchange is an ill-posed problem. We observe that humans do not stray from a common language because they are social creatures who have to communicate with many people every day, and it is far easier to stick to a common language even at the cost of some efficiency loss. Using this as inspiration, we propose and evaluate a multi-agent dialog framework where each agent interacts with, and learns from, multiple agents, and show that this results in more relevant and coherent dialog (as judged by human evaluators) without sacrificing task performance (as judged by quantitative metrics).
Advances in clinical application of cryoablation therapy for hepatocellular carcinoma and metastatic liver tumor. | Hepatocellular carcinoma (HCC) is one of the most common cancers worldwide. Although surgical resection and liver transplantation are the curative treatments, many HCC patients do not qualify for these curative therapies at presentation. Thus, ablation therapies are currently important modalities in HCC treatment. Among currently available ablation therapies, cryoablation (i.e., cryotherapy) is a novel local therapeutic modality. However, cryoablation has not been widely used as an ablation therapy for HCC because of historical concerns about the risk of bleeding when cryotherapy is delivered by early generations of the argon-helium device. Nevertheless, with technological advances and increased clinical experience in the past decade, clinical application of cryoablation for HCC management has significantly increased. Accumulating data have demonstrated that cryoablation is highly effective in local tumor control with a well-acceptable safety profile, and overall survival is comparable with that of radiofrequency ablation in patients with tumors <5 cm. Compared with radiofrequency ablation and other thermal-based modalities, cryoablation has several advantages, such as the ability to produce larger and more precise zones of ablation. This article systematically reviews the advances in clinical application of cryoablation therapy for HCC, including the related mechanisms and technology, clinical indications, efficacy and safety profiles, and future research directions.
Objectives, design and enrollment results from the Infant Susceptibility to Pulmonary Infections and Asthma Following RSV Exposure Study (INSPIRE) | BACKGROUND
Respiratory syncytial virus (RSV) lower respiratory tract infection (LRI) during infancy has been consistently associated with an increased risk of childhood asthma. In addition, evidence supports that this relationship is causal. However, the mechanisms through which RSV contributes to asthma development are not understood. The INSPIRE (Infant Susceptibility to Pulmonary Infections and Asthma Following RSV Exposure) study objectives are to: 1) characterize the host phenotypic response to RSV infection in infancy and the risk of recurrent wheeze and asthma, 2) identify the immune response and lung injury patterns of RSV infection that are associated with the development of early childhood wheezing illness and asthma, and 3) determine the contribution of specific RSV strains to early childhood wheezing and asthma development. This article describes the INSPIRE study, including study aims, design, recruitment results, and enrolled population characteristics.
METHODS/DESIGN
The cohort is a population based longitudinal birth cohort of term healthy infants enrolled during the first months of life over a two year period. Respiratory infection surveillance was conducted from November to March of the first year of life, through surveys administered every two weeks. In-person illness visits were conducted if infants met pre-specified criteria for a respiratory illness visit. Infants will be followed annually to ages 3-4 years for assessment of the primary endpoint: wheezing illness. Nasal, urine, stool and blood samples were collected at various time points throughout the study for measurements of host and viral factors that predict wheezing illness. Nested case-control studies will additionally be used to address other primary and secondary hypotheses.
DISCUSSION
In the INSPIRE study, 1,952 infants (48% female) were enrolled during the two enrollment years, and follow-up will continue through 2016. The mean age at enrollment was 60 days. During the winter viral season, more than 14,000 surveillance surveys were carried out, resulting in 2,103 respiratory illness visits on 1,189 infants. First-year follow-up has been completed on over 95% of participants from the first year of enrollment. With ongoing follow-up for wheezing and childhood asthma outcomes, the INSPIRE study will advance our understanding of the complex causal relationship between RSV infection and early childhood wheezing and asthma.
Designing Interfaces with Social Presence: Using Vividness and Extraversion to Create Social Recommendation Agents |
Frontal brain dysfunction in alcoholism with and without antisocial personality disorder | Alcoholism and antisocial personality disorder (ASPD) often are comorbid conditions. Alcoholics, as well as nonalcoholic individuals with ASPD, exhibit behaviors associated with prefrontal brain dysfunction such as increased impulsivity and emotional dysregulation. These behaviors can influence drinking motives and patterns of consumption. Because few studies have investigated the combined association between ASPD and alcoholism on neuropsychological functioning, this study examined the influence of ASPD symptoms and alcoholism on tests sensitive to frontal brain deficits. The participants were 345 men and women. Of them, 144 were abstinent alcoholics (66 with ASPD symptoms), and 201 were nonalcoholic control participants (24 with ASPD symptoms). Performances among the groups were examined with Trails A and B tests, the Wisconsin Card Sorting Test, the Controlled Oral Word Association Test, the Ruff Figural Fluency Test, and Performance subtests of the Wechsler Adult Intelligence Scale. Measures of affect also were obtained. Multiple regression analyses showed that alcoholism, specific drinking variables (amount and duration of heavy drinking), and ASPD were significant predictors of frontal system and affective abnormalities. These effects were different for men and women. The findings suggested that the combination of alcoholism and ASPD leads to greater deficits than the sum of each. |
Fracture Property Studies of Paracetamol Single Crystals Using Microindentation Techniques | Purpose. To study the fracture behavior of the major habit faces of paracetamol single crystals using microindentation techniques and to correlate this with crystal structure and molecular packing. Methods. Vickers microindentation techniques were used to measure the hardness and crack lengths. The development of all the major radial cracks was analyzed using the Laugier relationship, and fracture toughness values were evaluated. Results. Paracetamol single crystals showed severe cracking and fracture around all Vickers indentations, with a limited zone of plastic deformation close to the indent. This is consistent with the material being a highly brittle solid that deforms principally by elastic deformation to fracture rather than by plastic flow. Fracture was associated predominantly with the (010) cleavage plane, but was also observed parallel to other lattice planes including (110), (210) and (100). The cleavage plane (010) had the lowest fracture toughness value, Kc = 0.041 MPa m1/2, while the greatest value, Kc = 0.105 MPa m1/2, was obtained for the (210) plane. Conclusions. Paracetamol crystals showed severe cracking and fracture because of the highly brittle nature of the material. The fracture behavior could be explained on the basis of the molecular packing arrangement and the calculated attachment energies across the fracture planes.
Beam Codebook Based Beamforming Protocol for Multi-Gbps Millimeter-Wave WPAN Systems | In this paper, we propose a feasible beamforming (BF) scheme realized in the media access control (MAC) layer, following the guidelines of the IEEE 802.15.3c criteria for millimeter-wave 60 GHz wireless personal area networks (60 GHz WPANs). The proposed BF aims to minimize the BF set-up time and mitigate the high path loss of 60 GHz WPAN systems. It is based on designed multi-resolution codebooks, which generate three kinds of patterns of different half-power beam widths (HPBWs): quasi-omni patterns, sectors and beams. These three kinds of patterns are employed in the three stages of the BF protocol, namely device-to-device (DEV-to-DEV) linking, sector-level searching and beam-level searching. All three stages can be completed within one superframe, which minimizes the potential interference to other systems during the BF set-up period. In this paper, we show some example codebooks and provide the details of the BF procedure. Simulation results show that the set-up time of the proposed BF protocol is as small as 2% of that of an exhaustive searching protocol. The proposed BF is a complete design that re-uses commands specified in IEEE 802.15.3c and is completely compliant with the standard; it has thus been adopted by IEEE 802.15.3c as an optional functionality to realize gigabit-per-second (Gbps) communication in WPAN systems.
An artificial neural network to estimate physical activity energy expenditure and identify physical activity type from an accelerometer. | The aim of this investigation was to develop and test two artificial neural networks (ANN) to apply to physical activity data collected with a commonly used uniaxial accelerometer. The first ANN model estimated physical activity metabolic equivalents (METs), and the second ANN identified activity type. Subjects (n = 24 men and 24 women, mean age = 35 yr) completed a menu of activities that included sedentary, light, moderate, and vigorous intensities, and each activity was performed for 10 min. There were three different activity menus, and 20 participants completed each menu. Oxygen consumption (in ml x kg(-1) x min(-1)) was measured continuously, and the average of minutes 4-9 was used to represent the oxygen cost of each activity. To calculate METs, activity oxygen consumption was divided by 3.5 ml x kg(-1) x min(-1) (1 MET). Accelerometer data were collected second by second using the Actigraph model 7164. For the analysis, we used the distribution of counts (10th, 25th, 50th, 75th, and 90th percentiles of a minute's second-by-second counts) and the temporal dynamics of counts (lag-one autocorrelation) as the accelerometer feature inputs to the ANN. To examine model performance, we used the leave-one-out cross-validation technique. The ANN prediction of METs root-mean-squared error was 1.22 METs (confidence interval: 1.14-1.30). For the prediction of activity type, the ANN correctly classified activity type 88.8% of the time (confidence interval: 86.4-91.2%). Activity types were low-level activities, locomotion, vigorous sports, and household activities/other activities. This novel approach of applying ANNs for processing Actigraph accelerometer data is promising and shows that we can successfully estimate activity METs and identify activity type using ANN analytic procedures.
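The feature inputs are fully specified in the abstract, so they translate directly to code: five percentiles of a minute of second-by-second counts plus the lag-one autocorrelation. A numpy sketch follows; the ANN itself, whose architecture the abstract does not give, is omitted.

```python
import numpy as np

def minute_features(counts):
    """Features for one minute of second-by-second accelerometer counts
    (60 samples in, 6 features out): the 10th/25th/50th/75th/90th
    percentiles plus the lag-one autocorrelation."""
    counts = np.asarray(counts, dtype=float)
    percentiles = np.percentile(counts, [10, 25, 50, 75, 90])
    x0, x1 = counts[:-1], counts[1:]
    denom = x0.std() * x1.std()
    lag1 = 0.0 if denom == 0 else np.corrcoef(x0, x1)[0, 1]  # guard constants
    return np.concatenate([percentiles, [lag1]])
```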
Quick Tips | · Getting off the subject · No goals or agenda · Disorganized · Ineffective leadership/lack of control · Wasted time · Ineffective decision-making · No pre-meeting orientation · Too lengthy · Poor/inadequate preparation · Inconclusive · Irrelevant information discussed · Starting late · Interruptions · Rambling, redundant discussion · Individuals dominate discussion · No published results or follow-up action |
What Is the Evidence to Support the Use of Therapeutic Gardens for the Elderly? | Horticulture therapy employs plants and gardening activities in therapeutic and rehabilitation activities and could be utilized to improve the quality of life of the worldwide aging population, possibly reducing costs for long-term, assisted living and dementia unit residents. Preliminary studies have reported the benefits of horticultural therapy and garden settings in reduction of pain, improvement in attention, lessening of stress, modulation of agitation, reduced use of as-needed medications and antipsychotics, and reduction of falls. This is especially relevant for both the United States and the Republic of Korea since aging is occurring at an unprecedented rate, with Korea experiencing some of the world's greatest increases in elderly populations. In support of the role of nature as a therapeutic modality in geriatrics, most of the existing studies of garden settings have utilized views of nature or indoor plants, with sparse studies employing therapeutic gardens and rehabilitation greenhouses. With few controlled clinical trials demonstrating the positive or negative effects of the use of garden settings for the rehabilitation of aging populations, a more vigorous quantitative analysis of the benefits is long overdue. This literature review presents the data supporting future studies of the effects of natural settings for the long-term care and rehabilitation of the elderly having the medical and mental health problems frequently occurring with aging.
Integration of SNPs-FMRI-methylation data with sparse multi-CCA for schizophrenia study | Schizophrenia (SZ) is a complex mental disorder associated with genetic variations, brain development and activities, and environmental factors. There is an increasing interest in combining genetic, epigenetic and neuroimaging datasets to explore different levels of biomarkers for the correlation and interaction between these diverse factors. Sparse Multi-Canonical Correlation Analysis (sMCCA) is a powerful tool that can analyze the correlation of three or more datasets. In this paper, we propose the sMCCA model for imaging genomics study. We show the advantage of sMCCA over sparse CCA (sCCA) through simulation testing, and further apply it to the analysis of real data (SNPs, fMRI and methylation) from a schizophrenia study. Some new genes and brain regions related to SZ are discovered by sMCCA, and the relationships among these biomarkers are further discussed.
STAR-MPI: self tuned adaptive routines for MPI collective operations | Message Passing Interface (MPI) collective communication routines are widely used in parallel applications. In order for a collective communication routine to achieve high performance for different applications on different platforms, it must be adaptable to both the system architecture and the application workload. Current MPI implementations do not support such software adaptability and are not able to achieve high performance on many platforms. In this paper, we present STAR-MPI (Self Tuned Adaptive Routines for MPI collective operations), a set of MPI collective communication routines that are capable of adapting to the system architecture and application workload. For each operation, STAR-MPI maintains a set of communication algorithms that can potentially be efficient in different situations. As an application executes, a STAR-MPI routine applies the Automatic Empirical Optimization of Software (AEOS) technique at run time to dynamically select the best performing algorithm for the application on the platform. We describe the techniques used in STAR-MPI, analyze STAR-MPI overheads, and evaluate the performance of STAR-MPI with applications and benchmarks. The results of our study indicate that STAR-MPI is robust and efficient. It is able to find efficient algorithms with reasonable overheads, and it outperforms traditional MPI implementations to a large degree in many cases.
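The AEOS idea, timing each candidate algorithm on the live workload for the first invocations and then committing to the fastest, is independent of MPI. The Python sketch below renders that control flow generically; STAR-MPI applies it per collective operation inside MPI, with details (per-message-size tuning, distributed timing) not shown here.

```python
import time

class SelfTunedOp:
    """AEOS-style self-tuning wrapper: round-robin over candidate
    implementations for the first calls, timing each in situ, then
    commit to the fastest for all later calls."""
    def __init__(self, candidates, trials_per_candidate=3):
        self.candidates = list(candidates)
        self.trials = trials_per_candidate
        self.timings = {c.__name__: [] for c in self.candidates}
        self.best = None

    def __call__(self, *args):
        if self.best is not None:
            return self.best(*args)               # tuned: fixed algorithm
        k = sum(len(v) for v in self.timings.values())
        algo = self.candidates[k % len(self.candidates)]
        t0 = time.perf_counter()
        result = algo(*args)                      # measure on the real workload
        self.timings[algo.__name__].append(time.perf_counter() - t0)
        if k + 1 >= self.trials * len(self.candidates):
            self.best = min(self.candidates,
                            key=lambda c: min(self.timings[c.__name__]))
        return result

# Two equivalent implementations of the same operation, as a toy stand-in
# for two collective-communication algorithms.
def sum_builtin(xs): return sum(xs)
def sum_loop(xs):
    total = 0
    for x in xs: total += x
    return total

op = SelfTunedOp([sum_builtin, sum_loop])
print(op(list(range(10000))))  # early calls tune; later calls use the winner
```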
Does the Quality of Online Customer Experience Create a Sustainable Competitive Advantage for E-Commerce Firms? | Claims have often been made that the quality of the online customer experience in terms of web site ease of use, selection of goods offered, quality of customer service, the effectiveness of virtual community building, and site personalization are crucial to the success of e-commerce firms. If differences in the quality of online customer experiences provide a long-term competitive advantage, we would expect a positive relation between quality of online customer experience and shareholder value. Skeptics, however, argue that any advantage arising from the quality of online customer experiences would simply be competed away through imitation and innovation. To test these opposing views, we use scorecards of online customer experience provided by Gomez Advisors for a sample of 48 e-commerce firms during the period 4Q:1999 to 3Q:2000 and examine the relation between the scores and shareholder value. We measure shareholder value as the price-to-sales ratio, a measure commonly used for e-commerce firms. On average, we find that the association between online customer experience scores and the price-to-sales ratio is positive. In addition, the magnitude of the positive association is decreasing in the extent of competition (especially from brick-and-mortar firms) and the probability of failure, as measured by the amount of cash left to fund operations. The stock market appears to view differences in the quality of online customer experience as a viable competitive advantage even after the April 2000 stock market crash. |
Disturbance-Observer-Based Position Tracking Controller in the Presence of Biased Sinusoidal Disturbance for Electrohydraulic Actuators | A nonlinear position tracking controller with a disturbance observer (DOB) is proposed to track the desired position in the presence of the disturbance for electrohydraulic actuators (EHAs). The DOB is designed in the form of a second-order high-pass filter in order to estimate the disturbance. The nonlinear controller is designed for position tracking as a near input-output linearizing inner-loop load pressure controller and a backstepping outer-loop position controller. Variable structure control is implemented in order to compensate for the error in disturbance estimation. The desired load pressure is designed to generate the pressure using the differential flatness property of the EHA's mechanical subsystem. The disturbance within the bandwidth of the DOB can be cancelled by the proposed method. The performance of the proposed method is validated via simulations and experiments. |
Creating a Vision of Creativity: The First 25 Years | This article describes three stages of my attempts to understand, measure, and develop creative thinking. The first stage explored creative intelligence. The second investigated a theory of creativity, the investment theory. The third proposed a theory of creative leadership. Together, these three stages comprise the development of my thought on creativity—its nature, measurement, and development. |
Learning a Part-Based Pedestrian Detector in a Virtual World | Detecting pedestrians with on-board vision systems is of paramount interest for assisting drivers to prevent vehicle-to-pedestrian accidents. The core of a pedestrian detector is its classification module, which aims at deciding if a given image window contains a pedestrian. Given the difficulty of this task, many classifiers have been proposed during the last 15 years. Among them, the so-called (deformable) part-based classifiers, including multiview modeling, are usually top ranked in accuracy. Training such classifiers is not trivial since a proper aspect clustering and spatial part alignment of the pedestrian training samples are crucial for obtaining an accurate classifier. In this paper, we first perform automatic aspect clustering and part alignment by using virtual-world pedestrians, i.e., human annotations are not required. Second, we use a mixture-of-parts approach that allows part sharing among different aspects. Third, these proposals are integrated in a learning framework, which also allows incorporating real-world training data to perform domain adaptation between virtual- and real-world cameras. Overall, the obtained results on four popular on-board data sets show that our proposal clearly outperforms the state-of-the-art deformable part-based detector known as latent support vector machine. |
Stunting, wasting and associated factors among children aged 6–24 months in Dabat health and demographic surveillance system site: A community based cross-sectional study in Ethiopia | BACKGROUND
Though there has been a marked decline in the burden of undernutrition, about 44% and 10% of children under five are stunted and wasted, respectively, in Ethiopia. The highest prevalence of wasting occurs in young children (6-23 months); however, literature on these population groups is limited. Therefore, this study aimed to assess stunting, wasting and associated factors among children aged 6-24 months in the Dabat Health and Demographic Surveillance System (HDSS) site, northwest Ethiopia.
METHODS
A community based cross-sectional study was conducted in Dabat HDSS site from May 01 to June 29, 2015. A total of 587 mother-child pairs were included in the study. A multivariate logistic regression analysis was carried out to identify factors associated with stunting and wasting, separately.
RESULTS
The prevalences of stunting and wasting among children aged 6-24 months were 58.1% and 17.0%, respectively. Poor wealth status [Adjusted Odds Ratio (AOR) = 2.20; 95% Confidence Interval (CI): 1.42, 3.40], unavailability of a latrine [AOR = 1.76; 95% CI: 1.17, 2.66], child age 12-24 months [AOR = 3.24; 95% CI: 2.24, 4.69], not receiving maternal postnatal vitamin-A supplementation [AOR = 1.54; 95% CI: 1.02, 2.33] and own food production as the source of family food [AOR = 1.71; 95% CI: 1.14, 2.57] were significantly associated with higher odds of stunting. However, only a history of diarrheal morbidity was significantly associated with wasting [AOR = 2.06; 95% CI: 1.29, 3.30].
CONCLUSIONS
In this community, stunting and wasting constitute a severe public health concern. Therefore, improving socio-economic status, latrine availability and maternal postnatal vitamin-A supplementation coverage is essential to mitigate the high burden of stunting. Besides, reducing childhood diarrheal morbidity as well as strengthening early diagnosis and management of the problem are crucial to curb the high prevalence of wasting.
A Theory of Social Comparison Processes | In this paper we shall present a further development of a previously published theory concerning opinion influence processes in social groups (7). This further development has enabled us to extend the theory to deal with other areas, in addition to opinion formation, in which social comparison is important. Specifically, we shall develop below how the theory applies to the appraisal and evaluation of abilities as well as opinions. Such theories and hypotheses in the area of social psychology are frequently viewed in terms of how "plausible" they seem. "Plausibility" usually means whether or not the theory or hypothesis fits one's intuition or one's common sense. In this meaning, much of the theory which is to be presented here is not "plausible". The theory does, however, explain a considerable amount of data and leads to testable derivations. Three experiments, specifically designed to test predictions from this extension of the theory, have now been completed (5, 12, 19). They all provide good corroboration. In the following pages we will develop the theory and present the relevant data.
Better Static Memory Management: Improving Region-Based Analysis of Higher-Order Languages | Static memory management replaces runtime garbage collection with compile-time annotations that make all memory allocation and deallocation explicit in a program. We improve upon the Tofte/Talpin region-based scheme for compile-time memory management [TT94]. In the Tofte/Talpin approach, all values, including closures, are stored in regions. Region lifetimes coincide with lexical scope, thus forming a runtime stack of regions and eliminating the need for garbage collection. We relax the requirement that region lifetimes be lexical. Rather, regions are allocated late and deallocated as early as possible by explicit memory operations. The placement of allocation and deallocation annotations is determined by solving a system of constraints that expresses all possible annotations. Experiments show that our approach reduces memory requirements significantly, in some cases asymptotically.
Mokken scale analysis: Between the Guttman scale and parametric item response theory | This article introduces a model of ordinal unidimensional measurement known as Mokken scale analysis. Mokken scaling is based on principles of Item Response Theory (IRT) that originated in the Guttman scale. I compare the Mokken model with both Classical Test Theory (reliability or factor analysis) and parametric IRT models (especially with the one-parameter logistic model known as the Rasch model). Two nonparametric probabilistic versions of the Mokken model are described: the model of Monotone Homogeneity and the model of Double Monotonicity. I give procedures for dealing with both dichotomous and polytomous data, along with two scale analyses of data from the World Values Study that demonstrate the usefulness of the Mokken model.
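For dichotomous items, the central statistic of Mokken scaling, Loevinger's scalability coefficient H, can be computed in a few lines: H = 1 - F/E, with F the observed Guttman errors and E their expectation under marginal independence. A numpy sketch follows; the H >= 0.3 acceptance rule of thumb is conventional in the Mokken literature, not taken from this abstract.

```python
import numpy as np

def loevinger_h(X):
    """Loevinger's H for an (n_subjects, n_items) 0/1 response matrix.
    A Guttman error is passing the harder item of a pair while failing
    the easier one; H = 1 exactly reproduces a perfect Guttman scale."""
    X = np.asarray(X)
    n, k = X.shape
    X = X[:, np.argsort(-X.mean(axis=0))]   # reorder: easiest item first
    p = X.mean(axis=0)
    F = E = 0.0
    for i in range(k):
        for j in range(i + 1, k):           # item i is easier than item j
            F += np.sum((X[:, i] == 0) & (X[:, j] == 1))  # observed errors
            E += n * (1 - p[i]) * p[j]       # expected under independence
    return 1.0 - F / E
```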
The Vegetation Outlook ( VegOut ) : A New Method for Predicting Vegetation Seasonal Greenness | The vegetation outlook (VegOut) is a geospatial tool for predicting general vegetation condition patterns across large areas. VegOut predicts a standardized seasonal greenness (SSG) measure, which represents a general indicator of relative vegetation health. VegOut predicts SSG values at multiple time steps (two to six weeks into the future) based on the analysis of "historical patterns" (i.e., patterns at each 1 km grid cell and time of the year) of satellite, climate, and oceanic data over an 18-year period (1989 to 2006). The model underlying VegOut capitalizes on historical climate–vegetation interactions and ocean–climate teleconnections (such as El Niño and the Southern Oscillation, ENSO) expressed over the 18-year data record and also considers several environmental characteristics (e.g., land use/cover type and soils) that influence vegetation's response to weather conditions to produce 1 km maps that depict future general vegetation conditions. VegOut provides regional-level vegetation monitoring capabilities with local-scale information (e.g., county to sub-county level) that can complement more traditional remote sensing–based approaches that monitor "current" vegetation conditions. In this paper, the VegOut approach is discussed and a case study over the central United States for selected periods of the 2008 growing season is presented to demonstrate the potential of this new tool for assessing and predicting vegetation conditions.
Formalising the Design of an SECD chip | Several abstract SECD machines can be found in the literature, for example [Bur75], [Hen80], [FH88]. Typically, an SECD machine is characterised by the four status registers (Store, Environment, Control, Dump) from which its name is taken. |
Presence detection, identification and tracking in smart homes utilizing bluetooth enabled smartphones | Advances in ubiquitous computing over the last decade have allowed us to inch closer to the realization of true smart homes. Many sensors are already embedded in our living environments which can monitor several environmental parameters such as temperature, humidity, brightness and appliance-level power consumption. However, in order to achieve the primary goal of the smart home, we should be able to detect, identify, and localize the entities inside it. Therefore, the user detection, identification and localization problems represent a crucial facet of the challenges introduced by the smart home problem. Our approach towards solving these challenges entails the use of Bluetooth technology for user identification and tracking, alongside a Wireless Local Area Network setup to collate the sensor data at a centralized server, such as a home gateway, which subsequently processes and stores the entries. Moreover, we have studied the efficacy of various pattern recognition algorithms for real-time processing and decision modeling on the received data. We have hence demonstrated that our solution represents a non-intrusive, inexpensive and energy-conserving methodology for solving an essential part of the smart home problem by integrating already existent devices and infrastructure in an innocuous manner to obtain good results with minimum overhead.
Acoustic analysis with vocal loading test in occupational voice disorders: outcomes before and after voice therapy. | OBJECTIVE
To assess the usefulness of acoustic analysis with vocal loading test for evaluating the treatment outcomes in occupational voice disorders.
METHODS
Fifty-one female teachers with dysphonia were examined (Voice Handicap Index [VHI], laryngovideostroboscopy and acoustic analysis with vocal loading) before and after treatment. The outcomes of teachers receiving vocal training (group I) were compared with the outcomes of group II, which received only voice hygiene instructions.
RESULTS
The results of subjective assessment (VHI score) and objective evaluation (acoustic analysis) improved more significantly in group I than in group II. The post-treatment examination revealed a decreased percentage of subjects with deteriorated jitter parameters after vocal loading, particularly in group I.
CONCLUSIONS
Acoustic analysis with vocal loading test can be a helpful tool in the diagnosis and evaluation of treatment efficacy in occupational dysphonia. |
Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks | Congestion avoidance mechanisms allow a network to operate in the optimal region of low delay and high throughput, thereby preventing the network from becoming congested. This is different from the traditional congestion control mechanisms that allow the network to recover from the congested state of high delay and low throughput. Both congestion avoidance and congestion control mechanisms are basically resource management problems. They can be formulated as system control problems in which the system senses its state and feeds this back to its users who adjust their controls. The key component of any congestion avoidance scheme is the algorithm (or control function) used by the users to increase or decrease their load (window or rate). We abstractly characterize a wide class of such increase/decrease algorithms and compare them using several different performance metrics. The key metrics are efficiency, fairness, convergence time, and size of oscillations. It is shown that a simple additive increase and multiplicative decrease algorithm satisfies the sufficient conditions for convergence to an efficient and fair state regardless of the starting state of the network. This is the algorithm finally chosen for implementation in the congestion avoidance scheme recommended for Digital Networking Architecture and OSI Transport Class 4 Networks.
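The additive-increase/multiplicative-decrease result is easy to reproduce numerically: whatever the starting allocation of two users, additive increase preserves their difference while multiplicative decrease shrinks it, so the system converges toward the fair, efficient operating point. A minimal simulation, with illustrative constants:

```python
def aimd(x, y, capacity=1.0, a=0.01, b=0.85, steps=2000):
    """Two users sharing a link of the given capacity. On congestion
    feedback both back off multiplicatively; otherwise both increase
    by the same additive step."""
    for _ in range(steps):
        if x + y > capacity:         # congestion: multiplicative decrease
            x, y = b * x, b * y
        else:                        # spare capacity: additive increase
            x, y = x + a, y + a
    return x, y

print(aimd(0.9, 0.1))  # starts unfair ...
print(aimd(0.2, 0.3))  # ... different start, same neighborhood of (0.5, 0.5)
```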
The Use of Pit and Fissure Sealants—A Literature Review | This paper reviews the literature and discusses the latest updates on the use of pit and fissure sealants. It demonstrates the effectiveness of pit and fissure sealants in preventing caries and the management of early carious lesions. It compares the use of different sealant materials and their indications. It describes the application technique for sealants. It also reviews the cost-effectiveness of sealants as a preventive strategy. From this review and after the discussion of recently published studies on pit and fissure sealants, it is evident that sealants are effective in caries prevention and in preventing the progression of incipient lesions. It is therefore recommended that pit and fissure sealant be applied to high-caries-risk children for optimum cost-effectiveness. Sealant application is a highly technique-sensitive procedure that needs optimum isolation, cleaning of the tooth surface, etching, and the application of a thin bonding layer for maximum benefit. Recall and repair, when needed, are important to maximize the effectiveness of such sealant use. |
Defect reconstruction in knee arthroplasty with wedges and blocks | Surgical technique for primary and revision total knee arthroplasty to reconstruct periarticular bone defects with metal augments and to restore the joint line reproducibly. Indications: primary and revision total knee arthroplasty with periarticular bone defects (AORI type II according to the classification of the Anderson Orthopaedic Research Institute). Contraindication: bone defects with complete destruction of the metaphysis. Implantation of the components is performed in three consecutive steps: first, positioning of the tibial component at the correct height and rotation; second, determination of the posterior joint line in flexion through the size and correct rotation of the femoral implant; third, determination of the distal joint line by the positioning of the femoral component. These steps are performed independently of the bone defects, which are subsequently freshened and reconstructed with metal augments. Postoperative treatment: mobilization with pain-adapted full weight bearing and range of motion as tolerated, depending on the osseous and soft tissue condition. In a prospective study, 132 consecutive primary and revision knee arthroplasties in 76 women and 56 men with an average age of 72.4 years (range 49–93 years) were followed up clinically and radiologically preoperatively and at a mean follow-up of 74 months (range 38–105 months). Clinical results were based on the functional Knee Society Score, which improved from 46.3 (range 31–65) preoperatively to 82.5 (range 61–96) at follow-up. Radiologically, 12.1% of the knees showed lysis around the augment without clinical signs of loosening. No revisions were performed due to aseptic loosening. The joint line was correctly reconstructed in 84.8%. |
Agreement between periapical radiographs and cone-beam computed tomography for assessment of periapical status of root filled molar teeth. | AIM
To assess the agreement between periapical radiograph (PA) and cone-beam computed tomography (CBCT) for periapical assessment of root filled maxillary and mandibular molars.
METHODOLOGY
Periapical radiograph and CBCT (iCat) images of 60 previously root filled molars (30 maxillary and 30 mandibular) were obtained at a review clinic. Agreement between PA and CBCT assessments of (i) number of canals per tooth, (ii) number of lesions per tooth, (iii) mesial-distal dimension of lesions, (iv) coronal-apical dimension of lesions and (v) presence of 'J'-shaped lesions was determined in comparison analyses and correlation analysis.
RESULTS
There were significant differences between PA and CBCT assessment for the mean number of canals (P < 0.001) and periapical lesions (P < 0.001), mean mesial-distal (P < 0.001) and coronal-apical dimension of the lesion (if present; P < 0.001) and the mean number of 'J'-shaped lesions (P < 0.05). The magnitude of the statistical differences (or bias) was greater for maxillary than mandibular molars regarding the number and size of the lesions identified. Correlation values were weaker between PA and CBCT assessments of maxillary molars than for mandibular molars in all parameters assessed.
CONCLUSION
There were substantial disagreements between PA and CBCT for assessing the periapical status of molar teeth, especially for the maxillary arch. The findings have implications in periapical diagnosis and for evaluating the outcome of endodontic care. |
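As an illustration of the comparison and correlation analyses described above, the following sketch runs a paired t-test and Pearson correlation on invented lesion dimensions; the study's actual statistical procedures may differ.

```python
# Minimal sketch of a modality-agreement analysis on invented lesion
# dimensions (mm); scipy's paired t-test and Pearson r stand in for
# whatever tests the study actually used.
import numpy as np
from scipy import stats

pa   = np.array([3.1, 0.0, 2.5, 4.0, 1.8, 0.0, 3.3])   # lesion size on PA
cbct = np.array([4.2, 1.5, 3.9, 5.1, 2.9, 1.1, 4.5])   # same teeth on CBCT

t, p = stats.ttest_rel(pa, cbct)    # systematic bias between modalities
r, _ = stats.pearsonr(pa, cbct)     # agreement in ranking of lesion sizes
print(f"mean bias = {np.mean(cbct - pa):.2f} mm, p = {p:.4f}, r = {r:.2f}")
```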
Linear codes over F2+uF2+vF2+uvF2 | In this work, we investigate linear codes over the ring F2 + uF2 + vF2 + uvF2. We first analyze the structure of the ring and then define linear codes over it; unlike the rings hitherto studied in coding theory, this ring is neither a finite chain ring nor a principal ideal ring. Lee weights and Gray maps for these codes are defined by extending those introduced in works such as Betsumiya et al. (Discret Math 275:43–65, 2004) and Dougherty et al. (IEEE Trans Inf 45:32–45, 1999). We then characterize the F2 + uF2 + vF2 + uvF2-linearity of binary codes under the Gray map and give a main class of binary codes as an example of F2 + uF2 + vF2 + uvF2-linear codes. The duals and the complete weight enumerators for F2 + uF2 + vF2 + uvF2-linear codes are also defined, after which MacWilliams-like identities for complete and Lee weight enumerators, as well as for the ideal decompositions of linear codes over F2 + uF2 + vF2 + uvF2, are obtained. |
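For intuition, one natural way to build a Gray map on F2 + uF2 + vF2 + uvF2 is to iterate the familiar F2 + uF2 map phi(a + ub) = (b, a + b). The paper's exact map may differ in conventions and coordinate order, so the sketch below is only illustrative; what it preserves is the defining property that Lee weight equals the Hamming weight of the binary image.

```python
# Hedged sketch of a Gray map for R = F2 + uF2 + vF2 + uvF2, built by
# iterating the F2+uF2 map phi(a+ub) = (b, a+b); the paper's exact map may
# differ, but any such map sends Lee weight to Hamming weight of the image.
def gray_f2u(a, b):
    """Gray map on F2 + uF2: a + ub -> (b, a + b) over F2."""
    return (b, (a + b) % 2)

def gray_ring(a, b, c, d):
    """View a + ub + vc + uvd as A + vB with A = a + ub, B = c + ud over
    F2 + uF2; map to (B, A + B), then apply gray_f2u coordinatewise to
    obtain a length-4 binary word."""
    B = (c, d)
    ApB = ((a + c) % 2, (b + d) % 2)
    return gray_f2u(*B) + gray_f2u(*ApB)

def lee_weight(a, b, c, d):
    """Lee weight defined as the Hamming weight of the Gray image."""
    return sum(gray_ring(a, b, c, d))

# Example: the unit 1 has Lee weight 1, while uv maps to the all-ones word
# (maximal Lee weight 4) under this construction.
print(gray_ring(1, 0, 0, 0), lee_weight(1, 0, 0, 0))
print(gray_ring(0, 0, 0, 1), lee_weight(0, 0, 0, 1))
```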
Lack of effect of beta-blocker on flat dose response to thiazide in hypertension: efficacy of low dose thiazide combined with beta-blocker. | Increasing the dose of a thiazide diuretic used alone in patients with essential hypertension has little further effect on blood pressure but increases the deleterious metabolic consequences of the diuretic. The effect of a beta-blocker on this flat dose response is not known. In two randomised crossover studies the effect of 12.5 mg, 25 mg, and 50 mg hydrochlorothiazide combined with 400 mg acebutolol was assessed. The mean fall in supine blood pressure was about 15% and was the same whatever dose of thiazide was used with the beta-blocker. As the dose of hydrochlorothiazide was increased, however, there was evidence of increasing metabolic consequences of the diuretic. The study did not define the minimum dose of diuretic, and doses of hydrochlorothiazide lower than 12.5 mg might be as effective. These results suggest that many patients who are being treated with a combination of a beta-blocker and a diuretic are receiving unnecessarily large amounts of the diuretic without benefit to their blood pressure and with adverse metabolic consequences. |
First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations | In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprising more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations is measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition. |
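As a flavour of the pose-based baselines evaluated above, the following sketch reduces a sequence of 21-joint 3D poses to simple temporal statistics and trains a linear classifier. The shapes follow the 21-joint hand model mentioned in the abstract, but the data are invented and none of the 18 benchmarked methods is reproduced here.

```python
# Minimal sketch of a pose-based action-recognition baseline: each frame
# contributes 21 joints x 3 coordinates, and a whole clip is reduced to
# temporal mean/std features before a linear classifier. Data are invented.
import numpy as np
from sklearn.svm import LinearSVC

def pose_features(seq):
    """seq: (num_frames, 21, 3) array of 3D joint positions.
    Concatenate the temporal mean and std of each coordinate."""
    flat = seq.reshape(len(seq), -1)                     # (T, 63)
    return np.concatenate([flat.mean(0), flat.std(0)])  # (126,)

rng = np.random.default_rng(0)
# Two fake action classes, 20 clips each, 50 frames per clip.
X = np.stack([pose_features(rng.normal(c, 1.0, (50, 21, 3)))
              for c in (0.0, 1.0) for _ in range(20)])
y = [0] * 20 + [1] * 20
clf = LinearSVC(dual=False).fit(X, y)
print("train accuracy:", clf.score(X, y))
```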
Whippersnapper: A P4 Language Benchmark Suite | As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages. |
Research of Output-Pin-Wheel Cycloid Drive Reducer | By analyzing the shortcomings of the various existing types of cycloid drive, a new kind of cycloid drive, named the output-pin-wheel cycloid drive, is proposed. The structure of the reducer and the structural dimensions of specific components are designed based on an analysis of its working principle. To validate the correctness of the drive model and the assembly relations of the parts, a three-dimensional solid model and a virtual prototype are built. Dynamic simulation of the virtual prototype is performed under no-load and full-load conditions, which provides a theoretical basis for the manufacture and optimization of the reducer. |
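For background, the reduction ratio of a conventional single-stage cycloid drive follows from the one-tooth difference between the cycloid disc and the ring pins; the output-pin-wheel variant proposed here may differ, so the relation below is stated only as the classical baseline (z_p = number of ring pins, z_c = number of disc lobes).

```latex
% Classical single-stage cycloid reduction ratio, stated only as background;
% the output-pin-wheel variant studied in the paper may differ.
\[
  i \;=\; \frac{z_c}{z_p - z_c} \;=\; z_c
  \qquad \text{when } z_p = z_c + 1 .
\]
```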
Crohn's disease of the vulva in a 10-year-old girl. | Crohn's disease may involve all parts of the gastrointestinal tract and may often involve other organs as well. These non-intestinal affections are termed extraintestinal manifestations. Vulval involvement is an uncommon extraintestinal manifestation of Crohn's disease, and it is very rare in children. Patients with vulval CD typically present with erythema and edema of the labia majora, which progresses to extensive ulcer formation. Vulval Crohn's disease can appear before or after intestinal problems or it may occur simultaneously. We present a 10-year-old girl with intestinal Crohn's disease complicated with perianal skin tags and asymptomatic unilateral labial hypertrophy. The course of her lesion was independent of the intestinal disease and responded significantly to medical treatment including azathioprine and topical steroid. We emphasize that although vulval involvement in childhood is uncommon, Crohn's disease must be considered in the differential diagnosis of nontender, red, edematous lesions of the genital area. |
A Study of Wind Farm Stabilization Using DFIG or STATCOM Considering Grid Requirements | Recent grid codes require the reactive power of a wind farm to be taken into account so that the farm contributes to network stability, in effect operating as an active compensation device. This paper presents a comparative study of stabilizing a wind farm using Doubly Fed Induction Generators (DFIGs) or a Static Synchronous Compensator (STATCOM) during wind speed changes and grid faults. Simulation results show that the wind farm can be effectively stabilized with both systems, but at a reduced cost with the DFIG system, because the DFIGs can provide reactive power through their frequency converters without an external reactive power compensation unit such as the STATCOM. |
Refining Gondwana and Pangea palaeogeography: estimates of Phanerozoic non‐dipole (octupole) fields | S U M M A R Y Reliable Phanerozoic paleopoles have been selected from the stable parts of the Gondwana continents and, upon appropriate reconstruction, have been combined in an apparent polar wander (APW) path, which can be compared with a previously compiled path for Laurussia. This comparison once again confirms that Pangea-A reconstructions for Late Palaeozoic and early Mesozoic times cannot be reconciled with the available paleomagnetic data, unless these data are corrected for latitudinal errors caused by non-dipole (octupole) field contributions or by inclination shallowing. Because the discrepancies persist even when only paleopoles from igneous rocks are used, inclination shallowing cannot be the sole cause of the problem. There is an apparent decrease in the percentage of octupole field contributions needed as a function of time; for Mesozoic and younger time, Gondwana–Laurussia comparisons require, on average, lower ratios of octupole/dipole fields than for Palaeozoic time. However, the Gondwana paleopoles for the Palaeozoic include a much greater proportion of results derived from sedimentary rocks than do those for the Mesozoic, so that this apparently diminishing octupole field contribution may be an artefact. We have also examined whether the clustering of coeval Gondwana poles improves with optimal G3 contributions, but found that while there are improvements, they are not systematic and not statistically significant. A combined APW path has been constructed for Pangea for times since the Mid-Carboniferous, which accounts for octupole fields, or equivalently, inclination shallowing. We argue that this ‘global’ path is an improvement over previous constructions as it represents a self-consistent plate tectonic model and does not violate widely accepted Pangea-A reconstructions. |
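The latitude corrections at issue follow from the geocentric axial dipole (GAD) relation and its zonal-octupole generalization. Under the usual conventions (lambda the site latitude, G3 = g3/g1 the octupole/dipole ratio), the inclination satisfies the relations sketched below; this is the standard zonal-field result, stated here for orientation rather than as the paper's exact parameterization.

```latex
% GAD relation and its zonal dipole-plus-octupole extension;
% \lambda is site latitude and G_3 = g_3^0 / g_1^0.
\[
  \tan I \;=\; 2\tan\lambda
  \qquad \text{(pure dipole)}
\]
\[
  \tan I \;=\;
  \frac{2\sin\lambda + 2G_3\left(5\sin^{3}\lambda - 3\sin\lambda\right)}
       {\cos\lambda + \tfrac{3}{2}\,G_3\cos\lambda\left(5\sin^{2}\lambda - 1\right)}
  \qquad \text{(dipole + octupole)}
\]
```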