Columns: title (string, 8–300 characters); abstract (string, 0–10k characters)
Conference on Urban Runoff Quality-Objectives
The agenda for this conference on urban runoff quality was structured to expedite technology transfer through the expression, consolidation and evaluation of opinions, facts, experience and past research on the topic of urban runoff quality. The conference brings together the leaders in the field and provides an opportunity for researchers, practitioners and administrators to explore urban runoff quality, its impacts on receiving waters and impacts mitigation technologies.
MDig: Multi-digit Recognition using Convolutional Neural Network on Mobile
Multi-character recognition in arbitrary photographs on a mobile platform is difficult, in terms of both accuracy and real-time performance. In this paper, we focus on the domain of hand-written multi-digit recognition. Convolutional neural networks (CNNs) are the state-of-the-art solution for object recognition, but they present a workload that is both compute- and data-intensive. To reduce the workload, we train a shallow CNN offline, achieving 99.07% top-1 accuracy. We also use preprocessing and segmentation to reduce the size of the input images fed into the CNN. For the CNN implementation on the mobile platform, we adopt and modify DeepBeliefSDK to support batching of fully-connected layers. On an NVIDIA SHIELD tablet, the application processes a frame and extracts 32 digits in approximately 60 ms, and batching the fully-connected layers reduces the CNN runtime by a further 12%.
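The 12% saving from batching comes from replacing many small matrix-vector products with one matrix-matrix product over all segmented digits. A minimal numpy sketch of that idea follows; the feature dimension, digit count and weights are illustrative assumptions, and this is not the DeepBeliefSDK API.

```python
# Illustrative sketch: batching the fully-connected layer over all segmented digits.
import numpy as np

n_digits, feat_dim, n_classes = 32, 256, 10
W = np.random.randn(feat_dim, n_classes) * 0.01   # hypothetical FC weights
b = np.zeros(n_classes)
feats = np.random.randn(n_digits, feat_dim)       # conv features, one row per digit

# Unbatched: 32 separate matrix-vector products.
logits_unbatched = np.stack([f @ W + b for f in feats])

# Batched: a single matrix-matrix product over all digits.
logits_batched = feats @ W + b

assert np.allclose(logits_unbatched, logits_batched)
digits = logits_batched.argmax(axis=1)            # top-1 class per digit
```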
Short-term evolution of spinal cord damage in multiple sclerosis: a diffusion tensor MRI study
The potential of diffusion tensor imaging (DTI) to detect spinal cord abnormalities in patients with multiple sclerosis has already been demonstrated. The objective of this study was to apply DTI techniques to multiple sclerosis patients with a recently diagnosed spinal cord lesion, in order to demonstrate a correlation between variations of DTI parameters and clinical outcome, and to try to identify DTI parameters predictive of outcome. A prospective single-centre study of patients with spinal cord relapse treated by intravenous steroid therapy was conducted. Patients were assessed clinically and by conventional MRI with DTI sequences at baseline and at 3 months. Sixteen patients were recruited. At 3 months, 12 patients were clinically improved. All but one patient had lower fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values than normal subjects in either inflammatory lesions or normal-appearing spinal cord. Patients who improved at 3 months presented a significant reduction in radial diffusivity (p = 0.05) in lesions during the follow-up period. They also had a significant reduction in mean ADC (p = 0.002), axial diffusivity (p = 0.02) and radial diffusivity (p = 0.02), and a significant increase in FA values (p = 0.02), in normal-appearing spinal cord. Patients in whom the American Spinal Injury Association sensory score improved at 3 months showed a significantly higher FA (p = 0.009) and lower radial diffusivity (p = 0.04) in inflammatory lesions at baseline compared to patients with no improvement. DTI MRI detects more extensive abnormalities than conventional T2 MRI. A less marked decrease in FA and a more marked decrease in radial diffusivity inside the inflammatory lesion were associated with a better outcome.
Readability of Texts: State of the Art
In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize readers with the most frequently used readability formulas as well as the arguments for and against the use of such formulas. This part mostly reviews studies on readability formulas and the results obtained. Its main objective is to help readers become familiar with the background of the formulas, the theory on which they rest, and what they are and are not good for, with regard to a number of studies cited in this section.
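For readers unfamiliar with such formulas, the sketch below shows one of the most frequently used ones, the Flesch Reading Ease score; the sample counts are made up, and real tools estimate syllable counts automatically.

```python
# A worked example of one widely used readability formula (Flesch Reading Ease).
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """206.835 - 1.015*(words/sentences) - 84.6*(syllables/words); higher = easier."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# e.g. a 100-word passage with 5 sentences and 140 syllables:
print(round(flesch_reading_ease(100, 5, 140), 1))   # ~68.1, roughly "standard" difficulty
```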
Survey on Collaborative AR for Multi-user in Urban Simulation
This paper describes an augmented reality (AR) environment that allows multiple participants, or multi-user interaction, with 2D and 3D data. AR can provide a collaborative interactive environment for urban simulation, where users can interact naturally and intuitively. In addition, collaborative AR allows multiple users in an urban simulation to share a real world and a virtual world simultaneously. The fusion of the real and virtual worlds, achieved in an AR environment through see-through HMDs, offers higher interactivity and is a key feature of collaborative AR. In real time, precise registration between the two worlds and among multiple users is crucial for collaboration. Common problems in AR environments are discussed and major issues in collaborative AR are explained in detail in this paper. The features of collaboration in AR environments are identified and the requirements of collaborative AR are defined. This paper gives an overview of collaborative AR environments for multiple users in urban studies and planning, and also covers numerous existing multi-user collaborative AR systems.
A waitlist-controlled trial of behavioral parent training for fathers of children with ADHD.
Fathers, in general, have been underrepresented in studies of parent training outcome for children with attention deficit hyperactivity disorder (ADHD), and the present study aimed to investigate the efficacy of a behavioral parent training program developed expressly for fathers. The present investigation randomly assigned 55 fathers of children ages 6 to 12 with ADHD to the Coaching Our Acting-out Children: Heightening Essential Skills (COACHES) program or a waitlist control group. Outcomes for the study included objective observations of parent behaviors and parent ratings of child behavior. Results indicated that fathers in the COACHES group reduced their rates of negative talk and increased rates of praise as measured in parent-child observations, and father ratings of the intensity of problem behaviors were reduced, relative to the waitlist condition. Groups did not differ on observations of use of commands or father ratings of child behavior problems. Untreated mothers did not significantly improve on observational measures or behavioral ratings. This study provides preliminary evidence for the efficacy of the COACHES parenting program for fathers of children with ADHD. Results are cast in light of the larger literature on behavioral parent training for ADHD as well as how to best work with fathers of children with ADHD in treatment contexts.
Horticultural therapy: the 'healing garden' and gardening in rehabilitation measures at Danderyd Hospital Rehabilitation Clinic, Sweden.
OBJECTIVES Objectives were to review the literature on horticultural therapy and describe the Danderyd Hospital Horticultural Therapy Garden and its associated horticultural therapy programme. DESIGN The literature review is based on the search words 'gardening', 'healing garden' and 'horticultural therapy'. The description is based on the second author's personal knowledge and popular-scientific articles initiated by her. The material has been integrated with acknowledged occupational therapy literature. SETTING The setting was the Danderyd Hospital Rehabilitation Clinic, Sweden, Horticultural Therapy Garden. PARTICIPANTS Forty-six patients with brain damage participated in group horticultural therapy. RESULTS Horticultural therapy included the following forms: imagining nature, viewing nature, visiting a hospital healing garden and, most important, actual gardening. It was expected to influence healing, alleviate stress, increase well-being and promote participation in social life and re-employment for people with mental or physical illness. The Horticultural Therapy Garden was described regarding the design of the outdoor environment, adaptations of garden tools, cultivation methods and plant material. The therapy programme for mediating mental healing, recreation, social interaction, sensory stimulation, cognitive re-organization and training of sensory motor function is outlined, and pre-vocational skills and the teaching of ergonomic body positions are assessed. CONCLUSION This study gives a broad historic survey and a systematic description of horticultural therapy with emphasis on its use in rehabilitation following brain damage. Horticultural therapy mediates emotional, cognitive and/or sensory motor functional improvement, increased social participation, health, well-being and life satisfaction. However, the effectiveness, especially of the interacting and acting forms, needs investigation.
Global Relation Embedding for Relation Extraction
We study the problem of textual relation embedding with distant supervision. To combat the wrong labeling problem of distant supervision, we propose to embed textual relations with global statistics of relations, i.e., the cooccurrence statistics of textual and knowledge base relations collected from the entire corpus. This approach turns out to be more robust to the training noise introduced by distant supervision. On a popular relation extraction dataset, we show that the learned textual relation embedding can be used to augment existing relation extraction models and significantly improve their performance. Most remarkably, for the top 1,000 relational facts discovered by the best existing model, the precision can be improved from 83.9% to 89.3%.
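A hedged sketch of the global-statistics idea described above: co-occurrence counts between textual relations and knowledge base relations, aggregated over a toy "corpus" and normalized into distributions. The example relations are invented, and the real pipeline operates on dependency-path textual relations at much larger scale.

```python
# Aggregate, over the whole corpus, how often each textual relation co-occurs
# with each KB relation, then normalize into a (less noisy) training signal.
from collections import Counter, defaultdict

# (textual_relation, kb_relation) pairs produced by distant supervision -- toy data
pairs = [("was born in", "place_of_birth"), ("was born in", "place_of_birth"),
         ("was born in", "place_of_death"), ("lived in", "place_of_residence")]

counts = defaultdict(Counter)
for text_rel, kb_rel in pairs:
    counts[text_rel][kb_rel] += 1

global_stats = {t: {k: c / sum(ctr.values()) for k, c in ctr.items()}
                for t, ctr in counts.items()}
# e.g. {'was born in': {'place_of_birth': 0.67, 'place_of_death': 0.33}, ...}
```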
Principles of cosmetic dentistry in orthodontics: Part 1. Shape and proportionality of anterior teeth.
In the past decade, there has been a remarkable upswing in interdisciplinary collaboration between dentists, orthodontists, and periodontists in smile enhancement, and now an entire field of “cosmetic periodontics” has evolved in collaboration with cosmetic dentistry. Contemporary orthodontic smile analysis is generally defined in terms of (1) vertical placement of the anterior teeth relative to the upper lip at rest and on smile (adequate incisor display but not too gummy), (2) transverse smile dimension (buccal corridors), (3) smile arc characteristics, and (4) the vertical relationship of gingival margins to each other. Through the interaction with these other disciplines and the knowledge gained, we have expanded our diagnosis of the smile to further refine the finishing of anterior esthetics for our patients. As our interaction with cosmetic dentistry has increased, we have become very aware of what standards guide the dentist who strives for an excellent smile. Through cosmetic bonding and laminate veneers, the dentist can control tooth shape by adding or taking away from the tooth, crown, or laminate. As orthodontists, we have generally limited our tooth-reshaping efforts to incisal edge “dressing.” The purpose of this article is to examine some cosmetic ideas and present new ways in which we can improve the smiles of our patients. In Part 1, I will define and illustrate how these principles are applied to improve the cosmetics of orthodontic patients. In Part 2, my coauthor and I will review the new laser technology available for reshaping soft tissues, and, in Part 3, we will discuss the clinical use of those lasers.
Optimal Design Strategy for Improved Operation of IPM BLDC Motors With Low-Resolution Hall Sensors
This study proposes an interior permanent magnet (IPM) brushless dc (BLDC) motor design strategy that utilizes BLDC control based on Hall sensor signals. The magnetic flux of IPM motors varies according to the rotor position and abnormal Hall sensor problems are related to magnetic flux. To find the cause of the abnormality in the Hall sensors, an analysis of the magnetic flux density at the Hall sensor position by finite element analysis is conducted. In addition, an IPM model with a notch structure is proposed to solve abnormal Hall sensor problems and its magnetic equivalent circuit (MEC) model is derived. Based on the MEC model, an optimal rotor design method is proposed and the final model is derived. However, the Hall sensor signal achieved from the optimal rotor is not perfect. To improve the accuracy of the BLDC motor control, a rotor position estimation method is proposed. Finally, experiments are performed to evaluate the performance of the proposed IPM-type BLDC motor and the Hall sensor compensation method.
Perceived fit and satisfaction on web learning performance: IS continuance intention and task-technology fit perspectives
Virtual learning systems (VLSs) are information systems that facilitate e-learning and have been widely implemented by higher education institutions to support face-to-face teaching and self-managed learning in the virtual learning environment (VLE). This is referred to as blended learning instruction. By adopting a VLS, students are expected to enhance learning by getting access to course-related information and having full opportunities to interact with instructors and peers. However, there are mixed findings in the literature with respect to the learning outcomes of adopting VLSs. In this study, we argue that the link between the precedents that lead students to continue to use VLSs and their impacts on learning effectiveness and productivity has been overlooked in the literature. This paper aims to tackle this question by integrating information system (IS) continuance theory with task-technology fit (TTF) to extend our understanding of the precedents of the intention to continue using a VLS and their impacts on learning. By doing so, factors of technology acceptance to performance, based on the TAM (technology acceptance model) and TTF, and of post-technology acceptance, based on expectation-confirmation theory, can be included and tested in one study. The results reveal that perceived fit and satisfaction are important precedents of the intention to continue using a VLS and of individual performance. A discussion and conclusions are then provided. This study sheds light on the design of IS-assisted learning systems in VLEs and can serve as a basis for promoting VLSs to assist learning.
Trading end-to-end latency for composability
The periodic resource model for hierarchical, compositional scheduling abstracts task groups by resource requirements. We study this model in the presence of dataflow constraints between the tasks within a group (intra-group dependencies) and between tasks in different groups (inter-group dependencies). We consider two natural semantics for dataflow constraints, namely RTW (real-time workshop) semantics and LET (logical execution time) semantics. We show that while RTW semantics offers better end-to-end latency at the task group level, LET semantics allows tighter resource bounds in the abstraction hierarchy and therefore provides better composability properties. This result holds both for intra-group and inter-group dependencies, as well as for shared and for distributed resources.
Strategic Firm Commitments and Rewards for Customer Relationship Management in Online Retailing
Academic studies offer a generally positive portrait of the effect of customer relationship management (CRM) on firm performance, but practitioners question its value. The authors argue that a firm’s strategic commitments may be an overlooked organizational factor that influences the rewards for a firm’s investments in CRM. Using the context of online retailing, the authors consider the effects of two key strategic commitments of online retailers on the performance effect of CRM: their bricks-and-mortar experience and their online entry timing. They test the proposed model with a multimethod approach that uses manager ratings of firm CRM and strategic commitments and third-party customers’ ratings of satisfaction from 106 online retailers. The findings indicate that firms with moderate bricks-and-mortar experience are better able to leverage CRM for superior customer satisfaction outcomes than firms with either low or high bricks-and-mortar experience. Likewise, firms with moderate online experience are better able to leverage CRM into superior customer satisfaction outcomes than firms with either low or high online experience. These findings help resolve disparate results about the value of CRM, and they establish the importance of examining CRM within the strategic context of the firm.
Male-to-female transsexualism: technique, results and 3-year follow-up in 50 patients.
AIM To evaluate the functional and cosmetic results of male-to-female gender-transforming surgery. PATIENTS AND METHODS Between May 2001 and April 2008 we performed 50 male-to-female gender-transforming surgeries. All patients had been cross-dressing, living as women, and receiving estrogen and progesterone for at least 12 months, which was sufficient for breast development and atrophy of the testes and prostate to occur. This hormonal therapy was suspended 1 month before the operation. RESULTS The mean operative time was 190 min and the mean depth of the vagina was 10 cm. On follow-up, the most common complication (10%) was shrinkage of the neovagina, which could be corrected by a second surgical intervention. Of the 50 patients, 45 (90%) were satisfied with the esthetic results; 42 patients (84%) reported having regular sexual intercourse, 2 of whom had pain during intercourse. Of the 50 patients, 35 (70%) reported achieving clitoral orgasm. CONCLUSION Male-to-female gender-transforming surgery can assure satisfactory cosmetic and functional results, with a reduced intra- and postoperative morbidity. Nevertheless the experience of the surgeon and the center remains central to obtaining optimal results.
Fossil chrysophycean cyst flora of Racze Lake, Wolin Island (Poland) in relation to paleoenvironmental conditions
The study presents stratigraphic distribution of fossil chrysophycean cysts in the bottom sediments of Racze Lake (Poland). Thirty morphotypes are described, most of them for the first time. The description of the cyst includes SEM microphotographs. The long-term relationship between the lake's conditions and the occurrence of characteristic morphotypes of chrysophycean cysts is discussed.
Pharmacodynamic effects of high dose lovastatin in subjects with advanced malignancies
Lovastatin, an inhibitor of the rate-limiting enzyme in the cholesterol biosynthetic pathway, hydroxymethylglutaryl coenzyme A reductase, has shown interesting antiproliferative activities in cell culture and in animal models of cancer. The goal of the current study was to determine whether lovastatin bioactivity levels, in a range equivalent to those used in in vitro and preclinical studies, can be safely achieved in human subjects. Here we present the findings from a dose-escalating trial of lovastatin in subjects with advanced malignancies. Lovastatin was administered every 6 h for 96 h in 4-week cycles in doses ranging from 10 mg/m2 to 415 mg/m2. Peak plasma lovastatin bioactivity levels of 0.06–12.3 μM were achieved in a dose-independent manner. Cholesterol levels decreased during treatment and normalized during the rest period. A dose-limiting toxicity was not reached and there were no clinically significant increases in creatine phosphokinase or serum hepatic aminotransferase levels. No antitumor responses were observed. These results demonstrate that high doses of lovastatin, given every 6 h for 96 h, are well tolerated and, in select cases, bioactivity levels in the range necessary for antiproliferative activity were achieved.
Convolutional deep maxout networks for phone recognition
Convolutional neural networks have recently been shown to outperform fully connected deep neural networks on several speech recognition tasks. Their superior performance is due to their convolutional structure that processes several, slightly shifted versions of the input window using the same weights, and then pools the resulting neural activations. This pooling operation makes the network less sensitive to translations. The convolutional network results published up till now used sigmoid or rectified linear neurons. However, quite recently a new type of activation function called the maxout activation has been proposed. Its operation is closely related to convolutional networks, as it applies a similar pooling step, but over different neurons evaluated on the same input. Here, we combine the two technologies, and experiment with deep convolutional neural networks built from maxout neurons. Phone recognition tests on the TIMIT database show that switching to maxout units from rectifier units decreases the phone error rate for each network configuration studied, and yields relative error rate reductions of between 2% and 6%.
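As a reference point for the maxout activation discussed above, here is a minimal numpy sketch of a maxout layer; the layer sizes and the number of pieces k are arbitrary choices, not the paper's configuration.

```python
# Minimal sketch of a maxout layer: each output unit takes the max over k linear
# "pieces" computed from the same input.
import numpy as np

def maxout_layer(x, W, b):
    """x: (d_in,), W: (k, d_out, d_in), b: (k, d_out) -> (d_out,)"""
    pieces = np.einsum("kod,d->ko", W, x) + b   # k candidate activations per unit
    return pieces.max(axis=0)                   # pool over the k pieces

rng = np.random.default_rng(0)
x = rng.normal(size=128)
W = rng.normal(size=(3, 64, 128)) * 0.01        # k=3 pieces, 64 output units
b = np.zeros((3, 64))
h = maxout_layer(x, W, b)                       # shape (64,)
```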
Parametric Hidden Markov Models for Gesture Recognition
A new method for the representation, recognition, and interpretation of parameterized gesture is presented. By parameterized gesture we mean gestures that exhibit a systematic spatial variation; one example is a point gesture where the relevant parameter is the two-dimensional direction. Our approach is to extend the standard hidden Markov model method of gesture recognition by including a global parametric variation in the output probabilities of the HMM states. Using a linear model of dependence, we formulate an expectation-maximization (EM) method for training the parametric HMM. During testing, a similar EM algorithm simultaneously maximizes the output likelihood of the PHMM for the given sequence and estimates the quantifying parameters. Using visually derived and directly measured three-dimensional hand position measurements as input, we present results that demonstrate the recognition superiority of the PHMM over standard HMM techniques, as well as greater robustness in parameter estimation with respect to noise in the input features. Last, we extend the PHMM to handle arbitrary smooth (nonlinear) dependencies. The nonlinear formulation requires the use of a generalized expectation-maximization (GEM) algorithm for both training and the simultaneous recognition of the gesture and estimation of the value of the parameter. We present results on a pointing gesture, where the nonlinear approach permits the natural spherical coordinate parameterization of pointing direction. Index Terms: Gesture recognition, hidden Markov models, expectation-maximization algorithm, time-series modeling, computer vision.
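A small sketch of the linear-dependence idea: each state's output density is a Gaussian whose mean is a linear function of the global gesture parameter. The dimensions and values are illustrative, and training and testing would wrap this emission model in the EM procedures the abstract describes.

```python
# Sketch: each state's Gaussian output mean depends linearly on the parameter theta.
import numpy as np
from scipy.stats import multivariate_normal

def emission_prob(x, theta, W_j, mu_j, Sigma_j):
    """p(x | state j, theta) with mean mu_j(theta) = W_j @ theta + mu_j."""
    mean = W_j @ theta + mu_j
    return multivariate_normal.pdf(x, mean=mean, cov=Sigma_j)

# Toy numbers: 3-D observation, 2-D parameter (e.g. pointing direction).
W_j = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
mu_j = np.zeros(3)
Sigma_j = np.eye(3) * 0.1
theta = np.array([0.3, -0.2])
p = emission_prob(np.array([0.25, -0.15, 0.1]), theta, W_j, mu_j, Sigma_j)
```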
Magnetic light
Spherical silicon nanoparticles with sizes of a few hundred nanometers represent a unique optical system. According to theoretical predictions based on Mie theory, they can exhibit strong magnetic resonances in the visible spectral range. The basic mechanism of excitation of such modes inside the nanoparticles is very similar to that of split-ring resonators, but with one important difference: silicon nanoparticles have much smaller losses and are able to shift the magnetic resonance wavelength down to visible frequencies. We experimentally demonstrate for the first time that these nanoparticles have a strong magnetic dipole resonance, which can be continuously tuned throughout the whole visible spectrum by varying the particle size and visually observed by means of dark-field optical microscopy. These optical systems open up new perspectives for the fabrication of low-loss optical metamaterials and nanophotonic devices.
Active and passive contributions to spatial learning.
It seems intuitively obvious that active exploration of a new environment will lead to better spatial learning than will passive exposure. However, the literature on this issue is decidedly mixed, in part because the concept itself is not well defined. We identify five potential components of active spatial learning and review the evidence regarding their role in the acquisition of landmark, route, and survey knowledge. We find that (1) idiothetic information in walking contributes to metric survey knowledge, (2) there is little evidence as yet that decision making during exploration contributes to route or survey knowledge, (3) attention to place-action associations and relevant spatial relations contributes to route and survey knowledge, although landmarks and boundaries appear to be learned without effort, (4) route and survey information are differentially encoded in subunits of working memory, and (5) there is preliminary evidence that mental manipulation of such properties facilitates spatial learning. Idiothetic information appears to be necessary to reveal the influence of attention and, possibly, decision making in survey learning, which may explain the mixed results in desktop virtual reality. Thus, there is indeed an active advantage in spatial learning, which manifests itself in the task-dependent acquisition of route and survey knowledge.
HIV/AIDS prevention in New York City: identifying sociocultural needs of the community.
New York City has always been and remains at the epicenter of the country's AIDS epidemic, with more than 100,000 people living with HIV/AIDS, more than Los Angeles, San Francisco, and Miami combined (CDC, 2007b). Each year there may be as many as 4,800 people in New York City who are newly diagnosed with HIV and 1,700 who die from the disease (NYC Commission on HIV/AIDS, 2005; NYC AIDS Institute, 2006g). Recent research indicates that these HIV infection rates are actually significantly higher (perhaps by as much as 40%), with the "virus spreading in NY at three times the national rate", making it evident that HIV education and prevention efforts are not effectively reaching New Yorkers (Altman, 2008, p. 1). This article reports on the findings of a quantitative study (n = 98) that sought to identify the unique sociocultural needs of NYC residents who seek HIV/AIDS care. Key questions were aimed at who gets HIV tested and why, what HIV education services were reported as most effective, and identifying the unique sociocultural obstacles to getting HIV tested. Some of the statistically significant findings include: (1) the most helpful HIV education was found to be support groups, and the second most helpful was reading material offered in community-based settings; (2) most residents choose to get tested under the direct advice of a physician; (3) Latinos tend to hold more HIV/AIDS stigma than their African-American counterparts. Culturally competent implications are provided for policy, program development and direct care for those providing HIV education services in urban communities.
Type-2 fuzzified flappy bird control system
In this study, we present a novel application of Type-2 (T2) fuzzy control to the popular video game flappy bird. To the best of our knowledge, our work is the first deployment of T2 fuzzy control in the computer games research area. We propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into a reference tracking control problem. The presented T2 fuzzy control structure is composed of two important blocks: the reference generator and the Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC, which can be easily tuned while also providing a certain degree of robustness to the system. We investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results as well as experimental results obtained in the game environment. It is shown that the proposed T2 fuzzified flappy bird control system achieves satisfactory performance both in the framework of fuzzy control and in computer games. We believe that this first employment of T2-FLCs in games will be an important step towards a wider deployment of T2-FLCs in the research area of computer games.
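Purely as an illustration of recasting obstacle avoidance as reference tracking (the paper's actual reference generator and SIT2-FLC rules are not reproduced here), the sketch below generates a reference from the next pipe gap and acts on the tracking error.

```python
# Illustration only: the reference is the centre of the next gap, and a placeholder
# controller acts on the tracking error e = reference - bird_altitude.
def reference_generator(pipes, bird_x):
    """pipes: list of (x, gap_low, gap_high), y measured as height above ground."""
    ahead = [p for p in pipes if p[0] >= bird_x]
    x, gap_low, gap_high = min(ahead, key=lambda p: p[0])
    return (gap_low + gap_high) / 2.0

def simple_controller(error, deadband=5.0):
    """Stand-in for the SIT2-FLC: flap when the bird is well below the reference."""
    return error > deadband

bird_x, bird_y = 60.0, 240.0
pipes = [(40.0, 150.0, 260.0), (180.0, 200.0, 310.0)]
ref = reference_generator(pipes, bird_x)   # gap centre of the pipe at x=180 -> 255.0
flap = simple_controller(ref - bird_y)     # error = 15 -> flap
```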
More targets, more pathways and more clues for mutant p53
Mutations in the transcription factor p53 are among the most common genetic alterations in human cancer, and missense p53 mutations in cancer cells can lead to aggressive phenotypes. So far, only a few studies have investigated transcriptional reprogramming under mutant p53 expression as a means to identify deregulated targets and pathways. A review of the literature was carried out focusing on mutant p53-dependent transcriptome changes, with the aims of (i) verifying whether different p53 mutations are equivalent in their effects, or whether there is a mutation-specific transcriptional reprogramming of target genes, (ii) understanding the main mechanism underlying upregulation or downregulation of gene expression in the mutant p53 background, (iii) identifying novel candidate target genes of WT and/or mutant p53 and (iv) defining cellular pathways affected by the mutant p53-dependent gene expression reprogramming. Nearly 600 genes were consistently found upregulated or downregulated upon ectopic expression of mutant p53, regardless of the specific p53 mutation studied. Promoter analysis and the use of ChIP-seq data indicate that, for most genes, the expression changes could be ascribed to a loss of both WT p53 transcriptional activation and repressor functions. Pathway analysis indicated changes in the metabolism/catabolism of amino acids such as aspartate, glutamate, arginine and proline. Novel p53 candidate target genes were also identified, including ARID3B, ARNT2, CLMN, FADS1, FTH1, KPNA2, LPHN2, PARD6B, PDE4C, PIAS2, PRPF40A, PYGL and RHOBTB2, involved in metabolism, xenobiotic responses and cell differentiation.
Active feedback control as a solution to severe slugging
Severe slugging in multiphase pipelines can cause serious and troublesome operational problems for downstream receiving production facilities. Recent results demonstrating the feasibility and the potential of applying dynamic feedback control to unstable multiphase flow like severe slugging and casing heading have been published (Refs. 4,9,1,5 and 2). This paper summarizes our findings on terrain-induced slug flow (Ref. 2). Results from field tests as well as results from dynamic multiphase flow simulations are presented. The simulations were performed with the pipeline code OLGA2000. The controllers applied to all of these cases aim to stabilize the flow conditions by applying feedback control rather than coping with the slug flow in the downstream processing unit. The results from simulations with feedback control show in all cases stable process conditions both at the pipeline inlet and outlet, whereas without control severe slug flow is experienced. Pipeline profile plots of liquid volume fraction through a typical slug flow cycle are compared against corresponding plots with feedback control applied. The comparison is used to justify internal stability of the pipeline. Feedback control enables in many cases a reduced pipeline inlet pressure, which in turn means an increased production rate. The paper summarizes the experience gained with active feedback control applied to severe slugging. Focus will be on extracting similarities and differences between the cases. The main contribution is to demonstrate that dynamic feedback control can be a solution to the severe slugging problem. Introduction: Multiphase pipelines connecting remote wellhead platforms and subsea wells are already common in offshore oil production, and even more of them will be laid in the years to come. In addition, the proven feasibility of using long-distance tie-back pipelines to connect subsea processing units directly to on-shore processing plants makes it likely that these will appear also in the future. Such developments are turning the spotlight on one of the biggest challenges for control and operation of offshore processing facilities and subsea separation units: controlling the feed disturbance to the separation process. That is, smoothing or avoiding flow variations at the outlet of the multiphase pipelines connecting wells and remote installations to the processing unit. Common forms of flow variations are slug flow in multiphase pipelines and casing heading in gas-lifted oil wells. In both cases the liquid flows intermittently along the pipe in a concentrated mass, called a slug. The unstable behaviour of slug flow and casing heading has a negative impact on the operation of offshore production facilities. Severe slugging can even cause platform trips and plant shutdown. More frequently, the large and rapid flow variation causes unwanted flaring and limits the operating capacity in the separation and the compression units. This reduction is due to the need for larger operating margins for both separation (to meet the product specifications) and compression (to ensure safe operation with minimum flaring). Backing off the plant's optimal operating point in this way reduces its throughput. A lot of effort and money has been spent trying to avoid the operational problems with severe slugging and reduce the effects of the slugs.
Roughly speaking, there are three main categories of principles for avoiding or reducing the effects of slugs: 1. design changes; 2. operational changes and procedures; 3. control methods (A. feed-forward control, B. slug choking, C. active feedback control). An example of a typical slug handling technique involving design changes is to install slug catchers (on-shore) or increase the size of the first stage separator(s) to provide the necessary buffer capacity. A different compact process design change is reported in Ref. 10, where the authors introduce an additional small pressurized closed vessel upstream of the first stage separator in order to cope with slug flow. An example of operational change is to increase the flow-line pressure so that operation of the pipeline/well is outside the slug flow regime (Refs. 13,6). For older wells with reduced lifting capacity this is not a viable option. For gas-lifted wells an option would be to increase the gas injection rate (see Ref. 1). For already existing installations where problems with slug flow are present, and for compact separation units, these design and operational changes may not be appropriate. Control methods for slug handling are characterized by the use of process and/or pipeline information to adjust available degrees of freedom (pipeline chokes, pressure and levels) to reduce or eliminate the effect of slugs in the downstream separation and compression units. The idea of feed-forward control is to detect the build-up of slugs and, accordingly, prepare the separators to receive them, e.g. via feed-forward control to the separator level and pressure control loops. The aim of slug choking is to avoid overloading the process facilities with liquid or gas. These methods make use of a topside pipeline choke by reducing its opening in the presence of a slug, thereby protecting the downstream equipment. The slug choking may utilize measurements in the separation unit and/or the output from a slug detection device/algorithm. For a more complete assessment of current technology for slug handling refer to Ref. 9. However, in this assessment, active control methods are not properly addressed. Recently, results demonstrating the feasibility and the potential of applying dynamic feedback control to unstable multiphase flow like severe slugging and casing heading have been published (Refs. 4,9,1,5 and 2). Like slug choking, active feedback control makes use of a topside choke. However, with dynamic feedback control, the approach is to solve the slug problem by stabilizing the multiphase flow. Despite the promising results first reported in 1990 (Ref. 4), the use of active slug control on multiphase flow has been limited. To our knowledge only two installations in operation have stabilizing controllers installed. These are the Dunbar-Alwyn pipeline (Refs. 5,9) and the Hod-Valhall pipeline (Ref. 2). One reason for this might be that control engineering and fluid flow dynamics are usually separate technical fields, i.e. the control engineers have limited knowledge about multiphase flow and the experts in fluid flow dynamics have limited insight into what can be achieved with feedback control. Indeed, when presenting the results on the Hod–Valhall pipeline (Ref.
2), we had a hard time convincing several of the fluid flow dynamics engineers that one can avoid the slug formation in severe slugging by active control. Hence, one objective of this paper is to provide insight and understanding into how feedback control can be used to avoid severe slugging, and thereby contribute to bridging the gap between control and petroleum engineering. Previous work: Elimination of terrain- and riser-induced slug flow by choking was first suggested by Schmidt et al. (Ref. 13). Taitel (Ref. 6) states that stable flow can be achieved by using a choke to control the pipeline backpressure. He states that an unstable system can still operate around the equilibrium steady state provided a feedback control system is used to stabilize the system. Furthermore, he refers to Schmidt (1980, Ref. 14), who found experimentally that it is possible to stabilize the flow by choking at the top of the riser upstream of the separator. Taitel used stability analysis to define a theoretical control law. The control law relates the backpressure to the propagation of the slug tail into the riser. In Ref. 6 Taitel claims [sic] "It is interesting to observe that, to a good approximation, little movement of the choking valve is needed for such a control system. This makes it possible to set the valve in a precalculated constant value". In the experiments reported in the paper, no feedback control system is used. Instead, the choke is fixed in a pre-calculated position. Note that the derived stability condition is related to quasi-equilibrium flow conditions with bubble flow in the riser and no or limited propagation of the slug tail into the riser. From control theory it is well known that feedback control is needed to operate at an unstable operating point; otherwise disturbances will push the operation out of the desired operating point. Our conclusion is that the quasi-equilibrium flow conditions comprise a stable operating point with an unnecessarily high riser base pressure, which must be higher than the corresponding pressure which can be achieved by applying stabilizing feedback control. Furthermore, we believe that the riser base pressure at quasi-equilibrium flow conditions is equal to, or larger than, the peak riser base pressure with slug flow. Typical flow maps showing the slug flow region's dependency on the pressure justify these statements by the fact that the slug flow region shrinks with increasing pressure and that the bubble flow region lies above the slug flow region (see Fig. 1 and Fig. 2). In Ref. 4, experiments on suppression of terrain-induced slugging by means of a remotely controlled control valve, installed at the riser top, are presented. Manual valve closure of about 80% was necessary to remove the terrain-induced slugging, with a pressure difference of about 7 bar over the valve. In automatic mode the valve was controlled by a PI algorithm with the pressure over the riser as the input signal. Terrain-induced slugging was successfully alleviated with the PI control algorithm ope
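The field results above rely on a PI algorithm acting on a riser pressure measurement through the topside choke. A minimal discrete PI loop of that form is sketched below; the gains, setpoint, nominal opening and pressure values are illustrative assumptions, and the appropriate signs and tuning depend on the actual pipeline.

```python
# Minimal discrete PI loop: riser(-base) pressure is fed back to the choke opening.
def pi_step(setpoint, measurement, integral, kp=0.05, ki=0.005, dt=1.0,
            u_nominal=0.2, u_min=0.0, u_max=1.0):
    """One PI update: returns (choke opening in [0, 1], updated integral term)."""
    error = setpoint - measurement            # e.g. riser-base pressure error in bar
    integral += error * dt
    u = u_nominal + kp * error + ki * integral
    return min(max(u, u_min), u_max), integral

choke, integral = 0.2, 0.0
for pressure in [68.0, 66.5, 65.2, 64.4, 64.1]:   # fictitious riser-base pressures, bar
    choke, integral = pi_step(setpoint=64.0, measurement=pressure, integral=integral)
```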
An enhanced WiFi indoor localization system based on machine learning
The Global Navigation Satellite Systems (GNSS) suffer from accuracy deterioration and outages in dense urban canyons and are almost unavailable for indoor environments. Nowadays, developing indoor positioning systems has become an attractive research topic due to the increasing demand for ubiquitous positioning. WiFi technology has been studied for many years to provide indoor positioning services. WiFi indoor localization systems based on a machine learning approach are widely used in the literature. These systems attempt to find the best match between the user's fingerprint and a pre-defined set of grid points on the radio map. However, fingerprints can be duplicated across grid points owing to the available Access Points (APs) and interference, which increases the number of patterns matched to the user's fingerprint. In this research, Principal Component Analysis (PCA) is utilized to improve the performance and to reduce the computation cost of WiFi indoor localization systems based on a machine learning approach. All proposed methods were developed and physically realized on an Android-based smartphone using IEEE 802.11 WLANs. The experimental setup was conducted in a real indoor environment in both static and dynamic modes. The performance of the proposed method was tested using K-Nearest Neighbors, Decision Tree, Random Forest and Support Vector Machine classifiers. The results show that the proposed method outperforms other indoor localization systems reported in the literature. The computation time was reduced by 70% when using the Random Forest classifier in the static mode and by 33% when using KNN in the dynamic mode.
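A compact sketch of the fingerprinting pipeline described above, using scikit-learn: PCA reduces the RSS fingerprints before a KNN classifier picks the radio-map grid point. The array shapes, component count and data are assumptions, not the paper's settings.

```python
# PCA for dimensionality reduction followed by KNN fingerprint matching.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(-70, 8, size=(400, 50))   # RSS (dBm) from 50 APs at survey points
y_train = rng.integers(0, 20, size=400)        # grid-point labels on the radio map

model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=3))
model.fit(X_train, y_train)

fingerprint = rng.normal(-70, 8, size=(1, 50)) # a new user fingerprint
grid_point = model.predict(fingerprint)[0]
```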
A Software Defined Network-Based Security Assessment Framework for CloudIoT
The integration of cloud and the Internet of Things (IoT), named CloudIoT, has been considered an enabler for many different applications. However, security concerns are one main reason why some organizations hesitate to adopt such technologies, while others simply ignore the security issue when integrating CloudIoT into their business. Therefore, given the numerous choices of cloud-resource providers and IoT devices, how to evaluate their security level becomes an important issue for promoting the adoption of CloudIoT as well as reducing the business security risks. To solve this problem, considering the importance of business data in CloudIoT, we develop an end-to-end security assessment framework based on software defined network (SDN) technology to evaluate the security level of a given CloudIoT offering. Specifically, in order to simplify the network controls and focus on the analysis of the data flow through CloudIoT, we develop a three-layer framework by integrating SDN and CloudIoT, which consists of 23 different indicators to describe its security features. Then, interviews with industry and academia were carried out to understand the importance of these features for overall security. Furthermore, given the relevant evidence from the CloudIoT offerings Google Brillo and Microsoft Azure IoT Suite, our framework can effectively evaluate their security level, which can help consumers with their CloudIoT selection.
Factors associated with the concentration of immunoglobulin G in the colostrum of dairy cows.
Transfer of sufficient immunoglobulin G (IgG) to the neonatal calf via colostrum is vital to provide the calf with immunological protection and resistance against disease. The objective of the present study was to determine the factors associated with both colostral IgG concentration and colostral weight in Irish dairy cows. Fresh colostrum samples were collected from 704 dairy cows of varying breed and parity from four Irish research farms between January and December 2011; colostral weight was recorded and the IgG concentration was determined using an ELISA method. The mean IgG concentration in the colostrum was 112 g/l (s.d. = 51 g/l) and ranged from 13 to 256 g/l. In total, 96% of the samples in this study contained >50 g/l IgG, which is considered to be indicative of high-quality colostrum. Mean colostral weight was 6.7 kg (s.d. = 3.6 kg) with a range of 0.1 to 24 kg. Factors associated with both colostral IgG concentration and colostral weight were determined using a fixed effects multiple regression model. Parity, time interval from calving to next milking, month of calving, colostral weight and herd were all independently associated with IgG concentration. IgG concentration decreased (P < 0.01) by 1.7 (s.e. = 0.6) g/l per kg increase in the colostral weight. Older parity cows, cows that had a shorter time interval from calving to milking, and cows that calved earlier in spring or in the autumn produced colostrum with higher IgG concentration. Parity (P < 0.001), time interval from calving to milking (P < 0.01), weight of the calf at birth (P < 0.05), colostral IgG concentration (P < 0.01) and herd were all independently associated with colostral weight at the first milking. Younger parity cows, cows milked earlier post-calving, and cows with lighter calves produced less colostrum. In general, colostrum quality of cows in this study was higher than in many previous studies; possible reasons include use of a relatively low-yielding cow type that produces low weight of colostrum, short calving to colostrum collection interval and grass-based nutritional management. The results of this study indicate that colostral IgG concentration can be maximised by reducing the time interval between calving and collection of colostrum.
Vision-based approach towards lane line detection and vehicle localization
Localization of the vehicle with respect to road lanes plays a critical role in the advance towards making the vehicle fully autonomous. Vision-based road lane line detection provides a feasible and low-cost solution, as the vehicle pose can be derived from the detection. While good progress has been made, the road lane line detection problem has remained an open one, given challenging road appearances with shadows, varying lighting conditions, worn-out lane lines, etc. In this paper, we propose a more robust vision-based approach with respect to these challenges. The approach incorporates four key steps. Lane line pixels are first pooled with a ridge detector. An effective noise filtering mechanism then removes noise pixels to a large extent. A modified version of sequential RANSAC (RANdom SAmple Consensus) is then adopted in a model fitting procedure to ensure each lane line in the image is captured correctly. Finally, if lane lines on both sides of the road exist, a parallelism reinforcement technique is imposed to improve the model accuracy. The results obtained show that the proposed approach is able to detect the lane lines accurately and at a high success rate compared to current approaches. The model derived from the lane line detection is capable of generating precise and consistent vehicle localization information with respect to road lane lines, including road geometry, vehicle position and orientation.
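To make the model-fitting step concrete, below is a hedged sketch of one sequential-RANSAC iteration for lane pixels: fit a line, keep its inliers, and pass the remaining pixels to the next fit. The thresholds, iteration count and synthetic data are placeholders, and the paper's modified variant is not reproduced.

```python
# One sequential-RANSAC step: fit a line to candidate lane pixels, then remove
# its inliers so the next line can be fitted to the remainder.
import numpy as np

def ransac_line(points, n_iter=200, thresh=2.0, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.hypot(*d)
        if norm < 1e-9:
            continue
        # perpendicular distance of every point to the candidate line
        dist = np.abs(np.cross(d, points - p1)) / norm
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers], points[~best_inliers]   # this line's pixels, the rest

pixels = np.array([[x, 0.5 * x + 3 + e] for x, e in zip(range(100), np.random.randn(100))])
line_pts, remaining = ransac_line(pixels)
```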
Design Principles of the Hippocampal Cognitive Map
Hippocampal place fields have been shown to reflect behaviorally relevant aspects of space. For instance, place fields tend to be skewed along commonly traveled directions, they cluster around rewarded locations, and they are constrained by the geometric structure of the environment. We hypothesize a set of design principles for the hippocampal cognitive map that explain how place fields represent space in a way that facilitates navigation and reinforcement learning. In particular, we suggest that place fields encode not just information about the current location, but also predictions about future locations under the current transition distribution. Under this model, a variety of place field phenomena arise naturally from the structure of rewards, barriers, and directional biases as reflected in the transition policy. Furthermore, we demonstrate that this representation of space can support efficient reinforcement learning. We also propose that grid cells compute the eigendecomposition of place fields, in part because it is useful for segmenting an enclosure along natural boundaries. When applied recursively, this segmentation can be used to discover a hierarchical decomposition of space. Thus, grid cells might be involved in computing subgoals for hierarchical reinforcement learning.
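One concrete way to instantiate this hypothesis (a sketch under our own simplifying assumptions, not the paper's exact construction) is a discounted expected-future-occupancy matrix over a small grid, whose leading eigenvectors vary smoothly over space and can serve as a basis for segmentation.

```python
# Predictive "place field" matrix for a random walk on a 5x5 open grid, plus its
# eigendecomposition as grid-cell-like basis functions.
import numpy as np

n_states, gamma = 25, 0.9
T = np.zeros((n_states, n_states))                # row-stochastic transition matrix
for s in range(n_states):
    r, c = divmod(s, 5)
    nbrs = [(r + dr, c + dc) for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
            if 0 <= r + dr < 5 and 0 <= c + dc < 5]
    for rr, cc in nbrs:
        T[s, rr * 5 + cc] = 1.0 / len(nbrs)

M = np.linalg.inv(np.eye(n_states) - gamma * T)   # discounted expected future occupancy
evals, evecs = np.linalg.eig(M)
order = np.argsort(-evals.real)                   # leading eigenvectors vary smoothly
principal = evecs[:, order[:4]].real              # over space, usable for segmentation
```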
A Noise Reduction and Linearity Improvement Technique for a Differential Cascode LNA
A typical common source cascode low-noise amplifier (CS-LNA) can be treated as a CS-CG two-stage amplifier. In the published literature, an inductor is added at the drain of the main transistor to reduce the noise contribution of the cascode transistors. In this work, an inductor connected at the gate of the cascode transistor and capacitive cross-coupling are strategically combined to reduce the noise and nonlinearity influences of the cascode transistors in a differential cascode CS-LNA. It uses a smaller noise reduction inductor compared with the conventional inductor-based technique. It can reduce the noise, improve the linearity and also increase the voltage gain of the LNA. The proposed technique is theoretically formulated. Furthermore, as a proof of concept, a 2.2 GHz inductively degenerated CS-LNA was fabricated using TSMC 0.35 μm CMOS technology. The resulting LNA achieves a 1.92 dB noise figure, 8.4 dB power gain, better than 13 dB S11, more than 30 dB isolation (S12), and -2.55 dBm IIP3, with the core fully differential LNA consuming 9 mA from a 1.8 V power supply.
Oxidative stress in ALS: key role in motor neuron injury and therapeutic target.
Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disorder characterized by death of motor neurons leading to muscle wasting, paralysis, and death, usually within 2-3 years of symptom onset. The causes of ALS are not completely understood, and the neurodegenerative processes involved in disease progression are diverse and complex. There is substantial evidence implicating oxidative stress as a central mechanism by which motor neuron death occurs, including elevated markers of oxidative damage in ALS patient spinal cord and cerebrospinal fluid and mutations in the antioxidant enzyme superoxide dismutase 1 (SOD1) causing approximately 20% of familial ALS cases. However, the precise mechanism(s) by which mutant SOD1 leads to motor neuron degeneration has not been defined with certainty, and the ultimate trigger for increased oxidative stress in non-SOD1 cases remains unclear. Although some antioxidants have shown potential beneficial effects in animal models, human clinical trials of antioxidant therapies have so far been disappointing. Here, the evidence implicating oxidative stress in ALS pathogenesis is reviewed, along with how oxidative damage triggers or exacerbates other neurodegenerative processes, and we review the trials of a variety of antioxidants as potential therapies for ALS.
Guided data repair
In this paper we present GDR, a Guided Data Repair framework that incorporates user feedback in the cleaning process to enhance and accelerate existing automatic repair techniques while minimizing user involvement. GDR consults the user on the updates that are most likely to be beneficial in improving data quality. GDR also uses machine learning methods to identify and apply the correct updates directly to the database without the actual involvement of the user on these specific updates. To rank potential updates for consultation by the user, we first group these repairs and quantify the utility of each group using the decision-theoretic concept of the value of information (VOI). We then apply active learning to order updates within a group based on their ability to improve the learned model. User feedback is used to repair the database and to adaptively refine the training set for the model. We empirically evaluate GDR on a real-world dataset and show significant improvement in data quality using our user-guided repairing process. We also assess the trade-off between the user effort and the resulting data quality.
A Guided Search Genetic Algorithm for the University Course Timetabling Problem
The university course timetabling problem is a combinatorial optimisation problem in which a set of events has to be scheduled in time slots and located in suitable rooms. The design of course timetables for academic institutions is a very difficult task because it is an NP-hard problem. This paper proposes a genetic algorithm with a guided search strategy and a local search technique for the university course timetabling problem. The guided search strategy is used to create offspring for the population based on a data structure that stores information extracted from previous good individuals. The local search technique is used to improve the quality of individuals. The proposed genetic algorithm is tested on a set of benchmark problems in comparison with a set of state-of-the-art methods from the literature. The experimental results show that the proposed genetic algorithm is able to produce promising results for the university course timetabling problem.
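A schematic of the guided-search idea, with operator details that are our assumptions rather than the paper's exact definitions: assignments harvested from good timetables seed part of each offspring, and the remainder is inherited from a parent before local search.

```python
# Guided offspring creation from a memory of good event assignments.
import random

def update_memory(memory, good_individual):
    """good_individual: dict event -> (timeslot, room)."""
    for event, slot_room in good_individual.items():
        memory.setdefault(event, []).append(slot_room)

def guided_offspring(parent, memory, p_guided=0.5):
    child = {}
    for event, slot_room in parent.items():
        if event in memory and random.random() < p_guided:
            child[event] = random.choice(memory[event])   # reuse a stored good assignment
        else:
            child[event] = slot_room                      # inherit from the parent
    return child   # would then be improved by the local search step

memory = {}
update_memory(memory, {"E1": (3, "R1"), "E2": (7, "R2")})
child = guided_offspring({"E1": (5, "R3"), "E2": (7, "R2"), "E3": (1, "R1")}, memory)
```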
Estimating the Impact of Plasma HIV-1 RNA Reductions on Heterosexual HIV-1 Transmission Risk
BACKGROUND The risk of sexual transmission of HIV-1 is strongly associated with the level of HIV-1 RNA in plasma, making reduction in HIV-1 plasma levels an important target for HIV-1 prevention interventions. A quantitative understanding of the relationship of plasma HIV-1 RNA and HIV-1 transmission risk could help predict the impact of candidate HIV-1 prevention interventions that operate by reducing plasma HIV-1 levels, such as antiretroviral therapy (ART), therapeutic vaccines, and other non-ART interventions. METHODOLOGY/PRINCIPAL FINDINGS We use prospective data collected from 2004 to 2008 in East and Southern African HIV-1 serodiscordant couples to model the relationship of plasma HIV-1 RNA levels and heterosexual transmission risk, with confirmation of HIV-1 transmission events by HIV-1 sequencing. The model is based on follow-up of 3381 HIV-1 serodiscordant couples over 5017 person-years encompassing 108 genetically-linked HIV-1 transmission events. HIV-1 transmission risk was 2.27 per 100 person-years with a log-linear relationship to log(10) plasma HIV-1 RNA. The model predicts that a decrease in average plasma HIV-1 RNA of 0.74 log(10) copies/mL (95% CI 0.60 to 0.97) reduces heterosexual transmission risk by 50%, regardless of the average starting plasma HIV-1 level in the population and independent of other HIV-1-related population characteristics. In a simulated population with a similar plasma HIV-1 RNA distribution, the model estimates that 90% of overall HIV-1 infections averted by a 0.74 log(10) copies/mL reduction in plasma HIV-1 RNA could be achieved by targeting this reduction to the 58% of the cohort with plasma HIV-1 levels ≥4 log(10) copies/mL. CONCLUSIONS/SIGNIFICANCE This log-linear model of plasma HIV-1 levels and risk of sexual HIV-1 transmission may help estimate the impact on HIV-1 transmission and infections averted from candidate interventions that reduce plasma HIV-1 RNA levels.
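The reported log-linear relationship supports a quick back-of-envelope calculation: if a 0.74 log(10) copies/mL drop halves risk, the implied relative risk per log(10) copies/mL is 2^(1/0.74), roughly 2.5.

```python
# Back-of-envelope use of the log-linear relationship reported above.
rr_per_log10 = 2 ** (1 / 0.74)          # ~2.55-fold risk change per log10 copies/mL

def relative_risk(delta_log10):
    """Risk multiplier for a change of delta_log10 in plasma HIV-1 RNA."""
    return rr_per_log10 ** delta_log10

print(relative_risk(-0.74))   # ~0.50  (the 50% reduction quoted in the abstract)
print(relative_risk(-1.0))    # ~0.39  (a full log10 reduction)
```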
Foot cooling reduces exercise-induced hyperthermia in men with spinal cord injury.
UNLABELLED The number of individuals with spinal cord injury (SCI) participating in sports at recreational and elite levels is on the rise. However, loss of autonomic nervous system function below the lesion can compromise thermoregulatory capacity and increase the risk of heat stress relative to able-bodied (AB) individuals. PURPOSE To test the hypotheses that exercise in a heated environment would increase tympanic temperature (TTY) more in individuals with SCI than AB individuals, and that foot cooling using a new device would attenuate the rise in TTY during exercise in both groups. METHODS Six subjects with SCI (lesions C5-T5) and six AB controls were tested in a heated environment (means +/- SEM, temperature = 31.8 +/- 0.2 degrees C, humidity = 26 +/- 1%) for 45 min at 66% +/- 5 of arm cranking VO2peak and 30 min of recovery on two separate occasions with foot cooling (FC) or no foot cooling (NC) in randomized order. RESULTS During exercise and recovery in both trials, SCI TTY was elevated above baseline (P < 0.001) but more so in the NC versus FC trial (1.6 +/- 0.2 degrees C vs 1.0 +/- 0.2 degrees C, respectively, P < 0.005). Within the AB group, TTY was elevated above baseline for both trials (P < 0.001) with peak increases of 0.5 +/- 0.2 degrees C and 0.3 +/- 0.2 degrees C for NC and FC, respectively. TTY, face, and back temperature were higher in both SCI trials compared with AB trials (P < 0.05). Heart rate during exercise and recovery was lower in the SCI FC versus SCI NC (P < 0.05). CONCLUSION These results suggest that extraction of heat through the foot may provide an effective way to manipulate tympanic temperature in individuals with SCI, especially during exercise in the heat.
Autonomous information fusion for robust obstacle localization on a humanoid robot
Recent developments in sensor technology [1], [2] have resulted in the deployment of mobile robots equipped with multiple sensors, in specific real-world applications [3]–[6]. A robot equipped with multiple sensors, however, obtains information about different regions of the scene, in different formats and with varying levels of uncertainty. In addition, the bits of information obtained from different sensors may contradict or complement each other. One open challenge to the widespread deployment of robots is the ability to fully utilize the information obtained from each sensor, in order to operate robustly in dynamic environments. This paper presents a probabilistic framework to address autonomous multisensor information fusion on a humanoid robot. The robot exploits the known structure of the environment to autonomously model the expected performance of the individual information processing schemes. The learned models are used to effectively merge the available information. As a result, the robot is able to robustly detect and localize mobile obstacles in its environment. The algorithm is fully implemented and tested on a humanoid robot platform (Aldebaran Naos [7]) in the robot soccer scenario.
Millimeter-Wave Devices and Circuit Blocks up to 104 GHz in 90 nm CMOS
A systematic methodology for layout optimization of active devices for millimeter-wave (mm-wave) applications is proposed. A hybrid mm-wave modeling technique was developed to extend the validity of the device compact models up to 100 GHz. These methods resulted in the design of a customized 90 nm device layout which yields an extrapolated fmax of 300 GHz from an intrinsic device. The device is incorporated into a low-power 60 GHz amplifier consuming 10.5 mW, providing 12.2 dB of gain and an output power of 4 dBm. An experimental three-stage 104 GHz tuned amplifier has a measured peak gain of 9.3 dB. Finally, a Colpitts oscillator operating at 104 GHz delivers up to 5 dBm of output power while consuming 6.5 mW.
BJOLP: The Big Joint Optimal Landmarks Planner
BJOLP, the Big Joint Optimal Landmarks Planner, uses landmarks to derive an admissible heuristic, which is then used to guide a search for a cost-optimal plan. In this paper we review landmarks and describe how they can be used to derive an admissible heuristic. We conclude by presenting the BJOLP planner.
On Graph-Based Name Disambiguation
Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.
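The clustering step described above can be sketched with a precomputed similarity matrix fed to Affinity Propagation. The toy publications, and the use of Jaccard overlap of coauthor sets as the similarity measure, are illustrative assumptions standing in for GHOST's own graph-based metric.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy publications by authors sharing an ambiguous name; each record lists the
# *other* coauthors. Jaccard overlap of coauthor sets is a stand-in metric.
pubs = [
    {"A. Lee", "B. Chen"},
    {"A. Lee", "C. Kumar"},
    {"D. Park"},
    {"D. Park", "E. Novak"},
]

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

n = len(pubs)
sim = np.array([[jaccard(pubs[i], pubs[j]) for j in range(n)] for i in range(n)])

# Affinity Propagation on the precomputed similarity matrix, as in GHOST.
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(sim)
print(labels)  # publications sharing coauthors end up in the same cluster
```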
Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses
Individuals with strong self-regulated learning (SRL) skills, characterized by the ability to plan, manage and control their learning process, can learn faster and outperform those with weaker SRL skills. SRL is critical in learning environments that provide low levels of support and guidance, as is commonly the case in Massive Open Online Courses (MOOCs). Learners can be trained to engage in SRL and actively supported with prompts and activities. However, effective implementation of learner support systems in MOOCs requires an understanding of which SRL strategies are most effective and how these strategies manifest in online behavior. Moreover, identifying learner characteristics that are predictive of weaker SRL skills can advance efforts to provide targeted support without obtrusive survey instruments. We investigated SRL in a sample of 4,831 learners across six MOOCs based on individual records of overall course achievement, interactions with course content, and survey responses. We found that goal setting and strategic planning predicted attainment of personal course goals, while help seeking was associated with lower goal attainment. Learners with stronger SRL skills were more likely to revisit previously studied course materials, especially course assessments. Several learner characteristics, including demographics and motivation, predicted learners’ SRL skills. We discuss implications for theory and the development of learning environments that provide
Estimation of Energy Balance Components over a Drip-Irrigated Olive Orchard Using Thermal and Multispectral Cameras Placed on a Helicopter-Based Unmanned Aerial Vehicle (UAV)
A field experiment was carried out to implement a remote sensing energy balance (RSEB) algorithm for estimating the incoming solar radiation (Rsi), net radiation (Rn), sensible heat flux (H), soil heat flux (G) and latent heat flux (LE) over a drip-irrigated olive (cv. Arbequina) orchard located in the Pencahue Valley, Maule Region, Chile (35°25′S; 71°44′W; 90 m above sea level). For this study, a helicopter-based unmanned aerial vehicle (UAV) was equipped with multispectral and infrared thermal cameras to obtain simultaneously the normalized difference vegetation index (NDVI) and surface temperature (Tsurface) at very high resolution (6 cm × 6 cm). Meteorological variables and surface energy balance components were measured at the time of the UAV overpass (near solar noon). The performance of the RSEB algorithm was evaluated using measurements of H and LE obtained from an eddy correlation system. In addition, estimated values of Rsi and Rn were compared with ground-truth measurements from a four-way net radiometer, while those of G were compared with soil heat flux measurements based on flux plates. Results indicated that the RSEB algorithm estimated LE and H with errors of 7% and 5%, respectively. Values of the root mean squared error (RMSE) and mean absolute error (MAE) for LE were 50 and 43 W m⁻², while those for H were 56 and 46 W m⁻², respectively. Finally, the RSEB algorithm computed Rsi, Rn and G with errors of less than 5% and with values of RMSE and MAE less than 38 W m⁻². Results demonstrated that multispectral and thermal cameras placed on a UAV could provide an excellent tool to evaluate the intra-orchard spatial variability of Rn, G, H, LE, NDVI and Tsurface over the tree canopy and soil surface between rows.
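The core energy balance closure underlying RSEB-type algorithms can be illustrated as computing the latent heat flux as the residual LE = Rn - G - H and scoring it with RMSE and MAE. All flux values in the sketch below are invented for illustration, not data from the study.

```python
import numpy as np

# Minimal sketch of surface energy balance closure: LE estimated as residual.
# The flux values are made-up numbers, not measurements from this experiment.

Rn = np.array([520.0, 480.0, 610.0])   # net radiation, W m^-2
G  = np.array([ 60.0,  55.0,  70.0])   # soil heat flux, W m^-2
H  = np.array([180.0, 160.0, 220.0])   # sensible heat flux, W m^-2

LE_est = Rn - G - H                    # residual latent heat flux, W m^-2

# Comparison against (hypothetical) eddy-covariance measurements.
LE_obs = np.array([270.0, 275.0, 330.0])
rmse = np.sqrt(np.mean((LE_est - LE_obs) ** 2))
mae = np.mean(np.abs(LE_est - LE_obs))
print(LE_est, f"RMSE={rmse:.1f} W m^-2, MAE={mae:.1f} W m^-2")
```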
Impact of diabetes mellitus on clinical parameters and treatment outcomes of newly diagnosed pulmonary tuberculosis patients in Thailand
BACKGROUND To assess the clinical and laboratory parameters, response to therapy and development of antituberculosis (TB) drug resistance in pulmonary TB (PTB) patients with diabetes mellitus (DM) and without DM. METHODS Using a prospective design, 227 of 310 new cases of culture-positive PTB diagnosed at the Queen Savang Vadhana Memorial Hospital and the Chonburi Hospital between April 2010 and July 2012 that met the study criteria were selected. Data regarding clinical and laboratory parameters, drug susceptibility and treatment outcomes were compared between PTB patients with DM and those without DM. To control for age, the patients were stratified into two age groups (< 50 and ≥ 50 years) and their data were analysed. RESULTS Of the 227 patients, 37 (16.3%) had DM, of which 26 (70.3%) had been diagnosed with DM prior to PTB diagnosis and 11 (29.7%) had developed DM at PTB diagnosis. After controlling for age, no significant differences were found between the two groups regarding mycobacterium burden, sputum-culture conversion rate, evidence of multidrug-resistant tuberculosis, frequency of adverse drug events from anti-TB medications, treatment outcomes and relapse rate. The presenting symptoms of anorexia (p = 0.050) and haemoptysis (p = 0.036) were observed significantly more frequently in PTB patients with DM, while the presenting symptom of cough was observed significantly more frequently in PTB patients without DM (p = 0.047). CONCLUSIONS Plasma glucose levels should be monitored in all newly diagnosed PTB patients and a similar treatment regimen should be prescribed to PTB patients with DM and those without DM in high TB-burden countries.
Nominalization and Alternations in Biomedical Language
BACKGROUND This paper presents data on alternations in the argument structure of common domain-specific verbs and their associated verbal nominalizations in the PennBioIE corpus. Alternation is the term in theoretical linguistics for variations in the surface syntactic form of verbs, e.g. the different forms of stimulate in FSH stimulates follicular development and follicular development is stimulated by FSH. The data is used to assess the implications of alternations for biomedical text mining systems and to test the fit of the sublanguage model to biomedical texts. METHODOLOGY/PRINCIPAL FINDINGS We examined 1,872 tokens of the ten most common domain-specific verbs or their zero-related nouns in the PennBioIE corpus and labelled them for the presence or absence of three alternations. We then annotated the arguments of 746 tokens of the nominalizations related to these verbs and counted alternations related to the presence or absence of arguments and to the syntactic position of non-absent arguments. We found that alternations are quite common both for verbs and for nominalizations. We also found a previously undescribed alternation involving an adjectival present participle. CONCLUSIONS/SIGNIFICANCE We found that even in this semantically restricted domain, alternations are quite common, and alternations involving nominalizations are exceptionally diverse. Nonetheless, the sublanguage model applies to biomedical language. We also report on a previously undescribed alternation involving an adjectival present participle.
Ingestion of a protein hydrolysate is accompanied by an accelerated in vivo digestion and absorption rate when compared with its intact protein.
BACKGROUND It has been suggested that a protein hydrolysate, as opposed to its intact protein, is more easily digested and absorbed from the gut, which results in greater plasma amino acid availability and a greater muscle protein synthetic response. OBJECTIVE We aimed to compare dietary protein digestion and absorption kinetics and the subsequent muscle protein synthetic response to the ingestion of a single bolus of protein hydrolysate compared with its intact protein in vivo in humans. DESIGN Ten elderly men (mean +/- SEM age: 64 +/- 1 y) were randomly assigned to a crossover experiment that involved 2 treatments in which the subjects consumed a 35-g bolus of specifically produced L-[1-(13)C]phenylalanine-labeled intact casein (CAS) or hydrolyzed casein (CASH). Blood and muscle-tissue samples were collected to assess the appearance rate of dietary protein-derived phenylalanine in the circulation and subsequent muscle protein fractional synthetic rate over a 6-h postprandial period. RESULTS The mean (+/-SEM) exogenous phenylalanine appearance rate was 27 +/- 6% higher after ingestion of CASH than after ingestion of CAS (P < 0.001). Splanchnic extraction was significantly lower in CASH compared with CAS treatment (P < 0.01). Plasma amino acid concentrations increased to a greater extent (25-50%) after the ingestion of CASH than after the ingestion of CAS (P < 0.01). Muscle protein synthesis rates averaged 0.054 +/- 0.004% and 0.068 +/- 0.006%/h in the CAS and CASH treatments, respectively (P = 0.10). CONCLUSIONS Ingestion of a protein hydrolysate, as opposed to its intact protein, accelerates protein digestion and absorption from the gut, augments postprandial amino acid availability, and tends to increase the incorporation rate of dietary amino acids into skeletal muscle protein.
Robust Segmentation for Large Volumes of Laser Scanning Three-Dimensional Point Cloud Data
This paper investigates the problems of outliers and/or noise in surface segmentation and proposes a statistically robust segmentation algorithm for laser scanning 3-D point cloud data. Principal component analysis (PCA)-based local saliency features, e.g., normal and curvature, have been frequently used in many ways for point cloud segmentation. However, PCA is sensitive to outliers; saliency features from PCA are nonrobust and inaccurate in the presence of outliers; consequently, segmentation results can be erroneous and unreliable. As a remedy, robust techniques, e.g., RANdom SAmple Consensus (RANSAC), and/or robust versions of PCA (RPCA) have been proposed. However, RANSAC is influenced by the well-known swamping effect, and RPCA methods are computationally intensive for point cloud processing. We propose a region-growing based robust segmentation algorithm that uses a recently introduced maximum consistency with minimum distance based robust diagnostic PCA (RDPCA) approach to get robust saliency features. Experiments using synthetic and laser scanning data sets show that the RDPCA-based method has an intrinsic ability to deal with outlier- and/or noise-contaminated data. Results for a synthetic data set show that RDPCA is 105 times faster than RPCA and gives more accurate and robust results when compared with other segmentation methods. Compared with RANSAC and RPCA based methods, RDPCA takes almost the same time as RANSAC, but RANSAC results are markedly worse than RPCA and RDPCA results. Coupled with a segment merging algorithm, the proposed method is efficient for huge volumes of point cloud data consisting of complex object surfaces from mobile, terrestrial, and aerial laser scanning systems.
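For context, the classical (non-robust) PCA saliency features that the paper builds on can be sketched as follows: the eigenvector of the local covariance matrix with the smallest eigenvalue approximates the surface normal, and the surface-variation ratio serves as a curvature-like measure. The robust RDPCA variant itself is not reproduced here.

```python
import numpy as np

# Classical PCA saliency for a local point-cloud neighborhood (non-robust).
# RDPCA replaces this with a robust, diagnostic variant; this is only the baseline.

def pca_saliency(neighborhood):
    """neighborhood: (k, 3) array of points around the query point."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # smallest-eigenvalue direction
    curvature = eigvals[0] / eigvals.sum()     # surface variation
    return normal, curvature

# Example: a nearly planar patch plus one outlier that would skew plain PCA.
patch = np.array([[0, 0, 0], [1, 0, 0.01], [0, 1, 0.02],
                  [1, 1, 0.0], [0.5, 0.5, 0.01], [0.5, 0.5, 3.0]])  # last point: outlier
print(pca_saliency(patch))
```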
Realization of three-qubit quantum error correction with superconducting circuits
Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome—a quantum state indicating which error has occurred—by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.
CROP GROWTH AND PRODUCTIVITY MONITORING AND SIMULATION USING REMOTE SENSING AND GIS
Crop growth and productivity are determined by a large number of weather, soil and management variables, which vary significantly across space. Remote Sensing (RS) data, acquired repetitively over agricultural land, help in identification and mapping of crops and also in assessing crop vigour. As RS data and techniques have improved, the initial efforts that directly related RS-derived vegetation indices (VI) to crop yield have been replaced by approaches that involve retrieved biophysical quantities from RS data. Thus, crop simulation models (CSM) that have been successful in field-scale applications are being adapted in a GIS framework to model and monitor crop growth with remote sensing inputs, making assessments sensitive to seasonal weather factors, local variability and crop management signals. RS data can provide information on the crop environment, crop distribution, leaf area index (LAI), and crop phenology. This information is integrated in CSM in a number of ways, such as use as a direct forcing variable, use for re-calibrating specific parameters, or use of simulation-observation differences in a variable to correct yield prediction. A number of case studies that demonstrate such uses of RS data and applications of the CSM-RS linkage are presented.
Neural mechanisms of ageing and cognitive decline
During the past century, treatments for the diseases of youth and middle age have helped raise life expectancy significantly. However, cognitive decline has emerged as one of the greatest health threats of old age, with nearly 50% of adults over the age of 85 afflicted with Alzheimer's disease. Developing therapeutic interventions for such conditions demands a greater understanding of the processes underlying normal and pathological brain ageing. Recent advances in the biology of ageing in model organisms, together with molecular and systems-level studies of the brain, are beginning to shed light on these mechanisms and their potential roles in cognitive decline.
Spectral regression: a unified subspace learning framework for content-based image retrieval
Relevance feedback is a well-established and effective framework for narrowing down the gap between low-level visual features and high-level semantic concepts in content-based image retrieval. In most traditional implementations of relevance feedback, a distance metric or a classifier is learned from the user's provided negative and positive examples. However, due to the limited user feedback and the high dimensionality of the feature space, one is often confronted with the curse of dimensionality. Recently, several researchers have considered manifold learning techniques to address this issue, such as Locality Preserving Projections, Augmented Relation Embedding, and Semantic Subspace Projection. In this paper, by using techniques from spectral graph embedding and regression, we propose a unified framework, called spectral regression, for learning an image subspace. This framework facilitates the analysis of the differences and connections between the algorithms mentioned above. More crucially, it provides much faster computation and therefore makes the retrieval system capable of responding to the user's query more efficiently.
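A minimal sketch of the two-step spectral regression idea, under simplified assumptions about the feedback graph and with an arbitrary ridge parameter, is shown below: a graph embedding is computed from an affinity matrix over the feedback examples, and the projection is then learned by regularized regression of the visual features onto that embedding.

```python
import numpy as np

# Two-step spectral regression sketch: (1) graph embedding from an affinity
# matrix, (2) ridge regression from features to the embedding. The toy graph,
# feature dimensions and ridge parameter are assumptions for illustration.

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))                 # 20 images, 50-dim visual features

# Toy affinity graph: relevance-feedback examples of the same kind are connected.
W = np.zeros((20, 20))
W[:10, :10] = 1.0                             # "relevant" examples fully connected
W[10:, 10:] = 1.0                             # "irrelevant" examples fully connected
np.fill_diagonal(W, 0.0)

# Step 1: graph embedding from the leading eigenvectors of D^-1 W.
D = np.diag(W.sum(axis=1))
eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(D) @ W)
order = np.argsort(-eigvals.real)
Y = eigvecs[:, order[:2]].real                # leading eigenvectors as embedding responses

# Step 2: ridge regression from features to the embedding gives the subspace basis.
lam = 1.0
A = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

X_projected = X @ A                           # images mapped into the learned subspace
print(X_projected.shape)
```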
Automatic Reverse Engineering of Data Structures from Binary Execution
With only the binary executable of a program, it is useful to discover the program’s data structures and infer their syntactic and semantic definitions. Such knowledge is highly valuable in a variety of security and forensic applications. Although there exist efforts in program data structure inference, the existing solutions are not suitable for our targeted application scenarios. In this paper, we propose a reverse engineering technique to automatically reveal program data structures from binaries. Our technique, called REWARDS, is based on dynamic analysis. More specifically, each memory location accessed by the program is tagged with a timestamped type attribute. Following the program’s runtime data flow, this attribute is propagated to other memory locations and registers that share the same type. During the propagation, a variable’s type gets resolved if it is involved in a type-revealing execution point or “type sink”. More importantly, besides the forward type propagation, REWARDS involves a backward type resolution procedure where the types of some previously accessed variables get recursively resolved starting from a type sink. This procedure is constrained by the timestamps of relevant memory locations to disambiguate variables reusing the same memory location. In addition, REWARDS is able to reconstruct in-memory data structure layout based on the type information derived. We demonstrate that REWARDS provides unique benefits to two applications: memory image forensics and binary fuzzing for vulnerability discovery.
Avatar reshaping and automatic rigging using a deformable model
3D scans of human figures have become widely available through online marketplaces and have become relatively easy to acquire using commodity scanning hardware. In addition to static uses of such 3D models, such as 3D printed figurines or rendered 3D still imagery, there are numerous uses for an animated 3D character that uses such 3D scan data. In order to effectively use such models as dynamic 3D characters, the models must be properly rigged before they are animated. In this work, we demonstrate a method to automatically rig a 3D mesh by matching a set of morphable models against the 3D scan. Once the morphable model has been matched against the 3D scan, the skeleton position and skinning attributes are then copied, resulting in a skinning and rigging that is similar in quality to the original hand-rigged model. In addition, the use of a morphable model allows us to reshape and resize the 3D scan according to approximate human proportions. Thus, a human 3D scan can be modified to be taller, shorter, fatter or skinnier. Such manipulations of the 3D scan are useful both for social science research, as well as for visualization for applications such as fitness, body image, plastic surgery and the like.
Still too far to walk: Literature review of the determinants of delivery service use
BACKGROUND Skilled attendance at childbirth is crucial for decreasing maternal and neonatal mortality, yet many women in low- and middle-income countries deliver outside of health facilities, without skilled help. The main conceptual framework in this field implicitly looks at home births with complications. We expand this to include "preventive" facility delivery for uncomplicated childbirth, and review the kinds of determinants studied in the literature, their hypothesized mechanisms of action and the typical findings, as well as methodological difficulties encountered. METHODS We searched PubMed and Ovid databases for reviews and ascertained relevant articles from these and other sources. Twenty determinants identified were grouped under four themes: (1) sociocultural factors, (2) perceived benefit/need of skilled attendance, (3) economic accessibility and (4) physical accessibility. RESULTS There is ample evidence that higher maternal age, education and household wealth and lower parity increase use, as does urban residence. Facility use in the previous delivery and antenatal care use are also highly predictive of health facility use for the index delivery, though this may be due to confounding by service availability and other factors. Obstetric complications also increase use but are rarely studied. Quality of care is judged to be essential in qualitative studies but is not easily measured in surveys, or without linking facility records with women. Distance to health facilities decreases use, but is also difficult to determine. Challenges in comparing results between studies include differences in methods, context-specificity and the substantial overlap between complex variables. CONCLUSION Studies of the determinants of skilled attendance concentrate on sociocultural and economic accessibility variables and neglect variables of perceived benefit/need and physical accessibility. To draw valid conclusions, it is important to consider as many influential factors as possible in any analysis of delivery service use. The increasing availability of georeferenced data provides the opportunity to link health facility data with large-scale household data, enabling researchers to explore the influences of distance and service quality.
Reinforced Feedback in Virtual Environment for Rehabilitation of Upper Extremity Dysfunction after Stroke: Preliminary Data from a Randomized Controlled Trial
OBJECTIVES To study whether the reinforced feedback in virtual environment (RFVE) is more effective than traditional rehabilitation (TR) for the treatment of upper limb motor function after stroke, regardless of stroke etiology (i.e., ischemic, hemorrhagic). DESIGN Randomized controlled trial. Participants. Forty-four patients affected by stroke. Intervention. The patients were randomized into two groups: RFVE (N = 23) and TR (N = 21), and stratified according to stroke etiology. The RFVE treatment consisted of multidirectional exercises providing augmented feedback provided by virtual reality, while in the TR treatment the same exercises were provided without augmented feedbacks. Outcome Measures. Fugl-Meyer upper extremity scale (F-M UE), Functional Independence Measure scale (FIM), and kinematics parameters (speed, time, and peak). RESULTS The F-M UE (P = 0.030), FIM (P = 0.021), time (P = 0.008), and peak (P = 0.018), were significantly higher in the RFVE group after treatment, but not speed (P = 0.140). The patients affected by hemorrhagic stroke significantly improved FIM (P = 0.031), time (P = 0.011), and peak (P = 0.020) after treatment, whereas the patients affected by ischemic stroke improved significantly only speed (P = 0.005) when treated by RFVE. CONCLUSION These results indicated that some poststroke patients may benefit from RFVE program for the recovery of upper limb motor function. This trial is registered with NCT01955291.
CIS-UltraCal: An open-source ultrasound calibration toolkit
We present an open-source MATLAB toolkit for ultrasound calibration. It has a convenient graphical user interface which sits on top of an extensive API. Calibration using three different phantoms is explicitly supported: the cross-wire phantom, the single-wall phantom, and the Hopkins phantom. Image processing of the Hopkins phantom is automated by making use of techniques from binary morphology, radon transform and RANSAC. Numerous calibration and termination parameters are exposed. It is also modular, allowing one to apply the system to original phantoms by writing a minimum of new code.
LaneQuest: An accurate and energy-efficient lane detection system
Current outdoor localization techniques fail to provide the required accuracy for estimating the car's lane. In this paper, we present LaneQuest: a system that leverages the ubiquitous and low-energy inertial sensors available in commodity smart-phones to provide an accurate estimate of the car's current lane. LaneQuest leverages hints from the phone sensors about the surrounding environment to detect the car's lane. For example, a car making a right turn most probably will be in the right-most lane, a car passing by a pothole will be in a specific lane, and the car's angular velocity when driving through a curve reflects its lane. Our investigation shows that there are ample opportunities in the environment, i.e. lane "anchors", that provide cues about the car's lane. To handle location ambiguity, sensor noise, and fuzzy lane anchors, LaneQuest employs a novel probabilistic lane estimation algorithm. Furthermore, it uses an unsupervised crowd-sourcing approach to learn the position and lane-span distribution of the different lane-level anchors. Our evaluation results from an implementation on different Android devices and 260 km of driving traces by 13 drivers in different cities show that LaneQuest can detect the different lane-level anchors with an average precision and recall of more than 90%. This leads to accurate detection of the car's exact lane position 80% of the time, increasing to 89% of the time to within one lane. This comes with a low-energy footprint, allowing LaneQuest to be implemented on energy-constrained mobile devices.
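The probabilistic lane estimation can be pictured as a discrete Bayesian filter over lanes that is updated whenever an anchor is observed. The anchor likelihoods, the lane-change model and the three-lane road in the sketch below are invented for illustration and are not LaneQuest's learned distributions.

```python
import numpy as np

# Discrete Bayesian filter over lanes, updated on observed lane "anchors".
# Transition and likelihood values are illustrative assumptions.

N_LANES = 3
belief = np.full(N_LANES, 1.0 / N_LANES)          # start fully uncertain

# P(next lane | current lane) between observations -- a simple motion model.
transition = np.array([[0.8, 0.2, 0.0],
                       [0.1, 0.8, 0.1],
                       [0.0, 0.2, 0.8]])

# Likelihood of observing each anchor given the true lane (index = lane).
anchor_likelihood = {
    "right_turn": np.array([0.05, 0.15, 0.80]),    # right turns mostly from lane 2
    "pothole_A":  np.array([0.70, 0.20, 0.10]),    # a pothole known to sit in lane 0
}

def update(belief, anchor):
    predicted = transition.T @ belief              # predict across possible lane changes
    posterior = anchor_likelihood[anchor] * predicted
    return posterior / posterior.sum()

for obs in ["right_turn", "right_turn", "pothole_A"]:
    belief = update(belief, obs)
    print(obs, np.round(belief, 2))
```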
Women's emotional adjustment to IVF: a systematic review of 25 years of research.
This review provides an overview of how women adjust emotionally to the various phases of IVF treatment in terms of anxiety, depression or general distress before, during and after different treatment cycles. A systematic scrutiny of the literature yielded 706 articles that paid attention to emotional aspects of IVF treatment of which 27 investigated the women's emotional adjustment with standardized measures in relation to norm or control groups. Most studies involved concurrent comparisons between women in different treatment phases and different types of control groups. The findings indicated that women starting IVF were only slightly different emotionally from the norm groups. Unsuccessful treatment raised the women's levels of negative emotions, which continued after consecutive unsuccessful cycles. In general, most women proved to adjust well to unsuccessful IVF, although a considerable group showed subclinical emotional problems. When IVF resulted in pregnancy, the negative emotions disappeared, indicating that treatment-induced stress is considerably related to threats of failure. The concurrent research reviewed, should now be underpinned by longitudinal studies to provide more information about women's long-term emotional adjustment to unsuccessful IVF and about indicators of risk factors for problematic emotional adjustment after unsuccessful treatment, to foster focused psychological support for women at risk.
Word-Initial Stops in Korean and English Monolinguals and Bilinguals *
Oh, Mira & Daland, Robert. 2011. Word-Initial Stops in Korean and English Monolinguals and Bilinguals. Linguistic Research 28(3), 625-634. The theoretical status of early bilinguals is an area of some controversy. This study investigates subphonemic variation in Korean and English by monolinguals, and bilinguals who were born in the US (simultaneous), moved to the US as young children (early), or moved to the US during late adolescence (late). Speakers were recorded producing word-initial stops in a phrase-medial context. Measurements included stop VOT, H1-H2 and initial pitch on the following vowel. Bilingual productions were generally similar to monolinguals'. However, early and simultaneous bilinguals exhibited a 3-way VOT contrast for Korean stops that was recently neutralized to a 2-way contrast in Korea (Silva 2006). These findings are discussed with respect to transfer/convergence effects and phonetic change in Korea.
A practical evaluation of radio signal strength for ranging-based localization
Radio signal strength (RSS) is notorious for being a noisy signal that is difficult to use for ranging-based localization. In this study, we demonstrate that RSS can be used to localize a multi-hop sensor network, and we quantify the effects of various environmental factors on the resulting localization error. We achieve 4.1 m error in a 49-node network deployed in an area the size of half a football field, demonstrating that RSS localization can be a feasible alternative to solutions like GPS given the right conditions. However, we also show that this result is highly sensitive to subtle environmental factors such as the grass height, radio enclosure, and elevation of the nodes from the ground.
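Ranging-based RSS localization typically starts from a log-distance path-loss model that converts a received power into a distance estimate. The sketch below illustrates that conversion; the reference power P0, reference distance and path-loss exponent are assumptions, not values calibrated from this deployment.

```python
import math

# Log-distance path-loss model: RSS = P0 - 10*eta*log10(d/d0).
# P0, D0 and ETA are assumed values for illustration only.

P0 = -45.0   # RSS (dBm) measured at reference distance d0
D0 = 1.0     # reference distance in metres
ETA = 3.0    # path-loss exponent (2 in free space, higher near grass/ground)

def rss_to_distance(rss_dbm):
    """Invert the path-loss model to get an estimated distance."""
    return D0 * 10 ** ((P0 - rss_dbm) / (10.0 * ETA))

def distance_to_rss(d):
    return P0 - 10.0 * ETA * math.log10(d / D0)

if __name__ == "__main__":
    for d in (1.0, 5.0, 20.0):
        rss = distance_to_rss(d)
        print(f"d={d:5.1f} m -> RSS={rss:6.1f} dBm -> d_hat={rss_to_distance(rss):5.1f} m")
```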
Compact Multi-Band H-Shaped Slot Antenna
A compact triple-band H-shaped slot antenna fed by microstrip coupling is proposed. Four resonant modes are excited, including a monopole mode, a slot mode, and their higher-order modes, to cover GPS (1.575 GHz) and Wi-Fi (2.4-2.485 GHz and 5.15-5.85 GHz), respectively. A sensitivity study of the slot geometry's effect on the resonant modes has been conducted. The measured gains at these four resonant frequencies are 0.2 dBi, 3.5 dBi, 2.37 dBi, and 3.7 dBi, respectively, and the total efficiencies are -2.5 dB, -1.07 dB, -3.06 dB, and -2.7 dB, respectively. The size of this slot antenna is only 0.24λ0 × 0.034λ0, where λ0 is the free-space wavelength at 1.575 GHz, and hence it is suitable for installation in notebook PCs and handheld devices.
A time-split nonhydrostatic atmospheric model for weather research and forecasting applications
The sub-grid-scale parameterization of clouds is one of the weakest aspects of weather and climate modeling today, and the explicit simulation of clouds will be one of the next major achievements in numerical weather prediction. Research cloud models have been in development over the last 45 years and they continue to be an important tool for investigating clouds, cloud-systems, and other small-scale atmospheric dynamics. The latest generation is now being used for weather prediction. The Advanced Research WRF (ARW) model, representative of this generation and of a class of models using explicit time-splitting integration techniques to efficiently integrate the Euler equations, is described in this paper. It is the first fully compressible conservative-form nonhydrostatic atmospheric model suitable for both research and weather prediction applications. Results are presented demonstrating its ability to resolve strongly nonlinear small-scale phenomena, clouds, and cloud systems. Kinetic energy spectra and other statistics show that the model is simulating small scales in numerical weather prediction applications, while necessarily removing energy at the gridscale but minimizing artificial dissipation at the resolved scales. Filtering requirements for atmospheric models and filters used in the ARW model are discussed. © 2007 Elsevier Inc. All rights reserved. MSC: 65M06; 65M12; 76E06; 76R10; 76U05; 86A10
Data-driven comparison of spatio-temporal monitoring techniques
Monitoring marine ecosystems is challenging due to the dynamic and unpredictable nature of environmental phenomena. In this work we survey a series of techniques used in information gathering that can be used to increase experts' understanding of marine ecosystems through dynamic monitoring. To achieve this, an underwater glider simulator is constructed, and four different path planning algorithms are investigated: Boustrophedon paths, a gradient-based approach, a Level-Sets method, and Sequential Bayesian Optimization. Each planner attempts to maximize the time the glider spends in an area where ocean variables are above a threshold value of interest. To emulate marine ecosystem sensor data, ocean temperatures are used. The planners are each simulated 50 times at random starting times and locations. After validation through simulation, we show that informed decision making improves performance, but more accurate prediction of ocean conditions would be necessary to benefit from long-horizon lookahead planning.
Understanding mobile SNS continuance usage in China from the perspectives of social influence and privacy concern
Retaining users and facilitating continuance usage are crucial to the success of mobile social network services (SNS). This research examines the continuance usage of mobile SNS in China by integrating both the perspectives of social influence and privacy concern. Social influence includes three processes: compliance, identification and internalization, which are respectively represented by subjective norm, social identity, and group norm. The results indicate that these three factors and privacy concern have significant effects on continuance usage. The results suggest that service providers should address the issues of social influence and privacy concern to encourage mobile SNS continuance usage. 2014 Elsevier Ltd. All rights reserved.
Triple seasonal methods for short-term electricity demand forecasting
Online short-term load forecasting is needed for the real-time scheduling of electricity generation. Univariate methods have been developed that model the intraweek and intraday seasonal cycles in intraday load data. Three such methods, shown to be competitive in recent empirical studies, are double seasonal ARMA, an adaptation of Holt-Winters exponential smoothing for double seasonality, and another, recently proposed, exponential smoothing method. In multiple years of load data, in addition to intraday and intraweek cycles, an intrayear seasonal cycle is also apparent. We extend the three double seasonal methods in order to accommodate the intrayear seasonal cycle. Using six years of British and French data, we show that for prediction up to a day-ahead the triple seasonal methods outperform the double seasonal methods, and also a univariate neural network approach. Further improvement in accuracy is produced by using a combination of the forecasts from two of the triple seasonal methods.
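The triple seasonal idea can be sketched as exponential smoothing with a level plus three seasonal components indexed by intraday, intraweek and intrayear periods. The additive form, the smoothing constants and the deliberately short periods in the demo below are illustrative simplifications rather than the paper's exact (e.g. AR-adjusted, multiplicative) formulations.

```python
# Compact additive sketch of triple seasonal exponential smoothing: a level
# plus three seasonal components. All parameters here are illustrative.

def triple_seasonal_forecast(y, s1, s2, s3, alpha=0.1, d1=0.2, d2=0.2, d3=0.2, horizon=1):
    level = sum(y[:s3]) / s3
    seas1 = [0.0] * s1          # intraday indices
    seas2 = [0.0] * s2          # intraweek indices
    seas3 = [0.0] * s3          # intrayear indices

    for t, obs in enumerate(y):
        i1, i2, i3 = t % s1, t % s2, t % s3
        deseason = obs - seas1[i1] - seas2[i2] - seas3[i3]
        level = alpha * deseason + (1 - alpha) * level
        seas1[i1] = d1 * (obs - level - seas2[i2] - seas3[i3]) + (1 - d1) * seas1[i1]
        seas2[i2] = d2 * (obs - level - seas1[i1] - seas3[i3]) + (1 - d2) * seas2[i2]
        seas3[i3] = d3 * (obs - level - seas1[i1] - seas2[i2]) + (1 - d3) * seas3[i3]

    t = len(y) + horizon - 1
    return level + seas1[t % s1] + seas2[t % s2] + seas3[t % s3]

# Demo with artificial data and deliberately short seasonal periods.
demo = [10 + (t % 4) + 2 * (t % 8 >= 4) for t in range(200)]
print(round(triple_seasonal_forecast(demo, s1=4, s2=8, s3=16, horizon=1), 2))
```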
Learning to Paraphrase for Question Answering
Question answering (QA) systems are sensitive to the many different ways natural language expresses the same information need. In this paper we turn to paraphrases as a means of capturing this knowledge and present a general framework which learns felicitous paraphrases for various QA tasks. Our method is trained end-to-end using question-answer pairs as a supervision signal. A question and its paraphrases serve as input to a neural scoring model which assigns higher weights to linguistic expressions most likely to yield correct answers. We evaluate our approach on QA over Freebase and answer sentence selection. Experimental results on three datasets show that our framework consistently improves performance, achieving competitive results despite the use of simple QA models.
Fast moving-object detection in H.264/AVC compressed domain for video surveillance
This paper discusses a novel fast approach for moving object detection in H.264/AVC compressed domain for video surveillance applications. The proposed algorithm initially segments out edges from regions with motion at macroblock level by utilizing the gradient of quantization parameter over 2D-image space. A spatial median filtering of the segmented edges followed by weighted temporal accumulation accounts for whole object segmentation. To attain sub-macroblock (4×4) level precision, the size of macroblocks (in bits) is interpolated using a two tap filter. Partial decoding rules out the complexity involved in full decoding and gives fast foreground segmentation results. Compared to other compressed domain techniques, the proposed approach allows the video streams to be encoded with different quantization parameters across macroblocks thereby increasing flexibility in bit rate adjustment.
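The post-processing chain described above, spatial median filtering of a macroblock-level motion map followed by weighted temporal accumulation and thresholding, can be sketched on synthetic data as follows; extracting the quantization-parameter gradient from a real H.264/AVC bitstream is outside the scope of this snippet.

```python
import numpy as np
from scipy.ndimage import median_filter

# Spatial median filtering + weighted temporal accumulation on synthetic
# macroblock-level motion maps. Decay and threshold values are assumptions.

rng = np.random.default_rng(1)

def segment(frames, decay=0.6, threshold=1.0):
    """frames: list of 2-D macroblock-level motion-energy maps."""
    accum = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        cleaned = median_filter(f, size=3)              # remove isolated noisy macroblocks
        accum = decay * accum + (1 - decay) * cleaned   # weighted temporal accumulation
    return accum > threshold                            # foreground mask

# Synthetic 9x9 macroblock grid: a 3x3 "object" moves, plus spurious noise.
frames = []
for t in range(5):
    f = np.zeros((9, 9))
    f[3:6, 2 + t:5 + t] = 4.0                           # moving block
    f[rng.integers(0, 9), rng.integers(0, 9)] = 6.0     # spurious macroblock
    frames.append(f)

print(segment(frames).astype(int))
```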
Correction: Association between CASP8 –652 6N Del Polymorphism (rs3834129) and Colorectal Cancer Risk: Results from a Multi-Centric Study
The common -652 6N del variant in the CASP8 promoter (rs3834129) has been described as a putative low-penetrance risk factor for different cancer types. In particular, some studies suggested that the deleted allele (del) was inversely associated with CRC risk while other analyses failed to confirm this. Hence, to better understand the role of this variant in the risk of developing CRC, we performed a multi-centric case-control study. In the study, the variant -652 6N del was genotyped in a total of 6,733 CRC cases and 7,576 controls recruited by six different centers located in Spain, Italy, USA, England, Czech Republic and the Netherlands collaborating to the international consortium COGENT (COlorectal cancer GENeTics). Our analysis indicated that rs3834129 was not associated with CRC risk in the full data set. However, the del allele was under-represented in one set of cases with a family history of CRC (per allele model OR = 0.79, 95% CI = 0.69-0.90) suggesting this allele might be a protective factor versus familial CRC. Since this multi-centric case-control study was performed on a very large sample size, it provided robust clarification of the effect of rs3834129 on the risk of developing CRC in Caucasians.
Efficacy and safety of multiple doses of QGE031 (ligelizumab) versus omalizumab and placebo in inhibiting allergen-induced early asthmatic responses.
BACKGROUND Omalizumab is an established anti-IgE therapy for the treatment of allergic diseases that prevents IgE from binding to its receptor. QGE031 is an investigational anti-IgE antibody that binds IgE with higher affinity than omalizumab. OBJECTIVE This study compared the effects of QGE031 with those of omalizumab on clinical efficacy, IgE levels, and FcεRI expression in a clinical model of allergic asthma. METHODS Thirty-seven patients with mild allergic asthma were randomized to subcutaneous omalizumab, placebo, or QGE031 at 24, 72, or 240 mg every 2 weeks for 10 weeks in a double-blind, parallel-group multicenter study. Inhaled allergen challenges and skin tests were conducted before dosing and at weeks 6, 12, and 18, and blood was collected until 24 weeks after the first dose. RESULTS QGE031 elicited a concentration- and time-dependent change in the provocative concentration of allergen causing a 15% decrease in FEV1 (allergen PC15) that was maximal and approximately 3-fold greater than that of omalizumab (P = .10) and 16-fold greater than that of placebo (P = .0001) at week 12 in the 240-mg cohort. Skin responses reached 85% suppression at week 12 in the 240-mg cohort and were maximal at week 18. The top doses of QGE031 consistently suppressed skin test responses among subjects but had a variable effect on allergen PC15 (2-fold to 500-fold change). QGE031 was well tolerated. CONCLUSION QGE031 has greater efficacy than omalizumab on inhaled and skin allergen responses in patients with mild allergic asthma. These data support the clinical development of QGE031 as a treatment of asthma.
TrafficModeler extensions: A case for rapid VANET simulation using, OMNET++, SUMO, and VEINS
Rapid simulation of Vehicular ad hoc networks (VANETs) is increasingly needed. However, this is not easy to achieve because road traffic and network communication simulators are complex and often hybrid frameworks are required. One such a hybrid approach, Vehicles in Network Simulation (Veins), incorporates the widely used Simulation of Urban Mobility (SUMO) and the renowned discrete event simulation environment OMNET++. Additionally, traffic demand modeling is a laborious task that may be prone to errors. Hence, any new contributions or improvements in traffic modeling are important as well as any effort to help the understanding of these complex frameworks. We present some extensions to TrafficModeler, a contributed program to the SUMO distribution. These extensions allow for easy traffic simulation on Veins using maps imported from OpenStreetMaps (OSM). We present guidelines for rapid VANET simulation with the combined use of OSM, TrafficModeler, SUMO, OMNET++, and Veins. We create and present an example to illustrate how to develop a rapid VANET simulation.
PREDICTING AND DETERMINING THE CONTACT PRESSURE DISTRIBUTION IN JOINTS FORMED BY V-BAND CLAMPS
V-band clamps are utilised in a wide range of industries to connect together a pair of circular flanges, for ducts, pipes, turbocharger housings and even to form a joint between satellites and their delivery vehicle. In this paper, using a previously developed axisymmetric finite element model, the impact of contact pressure on the contact surface of the V-band clamp was studied and surface roughness measurements were used to investigate the distribution of contact pressure around the circumference of the V-band.
Identify Online Store Review Spammers via Social Review Graph
Online shopping reviews provide valuable information for customers to compare the quality of products, store services, and many other aspects of future purchases. However, spammers are joining this community, trying to mislead consumers by writing fake or unfair reviews to confuse the consumers. Previous attempts have used reviewers' behaviors, such as text similarity and rating patterns, to detect spammers. These studies are able to identify certain types of spammers, for instance, those who post many similar reviews about one target. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like normal reviewers, and thus cannot be detected by the available techniques. In this article, we propose a novel concept of a review graph to capture the relationships among all reviewers, reviews and stores that the reviewers have reviewed as a heterogeneous graph. We explore how interactions between nodes in this graph could reveal the cause of spam and propose an iterative computation model to identify suspicious reviewers. In the review graph, we have three kinds of nodes, namely, reviewer, review, and store. We capture their relationships by introducing three fundamental concepts, the trustiness of reviewers, the honesty of reviews, and the reliability of stores, and identifying their interrelationships: a reviewer is more trustworthy if the person has written more honest reviews; a store is more reliable if it has more positive reviews from trustworthy reviewers; and a review is more honest if many other honest reviews support it. This is the first time such intricate relationships have been identified for spam detection and captured in a graph model. We further develop an effective computation method based on the proposed graph model. Different from existing approaches, we do not use any review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.
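The mutual reinforcement between trustiness, honesty and reliability can be illustrated with a small fixpoint iteration. The update rules below (trust-weighted agreement between reviews of the same store, simple averages on [0, 1]) are illustrative stand-ins for the paper's actual definitions.

```python
# Toy fixpoint iteration over reviewer trustiness, review honesty and store
# reliability. Update formulas are illustrative, not the paper's definitions.

reviews = [  # (review id, reviewer, store, rating in [0, 1])
    (0, "r1", "s1", 0.9),
    (1, "r2", "s1", 0.95),
    (2, "r3", "s1", 0.1),    # outlier review that disagrees with the consensus
    (3, "r1", "s2", 0.8),
]
reviewers = {r for _, r, _, _ in reviews}
stores = {s for _, _, s, _ in reviews}

trust = {r: 0.5 for r in reviewers}

for _ in range(20):
    honesty = {}
    for rid, rev, store, rating in reviews:
        # A review is more honest if trusted reviewers of the same store agree with it.
        peers = [(trust[r2], rat2) for rid2, r2, s2, rat2 in reviews
                 if s2 == store and rid2 != rid]
        if peers:
            support = sum(t * (1.0 - abs(rating - rat2)) for t, rat2 in peers)
            honesty[rid] = support / sum(t for t, _ in peers)
        else:
            honesty[rid] = 0.5
    # A reviewer is more trustworthy if the person has written more honest reviews.
    for r in reviewers:
        scores = [honesty[rid] for rid, rev, _, _ in reviews if rev == r]
        trust[r] = sum(scores) / len(scores)
    # A store is more reliable if trusted reviewers rate it positively.
    reliability = {s: sum(trust[rev] * rat for _, rev, st, rat in reviews if st == s) /
                      sum(trust[rev] for _, rev, st, rat in reviews if st == s)
                   for s in stores}

print({r: round(t, 2) for r, t in trust.items()})
print({s: round(v, 2) for s, v in reliability.items()})
```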
Software as Capital: an economic perspective on software engineering [Book Review]
For thousands of years, humankind's ever-rising mastery of technology has been embodied in various tools, devices, and even physical structures. Of course, knowledge codified in books and diagrams, and human skills passed on from master to apprentice and parent to child, have also served as carriers of practical techniques and prescriptions. But the most effective carriers of technology, right down to present times, have been capital goods--ranging from the simplest tools to the most intricate machines. The reason is that knowledge integrated in durable physical devices transcends barriers of language, time, and geographic space. In his fascinating book, Software as Capital, Howard Baetjer Jr. describes and analyzes the features and significance of computer software, which he points out is a very special kind of capital. He begins by briefly reviewing the role of capital goods in industrial production. "Producer durable" machines and equipment--fixed capital--have the common characteristic that they last over many production cycles and raise the productivity of labor. In such devices, application-specific technological knowledge is embodied in the machine or structure, hard-coded in its design and construction. Indeed, Baetjer argues that capital goods are embodied knowledge, which, working in tandem with formalized and tacit knowledge resident in people and their social organizations, makes up the productive capability of industrial society. In Baetjer's view, the evolution of computer software is a direct extension of the development of knowledge by industrial society through the design and construction of capital structures of ever-increasing complexity. Just like conventional capital equipment, software embodies well-tested procedures for carrying out specific operations. In other words, software is knowledge encoded in computer-executable logic. In the pre-computer era, such operational logic was designed into the hardware of the machine itself, as in mechanical or electronic switching systems. But with the advent of low-cost computing, control logic detaches itself from hardware and develops a relatively independent existence as software. However, software is used not just in Software as Capital: an Economic Perspective on Software Engineering. Baetjer Jr., Howard, IEEE Computer Society, Los Alamitos, Calif., 1998. 194 pp., $25.
Classification of Black Tea Taste and Correlation With Tea Taster's Mark Using Voltammetric Electronic Tongue
Tea quality assessment is a difficult task because of the presence of innumerable compounds and their diverse contribution to tea quality. As a result, instrumental evaluation of tea quality is not practiced in the industry, and tea samples are assessed by experienced tea tasters. There have been very few reports where an electronic tongue has been used for the discrimination of the taste of tea samples. In this paper, a voltammetric electronic tongue instrument is described, which can declare tea-taster-like scores for black tea. The electronic tongue is based on the principle of pulse voltammetry and consists of an array of five working electrodes along with a counter and a reference electrode. The five working electrodes are of gold, iridium, palladium, platinum, and rhodium. The voltage equivalent of the current flowing between the working electrode and the counter electrode, generated in the tea liquor when a pulse voltage is applied between the working electrode and the reference electrode, is used for data analysis. First, the sampled data are compressed using the discrete wavelet transform (DWT) and are then processed using principal component analysis (PCA) and linear discriminant analysis (LDA) for visualization of underlying clusters. Finally, different pattern recognition models based on neural networks are investigated to carry out a correlation study with the tea tasters' scores of five different grades of black tea samples obtained from a tea garden in India. The efficacy of the classifier has been established using tenfold cross-validation.
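The DWT-then-PCA/LDA chain can be sketched as below with PyWavelets and scikit-learn. The synthetic "tea" signals, the db4 wavelet, the decomposition level and the component counts are all assumptions made for illustration; the neural-network correlation models are not included.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# DWT compression followed by PCA/LDA on synthetic voltammetric-style signals.
# Signal model, wavelet choice and component counts are illustrative assumptions.

rng = np.random.default_rng(0)
n_per_grade, n_grades, sig_len = 20, 3, 256

signals, labels = [], []
for grade in range(n_grades):
    for _ in range(n_per_grade):
        t = np.linspace(0, 1, sig_len)
        s = np.exp(-5 * t) * (1 + 0.3 * grade) + 0.05 * rng.normal(size=sig_len)
        signals.append(s)
        labels.append(grade)
X_raw, y = np.array(signals), np.array(labels)

# DWT compression: keep only the coarse approximation coefficients.
X_dwt = np.array([pywt.wavedec(s, "db4", level=4)[0] for s in X_raw])

# PCA for decorrelation/visualization, LDA for class separation.
X_pca = PCA(n_components=5).fit_transform(X_dwt)
lda = LinearDiscriminantAnalysis().fit(X_pca, y)
print("training accuracy:", lda.score(X_pca, y))
```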
Direct carbon dioxide emissions from civil aircraft
Global airlines consume over 5 million barrels of oil per day, and the resulting carbon dioxide (CO2) emitted by aircraft engines is of concern. This article provides a contemporary review of the literature associated with the measures available to the civil aviation industry for mitigating CO2 emissions from aircraft. The measures are addressed under two categories: policy and legal-related measures, and technological and operational measures. Results of the review are used to develop several insights into the challenges faced. The analysis shows that forecasts for strong growth in air-traffic will result in civil aviation becoming an increasingly significant contributor to anthropogenic CO2 emissions. Some mitigation-measures can be left to market-forces as the key-driver for implementation because they directly reduce airlines' fuel consumption, and their impact on reducing fuel-costs will be welcomed by the industry. Other mitigation-measures cannot be left to market-forces. Speed of implementation and stringency of these measures will not be satisfactorily resolved unattended, and the current global regulatory-framework does not provide the necessary strength of stewardship. A global regulator with 'teeth' needs to be established, but investing such a body with the appropriate level of authority requires securing an international agreement which history would suggest is going to be very difficult. If all mitigation-measures are successfully implemented, it is still likely that traffic growth-rates will continue to out-pace emissions reduction-rates. Therefore, to achieve an overall reduction in CO2 emissions, behaviour change will be necessary to reduce demand for air-travel. However, reducing demand will be strongly resisted by all stakeholders in the industry; and the ticket price-increases necessary to induce the required reduction in traffic growth-rates place a monetary-value on CO2 emissions of approximately 7-100 times greater than other common valuations. It is clear that, whilst aviation must remain one piece of the transport-jigsaw, environmentally a global regulator with 'teeth' is
Validation of MRI-based 3D digital atlas registration with histological and autoradiographic volumes: An anatomofunctional transgenic mouse brain imaging study
Murine models are commonly used in neuroscience to improve our knowledge of disease processes and to test drug effects. To accurately study neuroanatomy and brain function in small animals, histological staining and ex vivo autoradiography remain the gold standards to date. These analyses are classically performed by manually tracing regions of interest, which is time-consuming. For this reason, only a few 2D tissue sections are usually processed, resulting in a loss of information. We therefore proposed to match a 3D digital atlas with previously 3D-reconstructed post mortem data to automatically evaluate morphology and function in mouse brain structures. We used a freely available MRI-based 3D digital atlas derived from C57Bl/6J mouse brain scans (9.4T). The histological and autoradiographic volumes used were obtained from a preliminary study in APP(SL)/PS1(M146L) transgenic mice, models of Alzheimer's disease, and their control littermates (PS1(M146L)). We first deformed the original 3D MR images to match our experimental volumes. We then applied deformation parameters to warp the 3D digital atlas to match the data to be studied. The reliability of our method was qualitatively and quantitatively assessed by comparing atlas-based and manual segmentations in 3D. Our approach yields faster and more robust results than standard methods in the investigation of post mortem mouse data sets at the level of brain structures. It also constitutes an original method for the validation of an MRI-based atlas using histology and autoradiography as anatomical and functional references, respectively.
Inhibition of adipose tissue lipolysis increases intramuscular lipid use in type 2 diabetic patients
In the present study, we investigated the consequences of adipose tissue lipolytic inhibition on skeletal muscle substrate use in type 2 diabetic patients. We studied ten type 2 diabetic patients under the following conditions: (1) at rest; (2) during 60 min of cycling exercise at 50% of maximal workload capacity and subsequent recovery. Studies were done under normal, fasting conditions (control trial: CON) and following administration of a nicotinic acid analogue (low plasma non-esterified fatty acid trial: LFA). Continuous [U-13C]palmitate and [6,6 -2H2]glucose infusions were applied to quantify plasma NEFA and glucose oxidation rates, and to estimate intramuscular triacylglycerol (IMTG) and glycogen use. Muscle biopsies were collected before and after exercise to determine net changes in lipid and glycogen content specific to muscle fibre type. Following administration of the nicotinic acid analogue (Acipimox), the plasma NEFA rate of appearance was effectively reduced, resulting in lower NEFA concentrations in the LFA trial (p<0.001). Plasma NEFA oxidation rates were substantially reduced at rest, during exercise and subsequent recovery in the LFA trial. The lower plasma NEFA oxidation rates were compensated by an increase in IMTG and endogenous carbohydrate use (p<0.05). Plasma glucose disposal rates did not differ between trials. In accordance with the tracer data, a greater net decline in type I muscle fibre lipid content was observed following exercise in the LFA trial (p<0.05). This study shows that plasma NEFA availability regulates IMTG use, and that adipose tissue lipolytic inhibition, in combination with exercise, could be an effective means of augmenting intramuscular lipid and glycogen use in type 2 diabetic patients in an overnight fasted state.
Learning Contextual Inquiry and Distributed Cognition: a case study on technology use in anaesthesia
There have been few studies on how analysts learn or use frameworks to support gathering and analysis of field data. Distributed Cognition for Teamwork (DiCoT) is a framework that has been developed to facilitate the learning of Distributed Cognition (DCog), focusing on analysing small team interactions. DiCoT, in turn, exploits representations from Contextual Inquiry (CI). The present study is a reflective account of the experience of learning first CI and then DiCoT for studying the use of infusion devices in operating theatres. We report on how each framework supported a novice analyst (the first author) in structuring his data gathering and analysis, and the challenges that he faced. There are three contributions of this work: (1) an example of learning CI and DCog in a semi-structured way; (2) an account of the process and outcomes of learning and using CI and DiCoT in a complex setting; and (3) an outline account of information flow in anaesthesia. While CI was easier to learn and consequently gave better initial support to the novice analyst entering a complex work setting, DiCoT gave added value through its focus on information propagation and transformation as well as the roles of people and artefacts in supporting communication and situation awareness. This study makes visible many of the challenges of learning to apply a framework that are commonly encountered but rarely reported.
Joint Power Allocation and Subcarrier Pairing for Cooperative OFDM AF Multi-Relay Networks
In the conventional subcarrier pairing scheme for cooperative orthogonal frequency division multiplexing (OFDM) amplify-and-forward multi-relay networks, each subcarrier pair (SP) is assigned to only one relay in order to avoid interference; consequently, over a given subcarrier, the destination receives a signal from only one relay. In this letter, we propose to assign each SP to all the relays, so that over a given subcarrier the destination receives signals transmitted from all the relays. We propose a joint power allocation and subcarrier pairing scheme that maximizes the transmission rate subject to a total network power constraint. The problem is simplified and solved using the dual method.
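The following is a simplified sketch of the two ingredients, sorted-gain subcarrier pairing and water-filling power allocation under a single total-power constraint; it is not the letter's dual-method solution, and the channel gains and the amplify-and-forward gain proxy are illustrative assumptions.

```python
# Simplified sketch: pair subcarriers by sorted gains, then water-fill power
# over the paired equivalent gains under one total-power constraint.
import numpy as np

def pair_subcarriers(g1, g2):
    """Pair the k-th strongest first-hop subcarrier with the k-th strongest second-hop one."""
    return list(zip(np.argsort(-g1), np.argsort(-g2)))

def water_filling(gains, p_total, iters=60):
    """Bisection on the water level mu so that sum(max(mu - 1/g, 0)) = p_total."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

g1 = np.random.rand(8) + 0.1          # source->relay gains (illustrative)
g2 = np.random.rand(8) + 0.1          # relay->destination gains (illustrative)
pairs = pair_subcarriers(g1, g2)
# crude amplify-and-forward end-to-end SNR proxy: g1*g2 / (g1 + g2 + 1)
eq_gain = np.array([g1[i] * g2[j] / (g1[i] + g2[j] + 1.0) for i, j in pairs])
power = water_filling(eq_gain, p_total=10.0)
rate = 0.5 * np.log2(1.0 + eq_gain * power).sum()
print(pairs, power.round(3), round(rate, 3))
```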
Enhancing well-being and alleviating depressive symptoms with positive psychology interventions: a practice-friendly meta-analysis.
Do positive psychology interventions (that is, treatment methods or intentional activities aimed at cultivating positive feelings, positive behaviors, or positive cognitions) enhance well-being and ameliorate depressive symptoms? A meta-analysis of 51 such interventions with 4,266 individuals was conducted to address this question and to provide practical guidance to clinicians. The results revealed that positive psychology interventions do indeed significantly enhance well-being (mean r=.29) and decrease depressive symptoms (mean r=.31). In addition, several factors were found to impact the effectiveness of positive psychology interventions, including the depression status, self-selection, and age of participants, as well as the format and duration of the interventions. Accordingly, clinicians should be encouraged to incorporate positive psychology techniques into their clinical work, particularly for treating clients who are depressed, relatively older, or highly motivated to improve. Our findings also suggest that clinicians would do well to deliver positive psychology interventions as individual (versus group) therapy and for relatively longer periods of time.
Service quality along the supply chain: implications for purchasing
The overall objective of this research was to explore the association between implementation of cooperative purchasing/supplier relationships, internal service quality, and an organization’s ability to provide quality products and services to its customers. Specifically, purchasing-related factors influencing internal and external product and service quality were identified from the literature and an internal service quality model was developed and then tested using empirical data. Survey data were collected from 118 US purchasing executives in a wide range of industries. The findings from this study indicate the existence of strong positive relationships between implementation of cooperative purchasing/supplier relationships, internal service quality, and the service and product quality provided to external customers. Additionally, the key role of purchasing in the integration and communication of quality expectations and quality performance throughout the firm is demonstrated.
A Privacy-Preserving Framework for Personalized, Social Recommendations
We consider the problem of producing item recommendations that are personalized based on a user’s social network, while simultaneously preventing the disclosure of sensitive user-item preferences (e.g., product purchases, ad clicks, web browsing history, etc.). Our main contribution is a privacy-preserving framework for a class of social recommendation algorithms that provides strong, formal privacy guarantees under the model of differential privacy. Existing mechanisms for achieving differential privacy lead to an unacceptable loss of utility when applied to the social recommendation problem. To address this, the proposed framework incorporates a clustering procedure that groups users according to the natural community structure of the social network and significantly reduces the amount of noise required to satisfy differential privacy. Although this reduction in noise comes at the cost of some approximation error, we show that the benefits of the former significantly outweigh the latter. We explore the privacy-utility trade-off for several different instantiations of the proposed framework on two real-world data sets and show that useful social recommendations can be produced without sacrificing privacy. We also experimentally compare the proposed framework with several existing differential privacy mechanisms and show that the proposed framework significantly outperforms all of them in this setting.
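To illustrate why cluster-level aggregation reduces the noise needed for differential privacy, here is a toy sketch of the Laplace mechanism applied to per-cluster item counts; the framework in the paper is considerably more involved, and the assumption below that each user contributes at most one bounded preference (so the L1 sensitivity is 1) is made purely for the example.

```python
# Toy sketch: add Laplace noise to cluster-level aggregates instead of
# releasing per-user data. Assumes each user contributes at most one
# preference, giving the count matrix an L1 sensitivity of 1.
import numpy as np

def private_cluster_counts(user_cluster, user_item, n_clusters, n_items, epsilon):
    counts = np.zeros((n_clusters, n_items))
    for c, i in zip(user_cluster, user_item):
        counts[c, i] += 1.0                        # one bounded contribution per user
    noise = np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
    return counts + noise                          # Laplace mechanism, sensitivity 1

def recommend(noisy_counts, cluster, k=3):
    return np.argsort(-noisy_counts[cluster])[:k]  # top items for the user's cluster

rng = np.random.default_rng(0)
user_cluster = rng.integers(0, 4, size=200)        # community assignment from clustering
user_item = rng.integers(0, 10, size=200)          # each user's (single) preferred item
noisy = private_cluster_counts(user_cluster, user_item, 4, 10, epsilon=0.5)
print(recommend(noisy, cluster=2))
```

Because the noise per cell is fixed by epsilon while the counts grow with cluster size, larger communities yield more accurate noisy aggregates, which is the intuition the framework exploits.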
Global nutrition transition and the pandemic of obesity in developing countries.
Decades ago, discussion of an impending global pandemic of obesity was thought of as heresy. But in the 1970s, diets began to shift towards increased reliance upon processed foods, increased away-from-home food intake, and increased use of edible oils and sugar-sweetened beverages. Reductions in physical activity and increases in sedentary behavior began to be seen as well. The negative effects of these changes began to be recognized in the early 1990s, primarily in low- and middle-income populations, but they did not become clearly acknowledged until diabetes, hypertension, and obesity began to dominate the globe. Now, rapid increases in the rates of obesity and overweight are widely documented, from urban and rural areas in the poorest countries of sub-Saharan Africa and South Asia to populations in countries with higher income levels. Concurrent rapid shifts in diet and activity are well documented as well. An array of large-scale programmatic and policy measures is being explored in a few countries; however, few countries are engaged in serious efforts to prevent the serious dietary challenges being faced.
Retrospective cohort study of diabetes mellitus and antipsychotic treatment in a geriatric population in the United States.
OBJECTIVES The objective of this study was to investigate risk of diabetes among elderly patients during treatment with antipsychotic medications. DESIGN We conducted a longitudinal, retrospective study assessing the incidence of new prescription claims for antihyperglycemic agents during antipsychotic therapy. SETTING Prescription claims from the AdvancePCS claim database were followed for 6 to 9 months. PARTICIPANTS Study participants consisted of patients in the United States aged 60+ and receiving antipsychotic monotherapy. The following cohorts were studied: an elderly reference population (no antipsychotics: n = 1,836,799), those receiving haloperidol (n = 6481) or thioridazine (n = 1658); all patients receiving any conventional antipsychotic monotherapy (n = 11,546), clozapine (n = 117), olanzapine (n = 5382), quetiapine (n = 1664), and risperidone (n = 12,244), and all patients receiving any atypical antipsychotic monotherapy (n = 19,407). MEASUREMENTS We used Cox proportional hazards regression to determine the risk ratio of diabetes for antipsychotic cohorts relative to the reference population. Covariates included sex and exposure duration. RESULTS New antihyperglycemic prescription rates were higher in each antipsychotic cohort than in the reference population. Overall rates were no different between atypical and conventional antipsychotic cohorts. Among individual antipsychotic cohorts, rates were highest among patients treated with thioridazine (95% confidence interval [CI], 3.1- 5.7), lowest with quetiapine (95% CI, 1.3-2.9), and intermediate with haloperidol, olanzapine, and risperidone. Among atypical cohorts, only risperidone users had a significantly higher risk (95% CI, 1.05-1.60; P = 0.016) than for haloperidol. Conclusions about clozapine were hampered by the low number of patients. CONCLUSION These data suggest that diabetes risk is elevated among elderly patients receiving antipsychotic treatment. However, causality remains to be demonstrated. As a group, the risk for atypical antipsychotic users was not significantly different than for users of conventional antipsychotics.
Adjunctive oral ziprasidone in patients with acute mania treated with lithium or divalproex, part 2: influence of protocol-specific eligibility criteria on signal detection.
OBJECTIVES High failure rates of randomized controlled trials (RCTs) are well recognized but poorly understood. We report exploratory analyses from an adjunctive ziprasidone double-blind RCT in adults with bipolar I disorder (reported in part 1 of this article). Data collected by computer interviews and by site-based raters were analyzed to examine the impact of eligibility criteria on signal detection. METHOD Clinical assessments and a remote monitoring system, including a computer-administered Young Mania Rating Scale (YMRS(Comp)) were used to categorize subjects as eligible or ineligible on 3 key protocol-specified eligibility criteria. Data analyses compared treatment efficacy for eligible versus ineligible subgroups. All statistical analyses reported here are exploratory. Criteria were considered "impactful" if the difference between eligible and ineligible subjects on the YMRS change scores was ≥ 1 point. RESULTS 504 subjects had baseline and ≥ 1 post-randomization computer-administered assessments but only 180 (35.7%) met all 3 eligibility criteria based on computer assessments. There were no statistically significant differences between treatment groups in change from baseline YMRS score on the basis of site-based rater or computer assessments. All criteria tested improved signal detection except the entry criteria excluding subjects with ≥ 25% improvement from screen to baseline. CONCLUSIONS On the basis of computer assessments, nearly two-thirds of randomized subjects did not meet at least 1 protocol-specified eligibility criterion. These results suggest enrollment of ineligible subjects is likely to contribute to failure of acute efficacy studies. TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT00312494.
Partitioned Variational Inference: A unified framework encompassing federated and continual learning
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models. However, practitioners are faced with a fragmented literature that offers a bewildering array of algorithmic options: first, the choice of variational family; second, the granularity of the updates, e.g. whether updates are local to each data point and employ message passing, or global; and third, the method of optimization (bespoke or black-box, closed-form or stochastic updates, etc.). This paper presents a new framework, termed Partitioned Variational Inference (PVI), that explicitly acknowledges these algorithmic dimensions of VI, unifies the disparate literature, and provides guidance on usage. Crucially, the proposed PVI framework allows us to identify new ways of performing VI that are ideally suited to challenging learning scenarios, including federated learning (where distributed computing is leveraged to process non-centralized data) and continual learning (where new data and tasks arrive over time and must be accommodated quickly). We showcase these new capabilities by developing communication-efficient federated training of Bayesian neural networks and continual learning for Gaussian process models with private pseudo-points. The new methods significantly outperform the state-of-the-art, whilst being almost as straightforward to implement as standard VI.
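A toy sketch of the partitioned-update idea follows, using a conjugate Gaussian model where every step has a closed form: the global posterior is the prior times a product of per-partition approximate likelihood factors, and partitions are refined one at a time against their "cavity". This is illustrative only and is not the paper's algorithm or code; in this conjugate case the local update is exact, so a single sweep already converges.

```python
# Toy partitioned update for estimating a Gaussian mean with known noise.
# q(theta) = prior * prod_k t_k(theta), all stored as natural parameters
# (precision, precision*mean); each t_k is refined against its cavity.
import numpy as np

sigma2 = 1.0                          # known observation noise variance
prior = np.array([1.0, 0.0])          # natural params of N(0, 1) prior
data = [np.random.normal(2.0, 1.0, 30) for _ in range(4)]   # 4 data partitions
factors = [np.zeros(2) for _ in data]                        # t_k initialised to uniform

def posterior():
    return prior + sum(factors)       # q(theta) in natural parameters

for sweep in range(3):
    for k, y in enumerate(data):
        cavity = posterior() - factors[k]                     # remove partition k's factor
        # closed-form "local" step: exact posterior of block k under the cavity
        new_q = cavity + np.array([len(y) / sigma2, y.sum() / sigma2])
        factors[k] = new_q - cavity                           # updated approximate likelihood

lam, eta = posterior()
print("posterior mean %.3f, variance %.4f" % (eta / lam, 1.0 / lam))
```

The federated reading is that each partition's factor can live on a different client and only natural-parameter messages need to be exchanged; the continual-learning reading is that partitions arrive sequentially.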
Qualitative and Quantitative Assessment of Adenosine Triphosphate Stress Whole-Heart Dynamic Myocardial Perfusion Imaging Using 256-Slice Computed Tomography
BACKGROUND The aim of this study was to investigate the correlation of the qualitative transmural extent of hypoperfusion areas (HPA) using stress dynamic whole-heart computed tomography perfusion (CTP) imaging by 256-slice CT with CTP-derived myocardial blood flow (MBF) for the estimation of the severity of coronary artery stenosis. METHODS AND RESULTS Eleven patients underwent adenosine triphosphate (0.16 mg/kg/min, 5 min) stress dynamic CTP by 256-slice CT (coverage: 8 cm, 0.27 s/rotation), and 9 of the 11 patients underwent coronary angiography (CAG). Stress dynamic CTP (whole-heart datasets over 30 consecutive heart beats in systole without spatial and temporal gaps) was acquired with prospective ECG gating (effective radiation dose: 10.4 mSv). The extent of HPAs was visually graded using a 3-point score (normal, subendocardial, transmural). MBF (ml/100g/min) was measured by deconvolution. Differences in MBF (mean ± standard error) according to HPA and CAG results were evaluated. In 27 regions (3 major coronary territories in 9 patients), 11 coronary stenoses (> 50% reduction in diameter) were observed. In 353 myocardial segments, HPA was significantly related to MBF (P < 0.05; normal 295 ± 94; subendocardial 186 ± 67; and transmural 80 ± 53). Coronary territory analysis revealed a significant relationship between coronary stenosis severity and MBF (P < 0.05; non-significant stenosis [< 50%], 284 ± 97; moderate stenosis [50-70%], 184 ± 74; and severe stenosis [> 70%], 119 ± 69). CONCLUSION The qualitative transmural extent of HPA using stress whole-heart dynamic CTP imaging by 256-slice CT exhibits a good correlation with quantitative CTP-derived MBF and may aid in assessing the hemodynamic significance of coronary artery disease.
A model-based design methodology for cyber-physical systems
Model-based design is a powerful design technique for cyber-physical systems, but too often literature assumes knowledge of a methodology without reference to an explicit design process, instead focusing on isolated steps such as simulation, software synthesis, or verification. We combine these steps into an explicit and holistic methodology for model-based design of cyber-physical systems from abstraction to architecture, and from concept to realization. We decompose model-based design into ten fundamental steps, describe and evaluate an iterative design methodology, and evaluate this methodology in the development of a cyber-physical system.
Raman spectroscopic study of a non-heme iron enzyme soybean lipoxygenase-1 in low hydrated media
This work is a contribution to the experimental study of the structure-function relationship of enzymes in non-conventional media. As bound water is known to play a determining role in affinity and specificity, we chose aqueous media with restricted water content. The water activity of the reaction media was decreased by the addition of specific hydrosoluble cosolvents. Visible laser Raman spectroscopy was applied to determine the local (microenvironments of some aromatic residues) and global secondary and/or tertiary conformational changes of the model enzyme in the presence and absence of polyols or sugars at high concentrations.
Bayesian Online Changepoint Detection
Changepoints are abrupt variations in the generative parameters of a data sequence. Online detection of changepoints is useful in modelling and prediction of time series in application areas such as finance, biometrics, and robotics. While frequentist methods have yielded online filtering and prediction techniques, most Bayesian papers have focused on the retrospective segmentation problem. Here we examine the case where the model parameters before and after the changepoint are independent and we derive an online algorithm for exact inference of the most recent changepoint. We compute the probability distribution of the length of the current “run,” or time since the last changepoint, using a simple message-passing algorithm. Our implementation is highly modular so that the algorithm may be applied to a variety of types of data. We illustrate this modularity by demonstrating the algorithm on three different real-world data sets.
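A minimal sketch of the run-length recursion described above is given below, for the special case of a Gaussian observation model with known noise variance, a conjugate Normal prior on the mean, and a constant hazard; the parameter values and test data are illustrative, not taken from the paper.

```python
# Minimal Bayesian online changepoint detection: maintain a posterior over the
# current run length, growing it or resetting it at every new observation.
import numpy as np
from scipy.stats import norm

def bocpd(x, hazard=1/100, mu0=0.0, tau0=10.0, sigma=1.0):
    T = len(x)
    R = np.zeros((T + 1, T + 1)); R[0, 0] = 1.0          # run-length posterior
    mu, tau2 = np.array([mu0]), np.array([tau0**2])       # per-run posterior over the mean
    for t, xt in enumerate(x):
        pred = norm.pdf(xt, mu, np.sqrt(tau2 + sigma**2))          # predictive per run length
        growth = R[t, :t+1] * pred * (1 - hazard)                  # run continues
        cp = (R[t, :t+1] * pred * hazard).sum()                    # changepoint: run resets to 0
        R[t+1, 1:t+2], R[t+1, 0] = growth, cp
        R[t+1] /= R[t+1].sum()
        # conjugate update of each run's posterior, plus a fresh run of length 0
        post_tau2 = 1.0 / (1.0 / tau2 + 1.0 / sigma**2)
        post_mu = post_tau2 * (mu / tau2 + xt / sigma**2)
        mu = np.concatenate(([mu0], post_mu))
        tau2 = np.concatenate(([tau0**2], post_tau2))
    return R

x = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(4, 1, 100)])
R = bocpd(x)
print(R[-1].argmax())   # most probable current run length after the last point
```

The per-run sufficient statistics are exactly the "messages" referred to in the abstract; swapping in a different conjugate predictive changes only the `pred` and update lines.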
Risk assessment of hand washing efficacy using literature and experimental data.
This study simulated factors that influence the levels of bacteria on foodservice workers' hands. Relevant data were collected from the scientific literature and from laboratory experiments. Literature information collected included: initial bacterial counts on hands and water faucet spigots, bacterial population changes during hand washing as affected by soap type, sanitizing agent, drying method, and the presence of rings. Experimental data were also collected using Enterobacter aerogenes as a surrogate for transient bacteria. Both literature and experimental data were translated into appropriate discrete or probability distribution functions. The appropriate statistical distribution for each phase of the hand washing process was determined. These distributions were: initial count on hands, beta (2.82, 2.32, 7.5); washing reduction using regular soap, beta (3.01, 1.91, -3.00, 0.60); washing reduction using antimicrobial soap, beta (4.19, 2.99, -4.50, 1.50); washing reduction using chlorhexidine gluconate (CHG), triangular (-4.75, -1.00, 0); reductions from hot air drying, beta (3.52, 1.92, -0.20, 1.00); reduction from paper towel drying, triangular (-2.25, -0.75, 0); reduction due to alcohol sanitizer, gamma (-1.23, 4.42) -5.8; reduction due to alcohol-free sanitizer, gamma (2.22, 5.38) -5.00; and the effect of rings, beta (8.55, 23.35, 0.10, 0.45). Experimental data were fit to normal distributions (expressed as log percentage transfer rate): hand-to-spigot transfer, normal (-0.80, 1.09); spigot to hand, normal (0.36, 0.90). Soap with an antimicrobial agent (in particular, CHG) was observed to be more effective than regular soap. Hot air drying had the capacity to increase the amount of bacterial contamination on hands, while paper towel drying caused a slight decrease in contamination. There was little difference in the efficacy of alcohol and alcohol-free sanitizers. Ring wearing caused a slight decrease in the efficacy of hand washing. The experimental data validated the simulated combined effect of certain hand washing procedures based on distributions derived from reported studies. The conventional hand washing system caused a small increase in contamination on hands vs. the touch-free system. Sensitivity analysis revealed that the primary factors influencing final bacteria counts on the hand were sanitizer, soap, and drying method. This research represents an initial framework from which sound policy can be promulgated to control bacterial transmission via hand contacts.
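The Monte Carlo sketch below shows how such per-step distributions can be chained to simulate the final count on hands. The distribution families follow the abstract, but the parameter values are simplified placeholders rather than the fitted values reported in the study, and the shifted-gamma and scaled-beta forms are assumptions about how the reported parameterizations would be sampled.

```python
# Monte Carlo sketch: chain log10 changes through hand-washing steps.
import numpy as np
rng = np.random.default_rng(1)

def scaled_beta(a, b, lo, hi, size):
    """Beta(a, b) rescaled to the interval [lo, hi]."""
    return lo + (hi - lo) * rng.beta(a, b, size)

N = 100_000
initial = scaled_beta(2.8, 2.3, 2.0, 7.5, N)                 # log10 CFU on hands (placeholder)
soap = scaled_beta(3.0, 1.9, -3.0, 0.6, N)                   # log10 change from washing
drying = rng.triangular(-2.25, -0.75, 0.0, N)                # paper-towel drying
sanitizer = rng.gamma(2.2, 1.0, N) - 5.0                     # shifted gamma (placeholder)
final = initial + soap + drying + sanitizer

print("median final log10 count: %.2f" % np.median(final))
print("P(final > 3 log10 CFU)  : %.3f" % (final > 3.0).mean())
```

Sensitivity analysis of the kind described in the abstract can then be approximated by perturbing one step's distribution at a time and observing the change in the output quantiles.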
Ocean acidification and its potential effects on marine ecosystems.
Ocean acidification is rapidly changing the carbonate system of the world oceans. Past mass extinction events have been linked to ocean acidification, and the current rate of change in seawater chemistry is unprecedented. Evidence suggests that these changes will have significant consequences for marine taxa, particularly those that build skeletons, shells, and tests of biogenic calcium carbonate. Potential changes in species distributions and abundances could propagate through multiple trophic levels of marine food webs, though research into the long-term ecosystem impacts of ocean acidification is in its infancy. This review attempts to provide a general synthesis of known and/or hypothesized biological and ecosystem responses to increasing ocean acidification. Marine taxa covered in this review include tropical reef-building corals, cold-water corals, crustose coralline algae, Halimeda, benthic mollusks, echinoderms, coccolithophores, foraminifera, pteropods, seagrasses, jellyfishes, and fishes. The risk of irreversible ecosystem changes due to ocean acidification should enlighten the ongoing CO2 emissions debate and make it clear that the human dependence on fossil fuels must end quickly. Political will and significant large-scale investment in clean-energy technologies are essential if we are to avoid the most damaging effects of human-induced climate change, including ocean acidification.
Improved non-Probabilistic Roadmap method for determination of shortest nautical navigation path
The Probabilistic Roadmap (PRM) is one of the methods for finding the shortest path between origin and destination waypoints on maritime shipping routes. The algorithm works by assigning randomly distributed nodes over the search space, finding alternative routes through them, and then selecting the shortest one. As its name indicates, these candidate paths are determined by the positions of the nodes randomly distributed at the beginning of the PRM algorithm, so the global (or near-global) shortest path is generally not obtained. Hence, this paper addresses two issues: (i) removing the randomness using previously proposed methodologies, and (ii) reducing the total track mileage with the aid of the Hooke-Jeeves algorithm, a classical optimization method. To assess the performance of the proposed framework, 20 different scenarios are defined; the PRM-HJ combination and plain PRM are applied to all of these test problems, and the results are compared with respect to total track mileage.
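A rough sketch of the two-stage idea follows: build a roadmap, extract the shortest path, then locally nudge the intermediate waypoints with a simple pattern search to shorten the track. The circular obstacles, grid extents, step sizes and the crude pattern search are illustrative assumptions, not the paper's exact formulation of the Hooke-Jeeves refinement.

```python
# Sketch: PRM shortest path followed by a simple pattern-search refinement.
import numpy as np, networkx as nx

obstacles = [((5.0, 5.0), 2.0)]                       # (centre, radius) no-go circles

def collision_free(p, q, steps=20):
    pts = np.linspace(p, q, steps)
    return all(np.linalg.norm(pts - np.array(c), axis=1).min() > r for c, r in obstacles)

def prm_path(start, goal, n_nodes=150, k=8, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [np.array(start, float), np.array(goal, float)] + list(rng.uniform(0, 10, (n_nodes, 2)))
    G = nx.Graph()
    for i, p in enumerate(nodes):
        dists = [np.linalg.norm(p - q) for q in nodes]
        for j in np.argsort(dists)[1:k + 1]:          # connect to k nearest collision-free neighbours
            if collision_free(p, nodes[j]):
                G.add_edge(i, int(j), weight=dists[j])
    idx = nx.shortest_path(G, 0, 1, weight="weight")  # Dijkstra between start (0) and goal (1)
    return [nodes[i] for i in idx]

def path_length(path):
    return sum(np.linalg.norm(b - a) for a, b in zip(path, path[1:]))

def refine(path, step=0.5, shrink=0.5, rounds=4):
    """Crude pattern search: move each interior waypoint along +/- axis steps if that shortens the track."""
    path = [p.copy() for p in path]
    for _ in range(rounds):
        for i in range(1, len(path) - 1):
            for d in (np.array([step, 0.0]), np.array([-step, 0.0]),
                      np.array([0.0, step]), np.array([0.0, -step])):
                cand = path[i] + d
                old = np.linalg.norm(path[i-1] - path[i]) + np.linalg.norm(path[i] - path[i+1])
                new = np.linalg.norm(path[i-1] - cand) + np.linalg.norm(cand - path[i+1])
                if new < old and collision_free(path[i-1], cand) and collision_free(cand, path[i+1]):
                    path[i] = cand
        step *= shrink
    return path

p = prm_path((0, 0), (10, 10))
print(round(path_length(p), 3), round(path_length(refine(p)), 3))
```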
Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey
In order to understand the formation and subsequent evolution of galaxies one must first distinguish between the two main morphological classes of massive systems: spirals and early-type systems. This paper introduces a project, Galaxy Zoo, which provides visual morphological classifications for nearly one million galaxies, extracted from the Sloan Digital Sky Survey (SDSS). This achievement was made possible by inviting the general public to visually inspect and classify these galaxies via the internet. The project has obtained more than 4 × 10⁷ individual classifications made by ∼10⁵ participants. We discuss the motivation and strategy for this project, and detail how the classifications were performed and processed. We find that Galaxy Zoo results are consistent with those for subsets of SDSS galaxies classified by professional astronomers, thus demonstrating that our data provide a robust morphological catalogue. Obtaining morphologies by direct visual inspection avoids introducing biases associated with proxies for morphology such as colour, concentration or structural parameters. In addition, this catalogue can be used to directly compare SDSS morphologies with older data sets. The colour–magnitude diagrams for each morphological class are shown, and we illustrate how these distributions differ from those inferred using colour alone as a proxy for morphology.
Bayesian Active Remote Sensing Image Classification
In recent years, kernel methods, in particular support vector machines (SVMs), have been successfully introduced to remote sensing image classification. Their properties make them appropriate for dealing with a high number of image features and a low number of available labeled spectra. By contrast, alternative approaches based on (parametric) Bayesian inference have been introduced only sparingly in recent years, since assuming a particular prior data distribution may lead to poor results in remote sensing problems because of the specificities and complexity of the data. In this context, the emerging field of nonparametric Bayesian methods constitutes a proper theoretical framework to tackle the remote sensing image classification problem. This paper exploits the Bayesian modeling and inference paradigm to tackle the problem of kernel-based remote sensing image classification. This Bayesian methodology is appropriate for both finite- and infinite-dimensional feature spaces. The particular problem of active learning is addressed by proposing an incremental/active learning approach based on three different criteria: 1) the maximum differential of entropies; 2) the minimum distance to the decision boundary; and 3) the minimum normalized distance. Parameters are estimated by using the evidence Bayesian approach, the kernel trick, and the marginal distribution of the observations instead of the posterior distribution of the adaptive parameters. This approach allows us to deal with infinite-dimensional feature spaces. The proposed approach is tested on the challenging problem of urban monitoring from multispectral and synthetic aperture radar data and in multiclass land cover classification of hyperspectral images, in both purely supervised and active learning settings. Similar results are obtained when compared to SVMs in the supervised mode, with the advantage of providing posterior estimates for classification and automatic parameter learning. Comparison with random sampling as well as standard active learning methods such as margin sampling and entropy-query-by-bagging reveals a systematic overall accuracy gain and faster convergence with the number of queries.
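The sketch below illustrates pool-based active learning with the first of the three criteria, querying the sample whose predictive distribution has maximum entropy. The classifier (a plain logistic regression from scikit-learn) and the synthetic data are stand-ins for the paper's Bayesian kernel model, used only to make the query loop concrete.

```python
# Pool-based active learning with the maximum-predictive-entropy query rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

# Small initial training set containing both classes.
labelled = [int(i) for i in np.where(y == 0)[0][:5]] + [int(i) for i in np.where(y == 1)[0][:5]]
pool = [i for i in range(500) if i not in labelled]

for query in range(20):
    clf = LogisticRegression(max_iter=1000).fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)   # predictive uncertainty per pool sample
    pick = pool.pop(int(entropy.argmax()))                   # query the most uncertain sample
    labelled.append(pick)                                    # the "oracle" provides its label

print("accuracy after active queries:", clf.score(X, y))
```

Replacing the entropy line with a margin (distance-to-boundary) score gives the second criterion; the overall loop is unchanged.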
Targeting Infeasibility Questions on Obfuscated Codes
Software deobfuscation is a crucial activity in security analysis and especially in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed as an interesting alternative, more robust than static analysis and more complete than dynamic analysis. Yet, DSE only addresses certain kinds of questions encountered by a reverser, namely feasibility questions. Many issues arising during reversing, e.g. detecting protection schemes such as opaque predicates, fall into the category of infeasibility questions. In this article, we present Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight into its usage for other protection schemes. In particular, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware, allowing its complete deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we propose sparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of both dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily obfuscated binaries.
GET_PHYLOMARKERS, a Software Package to Select Optimal Orthologous Clusters for Phylogenomics and Inferring Pan-Genome Phylogenies, Used for a Critical Geno-Taxonomic Revision of the Genus Stenotrophomonas
The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.
Diagnostic applications of high-throughput DNA sequencing.
Advances in DNA sequencing technology have allowed comprehensive investigation of the genetics of human beings and human diseases. Insights from sequencing the genomes, exomes, or transcriptomes of healthy and diseased cells in patients are already enabling improved diagnostic classification, prognostication, and therapy selection for many diseases. Understanding the data obtained using new high-throughput DNA sequencing methods, choices made in sequencing strategies, and common challenges in data analysis and genotype-phenotype correlation is essential if pathologists, geneticists, and clinicians are to interpret the growing scientific literature in this area. This review highlights some of the major results and discoveries stemming from high-throughput DNA sequencing research in our understanding of Mendelian genetic disorders, hematologic cancer biology, infectious diseases, the immune system, transplant biology, and prenatal diagnostics. Transition of new DNA sequencing methodologies to the clinical laboratory is under way and is likely to have a major impact on all areas of medicine.
Novel Approach to Non-Invasive Blood Glucose Monitoring Based on Transmittance and Refraction of Visible Laser Light
Current blood glucose monitoring (BGM) techniques are invasive as they require a finger prick blood sample, a repetitively painful process that creates the risk of infection. BGM is essential to avoid complications arising due to abnormal blood glucose levels in diabetic patients. Laser light-based sensors have demonstrated a superior potential for BGM. Existing near-infrared (NIR)-based BGM techniques have shortcomings, such as the absorption of light in human tissue, higher signal-to-noise ratio, and lower accuracy, and these disadvantages have prevented NIR techniques from being employed for commercial BGM applications. A simple, compact, and cost-effective non-invasive device using visible red laser light of wavelength 650 nm for BGM (RL-BGM) is implemented in this paper. The RL-BGM monitoring device has three major technical advantages over NIR. Unlike NIR, red laser light has ~30 times better transmittance through human tissue. Furthermore, when compared with NIR, the refractive index of laser light is more sensitive to the variations in glucose level concentration resulting in faster response times ~7–10 s. Red laser light also demonstrates both higher linearity and accuracy for BGM. The designed RL-BGM device has been tested for both in vitro and in vivo cases and several experimental results have been generated to ensure the accuracy and precision of the proposed BGM sensor.
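As a purely illustrative calibration sketch (not the device's published algorithm), the mapping from an optical readout such as the normalised transmitted intensity of the 650 nm beam to a glucose estimate can be fitted with a simple linear model against reference measurements; the data values and the linear form below are assumptions made only for the example.

```python
# Illustrative linear calibration of an optical readout against reference glucose.
import numpy as np

glucose_ref = np.array([70, 90, 110, 140, 180, 220, 260], dtype=float)   # mg/dL (reference meter)
intensity   = np.array([0.82, 0.79, 0.76, 0.72, 0.66, 0.61, 0.55])       # made-up normalised readout

slope, intercept = np.polyfit(intensity, glucose_ref, 1)   # least-squares line: glucose = a*I + b

def estimate_glucose(reading):
    return slope * reading + intercept

print("estimate at I = 0.70: %.1f mg/dL" % estimate_glucose(0.70))
```

In practice a device of this kind would combine several readouts (transmittance and refraction) and validate against clinical accuracy grids, which is beyond this sketch.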
A dual-band unidirectional coplanar antenna for 2.4–5-GHz wireless applications
A dual-band unidirectional antenna has been developed. The dual-band antenna consists of a long dipole for the lower frequency band and two short dipoles for the higher frequency band. All dipoles are printed coplanar on a thin substrate. The printed dipole antenna is excited by a microstrip line. The higher-order mode in the higher frequency band has been suppressed, leading to a good unidirectional pattern in both frequency bands. This dual-band unidirectional antenna may find application in base stations and/or access points for 2.4/5-GHz wireless communications.
Effects of an Internet-Based Intervention for HIV Prevention: The Youthnet Trials
Youth use the Internet and computers in unprecedented numbers. We have yet to identify interventions that can reach and retain large numbers of diverse youth online and demonstrate HIV prevention efficacy. We tested a single-session, Internet-based condom promotion intervention for 18- to 24-year-olds in two RCTs: one sample recruited online and one recruited in clinics. All study elements were carried out on the Internet. Using repeated-measures structural equation models, we analyzed change in the proportion of sex acts protected by condoms (PPA) over time. Among sexually active youth in the Internet sample, persons exposed to the intervention had very slight increases in condom norms, and this was the only factor impacting PPA. We saw no intervention effects in the clinic sample. Internet-based interventions need to be more intensive to produce greater effects, and we need to do more to reach high-risk youth online and keep their attention for multiple sessions.
Linux kernel infrastructure for user-level device drivers
Linux 2.5.x now has good support for user-mode device drivers (XFree being the biggest and most obvious example), but there is also support for user-mode input devices and for devices that hang off the parallel port. The motivations for user-mode device drivers are many:
• Ease of development: all the normal user-space tools can be used to write and debug; development is not restricted to C (Java, C++, Perl, etc. can be used); fewer reboot cycles are needed; and there are fewer restrictions with respect to reentrancy, interrupts, and so on.
• Ease of deployment: kernel ↔ user interfaces change much more slowly than in-kernel interfaces; there are no licensing issues; and end-users need not recompile to get module versions right.
• Increased robustness: it is less likely that a buggy driver can cause a panic.
• Increased functionality: some things are just plain easier to do in user space than in the kernel, e.g., networking.
• Increased simplicity: rather than have, say, a generic IDE controller driver that has to understand the quirks of many different kinds of controllers and drivers, you can afford to pick at run time the controller you really need.
There are, however, some drawbacks, the main ones being performance and security. Three recent developments have made it possible to implement an infrastructure for user-level device drivers that perform almost as well as (in some cases better than) in-kernel device drivers:
1. the new pthreads library (and corresponding kernel changes: futexes, faster clone and exit, etc.);
2. fast system call support; and
3. IOMMUs.
Now that many high-end machines have an IOMMU, it becomes possible, at least in theory, to provide secure access to DMA to user processes. Fast system calls allow the kernel ↔ user crossing to be extremely cheap, making user-process interrupt handling feasible. And fast context switching and IPC for POSIX threads mean that multi-threaded device drivers can have the kind of performance that until recently was only available in-kernel. A minimal user-space interrupt-handling sketch is given after this abstract. (This work was funded by HP, NICTA, the ARC, and the University of NSW through the Gelato programme, http://www.gelato.unsw.edu.au.)
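For a concrete, present-day illustration of the user-space driver idea (not the infrastructure described in this paper), Linux's mainline uio framework delivers interrupts through reads on /dev/uioN and exposes device memory regions via mmap; the sketch below assumes a uio device is already bound at /dev/uio0 and that its first memory region holds readable registers.

```python
# Minimal user-space interrupt handling via the Linux uio framework.
import mmap, os, struct

fd = os.open("/dev/uio0", os.O_RDWR)

# Map the device's first memory region (registers/BAR); its size is published in sysfs.
size = int(open("/sys/class/uio/uio0/maps/map0/size").read(), 16)
regs = mmap.mmap(fd, size, offset=0)

for _ in range(5):
    buf = os.read(fd, 4)                            # blocks until the next interrupt
    (irq_count,) = struct.unpack("I", buf)          # total interrupt count so far
    status = struct.unpack_from("<I", regs, 0)[0]   # read a 32-bit register at offset 0
    # Note: some uio drivers (e.g. uio_pci_generic) also require writing a 4-byte
    # value back to the fd to re-enable the interrupt before the next read.
    print("interrupt #%d, reg0=0x%08x" % (irq_count, status))
```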