title | abstract |
---|---|
The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification | A number of recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large dataset can be adopted as a universal image descriptor, and that doing so leads to impressive performance at a range of image classification tasks. Most of these studies, if not all, adopt activations of the fully-connected layer of a DCNN as the image or region representation, and it is believed that convolutional layer activations are less discriminative. This paper, however, advocates that if used appropriately, convolutional layer activations constitute a powerful image representation. This is achieved by adopting a new technique proposed in this paper called cross-convolutional-layer pooling. More specifically, it extracts subarrays of feature maps of one convolutional layer as local features, and pools the extracted features with the guidance of the feature maps of the successive convolutional layer. Compared with existing methods that apply DCNNs in a similar local-feature setting, the proposed method avoids the input image style mismatch issue that is usually encountered when applying fully-connected layer activations to describe local regions. Also, the proposed method is easier to implement since it is codebook-free and does not have any tuning parameters. By applying our method to four popular visual classification tasks, it is demonstrated that the proposed method can achieve comparable, or in some cases significantly better, performance than existing fully-connected-layer-based image representations. |
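The pooling rule described above reduces to a weighted sum of local features, with each feature map of the successive layer supplying one set of spatial weights. A minimal NumPy sketch under simplifying assumptions (the two layers are taken as spatially aligned, and each position's channel vector serves as the local feature, whereas the paper extracts larger subarrays):

```python
import numpy as np

def cross_layer_pooling(feat_t, feat_t1):
    """Pool local features of conv layer t, guided by layer t+1 feature maps.

    feat_t:  (H, W, C_t)  activations of convolutional layer t
    feat_t1: (H, W, C_t1) activations of layer t+1 (assumed spatially aligned)
    Returns a (C_t1 * C_t,) image representation.
    """
    H, W, C_t = feat_t.shape
    X = feat_t.reshape(H * W, C_t)    # one local feature per spatial position
    G = feat_t1.reshape(H * W, -1)    # each next-layer map = pooling weights
    pooled = G.T @ X                  # (C_t1, C_t) weighted sum-pooling
    return pooled.ravel()
```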
Bearingless slice motor concepts without permanent magnets in the rotor | Bearingless permanent magnet motors are very compact and favorable for high speed designs. Especially the bearingless slice motor (using a permanent magnet excited rotor disc) features a mechanically simple and very cost effective fully magnetically levitated system by stabilizing some degrees of freedom by permanent magnetic reluctance forces. This work focuses on constructional possibilities to apply the slice rotor principle with its passive stabilization without any permanent magnets in the rotor part. As a matter of fact, the permanent magnets are necessary and are therefore located in the stator. Possible stator compositions are outlined and separated into homopolar and heteropolar permanent magnet air gap flux types. The definition of performance parameters allows evaluating the operational behavior of the two considered bearingless reluctance slice motor types (one with homopolar and one with heteropolar air gap flux distribution). The composition, optimization and construction of a prototype are outlined in the last section. Measurements are conducted to show the proper functionality of the bearingless reluctance slice motor prototype. |
Sync kit: a persistent client-side database caching toolkit for data intensive websites | We introduce a client-server toolkit called Sync Kit that demonstrates how client-side database storage can improve the performance of data intensive websites. Sync Kit is designed to make use of the embedded relational database defined in the upcoming HTML5 standard to offload some data storage and processing from a web server onto the web browsers to which it serves content. Our toolkit provides various strategies for synchronizing relational database tables between the browser and the web server, along with a client-side template library so that portions of web applications may be executed client-side. Unlike prior work in this area, Sync Kit persists both templates and data in the browser across web sessions, increasing the number of concurrent connections a server can handle by up to a factor of four versus that of a traditional server-only web stack and a factor of three versus a recent template caching approach. |
Prognosis of negative adenosine stress magnetic resonance in patients presenting to an emergency department with chest pain. | OBJECTIVES
This study was designed to determine the diagnostic value of adenosine cardiac magnetic resonance (CMR) in troponin-negative patients with chest pain.
BACKGROUND
We hypothesized that adenosine CMR could determine which troponin-negative patients with chest pain in an emergency department have coronary artery disease (CAD) or future adverse cardiac events.
METHODS
Adenosine stress CMR was performed on 135 patients who presented to the emergency department with chest pain and had acute myocardial infarction (MI) excluded by troponin-I. The main study outcome was detecting any evidence of significant CAD. Patients were contacted at one year to determine the incidence of significant CAD defined as coronary artery stenosis >50% on angiography, abnormal correlative stress test, new MI, or death.
RESULTS
Adenosine perfusion abnormalities had 100% sensitivity and 93% specificity as the single most accurate component of the CMR examination. Both cardiac risk factors and CMR were significant in Kaplan-Meier analysis (log-rank test, p = 0.0006 and p < 0.0001, respectively). However, an abnormal CMR added significant prognostic value in predicting future diagnosis of CAD, MI, or death over clinical risk factors. In receiver-operating characteristic (ROC) curve analysis, adenosine CMR was a more accurate predictor than cardiac risk factors (p < 0.002).
CONCLUSIONS
In patients with chest pain who had MI excluded by troponin-I and non-diagnostic electrocardiograms, an adenosine CMR examination predicted with high sensitivity and specificity which patients had significant CAD during one-year follow-up. Furthermore, no patients with a normal adenosine CMR study had a subsequent diagnosis of CAD or an adverse outcome. |
THE THIRD SOUTHERN AFRICAN STUDENTS’ PSYCHOLOGY CONFERENCE JOHANNESBURG, 24 TO 28 JUNE 2013 | I love learning, whether it is through a lecture, practical application or a book. Not only that, I love the opportunity experiential learning brings to the individual. Furthermore, when it is incorporated in a learning programme, the reflective component of experiential learning provides a heightened sense of being part of something bigger than self. It not only enriches the individual with a deeper knowledge of one's self and capabilities, but also offers an opportunity to peer into this emerging new professional identity raring to go and contribute! That's the essence of what I came with to the Students' Conference. Although it was my first students' conference to attend, I had a sense of 'this is bigger than me, this is a special moment in time' euphoric feeling. No other experience could have captured this 'awesome' feeling for me than the award-winning WITS choir. |
A general approach to the synthesis of gold-metal sulfide core-shell and heterostructures. | Cores and effect: Water-dispersible core-shell structures and heterostructures incorporating gold nanocrystals of different shapes (polyhedra, cubes, and rods) and a variety of transition metal sulfide semiconductors (ZnS, CdS, NiS, Ag(2)S, and CuS) are synthesized using cetyltrimethylammonium bromide-encapsulated gold nanocrystals and metal thiobenzoates as starting materials. |
The Effect of Stent Porosity and Strut Shape on Saccular Aneurysm and its Numerical Analysis with Lattice Boltzmann Method | The analysis of the flow pattern in cerebral aneurysms and the effect of stent strut shapes are presented in this article. The treatment of cerebral aneurysms with a porous stent has recently been proposed as a minimally invasive way to prevent rupture and favor coagulation mechanisms inside the aneurysm. The efficiency of a stent is related to several parameters, including porosity and stent strut shape. The goal of this article is to study the effect of the stent strut shape and porosity on the hemodynamic properties of the flow inside an aneurysm using numerical analysis. In this study, we use the concept of flow reduction to characterize stent efficiency, and we use the lattice Boltzmann method (LBM) to simulate a non-Newtonian blood flow. To resolve the characteristics of a highly complex flow, we use an extrapolation method for the wall and stent boundaries. To ease code development and facilitate the incorporation of new physics, a scientific programming strategy based on object-oriented concepts is developed. Reduced velocity, smaller average vorticity magnitude, smaller average shear rate, and increased viscosity are observed when the proposed stent shapes and porosities are used. The rectangular stent is observed to be optimal, decreasing the velocity magnitude in the aneurysm sac by 89.25% in the 2D model and 53.92% in the 3D model. Our results show the role of porosity and stent strut shape and help us understand the characteristics of stent strut design. |
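The LBM core referenced above is compact enough to show directly. Below is a standard D2Q9 BGK collision-and-streaming step in NumPy with periodic boundaries only; the paper's non-Newtonian rheology and extrapolation wall/stent boundary treatment are not reproduced here:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)                   # c_q . u per node
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collision + streaming update of populations f, shape (X, Y, 9)."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau               # BGK collision
    for q in range(9):                                    # periodic streaming
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
    return f
```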
Colonoscopy Withdrawal Time and Risk of Neoplasia at 5 Years: Results From VA Cooperative Studies Program 380 | OBJECTIVES: Withdrawal time (WT) has been proposed as a quality indicator for colonoscopy based on evidence that it is directly related to the rate of adenoma detection. Our objective was to test the hypothesis that baseline WT is inversely associated with the risk of finding neoplasia at interval colonoscopy. METHODS: In all, 3,121 subjects, aged 50–75 years, had screening colonoscopy between 1994 and 1997 at 13 Veteran Affairs Medical Centers. In all, 1,193 subjects returned by protocol for surveillance within 5.5 years. In the 304 patients without polyps at baseline, we evaluated the contribution of baseline WT to their risk of interval neoplasia using bivariate and logistic regression analysis. We also examined the correlation between mean WT, baseline adenoma detection rate, and interval neoplasia rate at the medical-center level. RESULTS: The average WT at the baseline exam in subjects with neoplasia on follow-up was 15.3 min as compared with 13.2 min in subjects without neoplasia (P=0.18). In a logistic regression model, WT was not associated with the risk of interval neoplasia (P=0.07). At the medical-center level, mean WT was not correlated with the probability of finding interval neoplasia (P=0.61) but was positively correlated with adenoma detection rate at baseline (P=0.03). CONCLUSIONS: In this study with a mean baseline WT >12 min, there was no detectable association between WT and risk of future neoplasia. The medical center–level WT was positively correlated with adenoma detection. Therefore, above a certain threshold, WT may no longer be an adequate quality measure for screening colonoscopy. |
Virtual reality using games for improving physical functioning in older adults: a systematic review | The use of virtual reality through exergames or active video games, i.e. a new form of interactive gaming, as a complementary tool in rehabilitation has been a frequent focus in research and clinical practice in the last few years. However, evidence of their effectiveness is scarce in the older population. This review aims to provide a summary of the effects of exergames on physical functioning in older adults. A search for randomized controlled trials was performed in the databases EMBASE, MEDLINE, PsycINFO, the Cochrane database, PEDro and ISI Web of Knowledge. Results from the included studies were analyzed through a critical review, and methodological quality was assessed by the PEDro scale. Thirteen studies were included in the review. The most common apparatus for exergame interventions was the Nintendo Wii gaming console (eight studies), followed by computer games and dance video games with pad (two studies each), and one study used the Balance Rehabilitation Unit. The Timed Up and Go was the most frequently used instrument to assess physical functioning (seven studies). According to the PEDro scale, most of the studies presented methodological problems, with a high proportion of scores below 5 points (eight studies). The exergame protocols and their duration varied widely, and the benefits for physical function in older people remain inconclusive. However, a consensus between studies is the positive motivational aspect that the use of exergames provides. Further studies are needed in order to achieve better methodological quality, external validity and stronger scientific evidence. |
What is the common thread of creativity? Its dialectical relation to intelligence and wisdom. | Creativity refers to the potential to produce novel ideas that are task-appropriate and high in quality. Creativity in a societal context is best understood in terms of a dialectical relation to intelligence and wisdom. In particular, intelligence forms the thesis of such a dialectic. Intelligence largely is used to advance existing societal agendas. Creativity forms the antithesis of the dialectic, questioning and often opposing societal agendas, as well as proposing new ones. Wisdom forms the synthesis of the dialectic, balancing the old with the new. Wise people recognize the need to balance intelligence with creativity to achieve both stability and change within a societal context. |
Design and Implementation of Low-Area and Low-Power AES Encryption Hardware Core | The Advanced Encryption Standard (AES) algorithm has become the default choice for various security services in numerous applications. In this paper we present an AES encryption hardware core suited for devices in which low cost and low power consumption are desired. The core consists of a novel 8-bit architecture and supports encryption with 128-bit keys. In a 0.13 μm CMOS technology our area-optimized implementation consumes 3.1 kgates. The throughput at the maximum clock frequency of 153 MHz is 121 Mbps, also in feedback encryption modes. Compared to previous 8-bit implementations, we achieve significantly higher throughput at comparable area. The energy consumption per processed block is also lower. |
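An 8-bit AES datapath processes the state one byte at a time, so the round arithmetic reduces to byte-level GF(2^8) operations. A generic Python sketch of the two byte primitives behind MixColumns, illustrating byte-serial AES arithmetic in general rather than this paper's circuit:

```python
def xtime(b):
    """Multiply a byte by 2 in GF(2^8) modulo the AES polynomial 0x11B."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def mix_single_column(col):
    """MixColumns applied to one 4-byte column, computed byte by byte."""
    t = col[0] ^ col[1] ^ col[2] ^ col[3]
    return [col[i] ^ t ^ xtime(col[i] ^ col[(i + 1) % 4]) for i in range(4)]

# FIPS-197 test column: [0xdb, 0x13, 0x53, 0x45] -> [0x8e, 0x4d, 0xa1, 0xbc]
assert mix_single_column([0xdb, 0x13, 0x53, 0x45]) == [0x8e, 0x4d, 0xa1, 0xbc]
```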
Device-Free Presence Detection and Localization With SVM and CSI Fingerprinting | Presence detection and localization are of importance to a variety of applications. Most previous approaches require the objects to carry electronic devices, while on many occasions device-free presence detection and localization are needed. This paper proposes a device-free presence detection and localization algorithm based on WiFi channel state information (CSI) and support vector machines (SVM). In the area of interest covered with WiFi, human movements may cause observable alteration of WiFi signals. By analyzing the CSI fingerprint patterns, the proposed algorithm is able to detect human presence through SVM classification. By establishing the nonlinear relationship between CSI fingerprints and locations through SVM regression, the proposed algorithm is able to estimate the object locations according to the measured CSI fingerprints. To cope with the noisy WiFi channels, the proposed algorithm applies density-based spatial clustering of applications with noise to reduce the noise in CSI fingerprints, and applies principal component analysis to extract the most contributing features and reduce the dimensionality of CSI fingerprints. Evaluations in two typical scenarios achieved presence detection precision of over 97% and localization accuracies of 1.22 and 1.39 m, respectively. |
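A compact end-to-end sketch of the pipeline this abstract describes — DBSCAN denoising, PCA feature reduction, SVM classification for presence, SVM regression for location — using scikit-learn on synthetic stand-in data; all sizes and hyperparameters below are hypothetical, not the paper's values:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

# Synthetic stand-in: rows are CSI fingerprints (e.g. subcarrier amplitudes).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 90))            # 200 fingerprints, 90 CSI features
present = rng.integers(0, 2, size=200)    # 0 = empty area, 1 = human present
xy = rng.uniform(0, 10, size=(200, 2))    # ground-truth locations in metres

# Denoising: drop fingerprints that DBSCAN labels as outliers (-1).
keep = DBSCAN(eps=15.0, min_samples=5).fit_predict(X) != -1
X, present, xy = X[keep], present[keep], xy[keep]

# Presence detection: PCA to extract contributing features, then SVM.
detector = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
detector.fit(X, present)

# Localization: SVM regression from reduced fingerprints to coordinates.
reducer = make_pipeline(StandardScaler(), PCA(n_components=10)).fit(X)
Z = reducer.transform(X)
loc = [SVR().fit(Z, xy[:, d]) for d in range(2)]   # one regressor per axis
```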
Haploinsufficiency of the gene Quaking (QKI) is associated with the 6q terminal deletion syndrome. | Subtelomeric rearrangements involving chromosome 6q have been reported in a limited number of studies. Although the sizes are very variable, ranging from cytogenetically visible deletions to small submicroscopic deletions, a common recognizable phenotype associated with a 6q deletion could be distilled. The main characteristics are intellectual disabilities, hypotonia, seizures, brain anomalies, and specific dysmorphic features including short neck, broad nose with bulbous tip, large and low-set ears and downturned corners of the mouth. In this article we report on a female patient, carrying a reciprocal balanced translocation t(5;6)(q23.1;q26), presenting with a clinical phenotype highly similar to the common 6q- phenotype. Breakpoint analysis using array painting revealed that the Quaking (QKI) gene that maps in 6q26 is disrupted, suggesting that haploinsufficiency of this gene plays a role in the 6q- clinical phenotype. |
Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi? | BACKGROUND
Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper-based or electronic cohort reports.
METHODS
Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April-June] and quarter three [July-September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data.
RESULTS
Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among paper-based facilities, an average of 13.4 hours was needed to calculate patient retention for cohort reporting using patient registers as compared to 2.25 hours using pharmacy stock cards.
CONCLUSION
The numbers of patients retained on ART as estimated using pharmacy stock records were largely similar to estimates based on either paper registers or electronic data systems. Furthermore, less time and staff effort were needed to estimate ART patient retention using pharmacy stock records versus paper-based registers. Reinforcing ARV stock management may improve the precision of estimates. |
Some Efficient Solutions to the Affine Scheduling Problem. Part II: Multidimensional Time | This paper extends the algorithms which were developed in Part I to cases in which there is no affine schedule, i.e. to problems whose parallel complexity is polynomial but not linear. The natural generalization is to multidimensional schedules with lexicographic ordering as temporal succession. Multidimensional affine schedules are, in a sense, equivalent to polynomial schedules, and are much easier to handle automatically. Furthermore, there is a strong connection between multidimensional schedules and loop nests, which allows one to prove that a static control program always has a multidimensional schedule. Roughly, a larger dimension indicates less parallelism. In the algorithm which is presented here, this dimension is computed dynamically, and is just sufficient for scheduling the source program. The algorithm lends itself to a "divide and conquer" strategy. The paper gives some experimental evidence for the applicability, performances and limitations of the algorithm. |
Reducing the time period of steady state does not affect the accuracy of energy expenditure measurements by indirect calorimetry. | Achievement of steady state during indirect calorimetry measurements of resting energy expenditure (REE) is necessary to reduce error and ensure accuracy in the measurement. Steady state is often defined as 5 consecutive min (5-min SS) during which oxygen consumption and carbon dioxide production vary by +/-10%. These criteria, however, are stringent and often difficult to satisfy. This study aimed to assess whether reducing the time period for steady state (4-min SS or 3-min SS) produced measurements of REE that were significantly different from 5-min SS. REE was measured with the use of open-circuit indirect calorimetry in 39 subjects, of whom only 21 (54%) met the 5-min SS criteria. In these 21 subjects, median biases in REE between 5-min SS and 4-min SS and between 5-min SS and 3-min SS were 0.1 and 0.01%, respectively. For individuals, 4-min SS measured REE within a clinically acceptable range of +/-2% of 5-min SS, whereas 3-min SS measured REE within a range of -2% to +3% of 5-min SS. Harris-Benedict prediction equations estimated REE for individuals within +/-20-30% of 5-min SS. Reducing the time period of steady state to 4 min produced measurements of REE for individuals that were within clinically acceptable, predetermined limits. The limits of agreement for 3-min SS fell outside the predefined limits of +/-2%; however, both 4-min SS and 3-min SS criteria greatly increased the proportion of subjects who satisfied steady state within smaller limits than would be achieved if relying on prediction equations. |
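The steady-state criterion described is straightforward to operationalize. A small sketch that scans for qualifying windows, assuming one averaged sample per minute and interpreting "vary by +/-10%" as staying within 10% of the window mean (other conventions, e.g. relative to the first minute, are also used):

```python
import numpy as np

def steady_state_starts(vo2, vco2, window=5, tol=0.10):
    """Return start indices of `window`-minute spans in which both VO2 and
    VCO2 stay within +/- tol of their respective window means."""
    vo2, vco2 = np.asarray(vo2, float), np.asarray(vco2, float)
    starts = []
    for s in range(len(vo2) - window + 1):
        ok = all(
            np.max(np.abs(x[s:s + window] - x[s:s + window].mean()))
            <= tol * x[s:s + window].mean()
            for x in (vo2, vco2)
        )
        if ok:
            starts.append(s)
    return starts

# The study's shortened criteria correspond to window=4 or window=3.
```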
Illumination robust interest point detection | |
Direct volume visualization of three-dimensional vector fields | Current techniques for direct volume visualization offer only the ability to examine scalar fields. However, most scientific explorations require the examination of vector and possibly tensor fields as well as numerous scalar fields. This paper describes an algorithm to directly render three-dimensional scalar and vector fields. The algorithm uses a combination of sampling and splatting techniques that are extended to integrate the display of vector field data within the image. Additional
One-Class Collaborative Filtering | Many applications of collaborative filtering (CF), such as news item recommendation and bookmark recommendation, are most naturally thought of as one-class collaborative filtering (OCCF) problems. In these problems, the training data usually consist simply of binary data reflecting a user's action or inaction, such as page visitation in the case of news item recommendation or webpage bookmarking in the bookmarking scenario. Usually this kind of data are extremely sparse (a small fraction are positive examples), therefore ambiguity arises in the interpretation of the non-positive examples. Negative examples and unlabeled positive examples are mixed together and we are typically unable to distinguish them. For example, we cannot really attribute a user not bookmarking a page to a lack of interest or lack of awareness of the page. Previous research addressing this one-class problem only considered it as a classification task. In this paper, we consider the one-class problem under the CF setting. We propose two frameworks to tackle OCCF. One is based on weighted low rank approximation; the other is based on negative example sampling. The experimental results show that our approaches significantly outperform the baselines. |
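One of the two frameworks mentioned, weighted low-rank approximation, can be sketched as an alternating least squares in which observed positives get full weight and the ambiguous non-positive entries get a small confidence weight. The uniform down-weighting below is one simple choice, not necessarily the paper's exact scheme:

```python
import numpy as np

def weighted_als(R, k=8, reg=0.1, neg_weight=0.05, iters=15, seed=0):
    """Weighted low-rank factorization R ~ U @ V.T for one-class CF.

    R: (m, n) binary matrix; 1 = observed action, 0 = unknown (a mix of
       negatives and unlabeled positives), down-weighted via neg_weight.
    """
    W = np.where(R > 0, 1.0, neg_weight)          # confidence weights
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    I = reg * np.eye(k)
    for _ in range(iters):
        for i in range(m):                        # weighted ridge solve per user
            Wi = W[i][:, None]
            U[i] = np.linalg.solve(V.T @ (Wi * V) + I, V.T @ (W[i] * R[i]))
        for j in range(n):                        # weighted ridge solve per item
            Wj = W[:, j][:, None]
            V[j] = np.linalg.solve(U.T @ (Wj * U) + I, U.T @ (W[:, j] * R[:, j]))
    return U, V                                   # score unseen pairs via U @ V.T
```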
Increased brain natriuretic peptide and atrial natriuretic peptide plasma concentrations in dialysis-dependent chronic renal failure and in patients with elevated left ventricular filling pressure | Brain natriuretic peptide (BNP) and atrial natriuretic peptide (ANP) plasma concentrations were measured in patients with dialysis-dependent chronic renal failure and in patients with coronary artery disease exhibiting normal or elevated left ventricular end-diastolic pressure (LVEDP) (n = 30 each). Blood samples were obtained from the arterial line of the arteriovenous shunt before, 2 h after the beginning of, and at the end of hemodialysis in patients with chronic renal failure. In patients with coronary artery disease arterial blood samples were collected during cardiac catheterization. BNP and ANP concentrations were determined by radioimmunoassay after Sep Pak C18 extraction. BNP and ANP concentrations decreased significantly (P < 0.001) during hemodialysis (BNP: 192.1 ± 24.9, 178.6 ± 23.0, 167.2 ± 21.8 pg/ml; ANP: 240.2 ± 28.7, 166.7 ± 21.3, 133.0 ± 15.5 pg/ml). The decrease in BNP plasma concentrations, however, was less marked than that in ANP plasma levels (BNP 13.5 ± 1.8%, ANP 40.2 ± 3.5%; P < 0.001). Plasma BNP and ANP concentrations were 10.7 ± 1.0 and 60.3 ± 4.0 pg/ml in patients with normal LVEDP and 31.7 ± 3.6 and 118.3 ± 9.4 pg/ml in patients with elevated LVEDP. These data demonstrate that BNP and ANP levels are strongly elevated in patients with dialysis-dependent chronic renal failure compared to patients with normal LVEDP (BNP 15.6-fold, ANP 2.2-fold, after hemodialysis; P < 0.001) or elevated LVEDP (BNP 6.1-fold, ANP 2.0-fold, before hemodialysis; P < 0.001), and that the elevation in BNP concentrations was more pronounced than that in ANP plasma concentrations. The present results provide support that factors other than volume overload, for example, decreased renal clearance, are also involved in the elevation in BNP and ANP plasma levels in chronic renal failure. The stronger elevation in BNP concentrations in patients with chronic renal failure and in patients with elevated LVEDP and the less pronounced decrease during hemodialysis suggest a different regulation of BNP and ANP plasma concentrations. |
Investigation of potential non-HLA rheumatoid arthritis susceptibility loci in a European cohort increases the evidence for nine markers | BACKGROUND
Genetic factors have a substantial role in determining development of rheumatoid arthritis (RA), and are likely to account for 50-60% of disease susceptibility. Genome-wide association studies have identified non-human-leucocyte-antigen RA susceptibility loci that are associated with RA at low-to-moderate risk.
OBJECTIVES
To investigate recently identified RA susceptibility markers using cohorts from six European countries, and perform a meta-analysis including previously published results.
METHODS
3311 DNA samples were collected from patients from six countries (UK, Germany, France, Greece, Sweden and Denmark). Genotype data or DNA samples for 3709 controls were collected from four countries (not Sweden or Denmark). Eighteen single nucleotide polymorphisms (SNPs) were genotyped using Sequenom MassArray technology. Samples with a >95% success rate and only those SNPs with a genotype success rate of >95% were included in the analysis. Scandinavian patient data were pooled and previously published Swedish control data were accessed as a comparison group. Meta-analysis was used to combine results from this study with all previously published data.
RESULTS
After quality control, 3209 patients and 3692 controls were included in the study. Eight markers (ie, rs1160542 (AFF3), rs1678542 (KIF5A), rs2476601 (PTPN22), rs3087243 (CTLA4), rs4810485 (CD40), rs5029937 (6q23), rs10760130 (TRAF1/C5) and rs7574865 (STAT4)) were significantly associated with RA by meta-analysis. All 18 markers were associated with RA when previously published studies were incorporated in the analysis. Data from this study increased the significance of the association with RA for nine markers.
CONCLUSIONS
In a large European RA cohort further evidence for the association of 18 markers with RA development has been obtained. |
Fractal video compression in OpenCL: An evaluation of CPUs, GPUs, and FPGAs as acceleration platforms | Fractal compression is an efficient technique for image and video encoding that uses the concept of self-referential codes. Although offering compression quality that matches or exceeds traditional techniques with a simpler and faster decoding process, fractal techniques have not gained widespread acceptance due to the computationally intensive nature of their encoding algorithm. In this paper, we present a real-time implementation of a fractal compression algorithm in OpenCL [1]. We show how the algorithm can be efficiently implemented in OpenCL and optimized for multi-core CPUs, GPUs, and FPGAs. We demonstrate that the core computation implemented on the FPGA through OpenCL is 3× faster than a high-end GPU and 114× faster than a multi-core CPU, with significant power advantages. We also compare to a hand-coded FPGA implementation to showcase the effectiveness of an OpenCL-to-FPGA compilation tool. |
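The encoding bottleneck this abstract refers to is the range-domain block search: for every range block, fit an affine (scale, offset) map from each candidate domain block and keep the best. A plain-Python sketch of that core computation — the generic algorithm, not the paper's OpenCL kernels:

```python
import numpy as np

def best_domain_match(range_blk, domain_blks):
    """Return (error, index, scale, offset) of the best affine fit
    range ~ scale * domain + offset over all candidate domain blocks."""
    r = range_blk.ravel().astype(float)
    best = None
    for idx, blk in enumerate(domain_blks):
        d = blk.ravel().astype(float)
        A = np.vstack([d, np.ones_like(d)]).T        # least-squares design matrix
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = float(np.sum((s * d + o - r) ** 2))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best
```

Accelerators help because this brute-force search is independent across range blocks, so it parallelizes naturally.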
AUGMENTED REALITY BROWSERS: A PROPOSAL FOR ARCHITECTURAL STANDARDIZATION | The evolution of smartphones and operating systems and the growth of telecommunications allow the use of Augmented Reality for exploring geo-referenced information, complementing the user's real environment with various types of content displayed over the mobile camera view through applications called augmented reality browsers. Although this type of application is under development and in growing use by society, the technology, and especially its software architecture, lacks any kind of standardization. This work presents concepts of augmented reality browsers for mobile devices and shows their main aspects and applications. In addition, the specific features of existing architectures are discussed and compared, and a new architecture, whose most relevant feature is the interoperability of applications across mobile device platforms, is presented. The objective of this work is to develop an architectural framework for the development of these browsers. |
Fibronectin regulates Wnt7a signaling and satellite cell expansion. | The influence of the extracellular matrix (ECM) within the stem cell niche remains poorly understood. We found that Syndecan-4 (Sdc4) and Frizzled-7 (Fzd7) form a coreceptor complex in satellite cells and that binding of the ECM glycoprotein Fibronectin (FN) to Sdc4 stimulates the ability of Wnt7a to induce the symmetric expansion of satellite stem cells. Newly activated satellite cells dynamically remodel their niche via transient high-level expression of FN. Knockdown of FN in prospectively isolated satellite cells severely impaired their ability to repopulate the satellite cell niche. Conversely, in vivo overexpression of FN with Wnt7a dramatically stimulated the expansion of satellite stem cells in regenerating muscle. Therefore, activating satellite cells remodel their niche through autologous expression of FN that provides feedback to stimulate Wnt7a signaling through the Fzd7/Sdc4 coreceptor complex. Thus, FN and Wnt7a together regulate the homeostatic levels of satellite stem cells and satellite myogenic cells during regenerative myogenesis. |
Coordination Theory: A Ten-Year Retrospective | Since the initial publication in 1994, Coordination Theory (Malone and Crowston, 1994) has been referenced in nearly 300 journal articles, book chapters, conference papers and theses. This chapter will analyze the contribution of this body of research to determine how Coordination Theory has been used for user task analysis and modelling for HCI. Issues that will be addressed include: 1) how the theory has been applied; 2) factors that led to the success of the theory; and 3) identification of areas needing further research. |
High-Speed (MHz) Series Resonant Converter (SRC) Using Multilayered Coreless Printed Circuit Board (PCB) Step-Down Power Transformer | In this paper, the design and analysis of an isolated low-profile series resonant converter (SRC) using a multilayered coreless printed circuit board (PCB) power transformer are presented. To meet the stringent height requirements of switch-mode power supplies, a multilayered coreless PCB power transformer with an approximately 4:1 turns ratio was designed in a four-layered PCB laminate that can be operated at megahertz switching frequencies. The outermost radius of the transformer is 10 mm with an achieved power density of 16 W/cm2. The energy efficiency of the power transformer is found to be in the range of 87-96% at output power levels of 0.1-50 W operated at a frequency of 2.6 MHz. This designed step-down transformer was utilized in the SRC and evaluated. The supply voltage of the converter is varied from 60-120 VDC with a nominal input voltage of 90 V, and the converter has been tested up to a power level of 34.5 W. The energy efficiency of the converter under zero-voltage switching conditions is found to be in the range of 80-86.5% over the switching frequency range of 2.4-2.75 MHz. By using a constant off-time frequency modulation technique, the converter was regulated to 20 VDC for different load conditions. The thermal profile with converter loss at nominal voltage is presented. |
Social Preferences: Some Simple Tests and a New Model | Departures from pure self-interest in economic experiments have recently inspired models of "social preferences". We conduct experiments on simple two-person and three-person games with binary choices that test these theories more directly than the array of games conventionally considered. Our experiments show strong support for the prevalence of "quasi-maximin" preferences: People sacrifice to increase the payoffs for all recipients, but especially for the lowest-payoff recipients. People are also motivated by reciprocity: While people are reluctant to sacrifice to reciprocate good or bad behavior beyond what they would sacrifice for neutral parties, they withdraw willingness to sacrifice to achieve a fair outcome when others are themselves unwilling to sacrifice. Some participants are averse to getting different payoffs than others, but based on our experiments and reinterpretation of previous experiments we argue that behavior that has been presented as "difference aversion" in recent papers is actually a combination of reciprocal and quasi-maximin motivations. We formulate a model in which each player is willing to sacrifice to allocate the quasi-maximin allocation only to those players also believed to be pursuing the quasi-maximin allocation, and may sacrifice to punish unfair players. |
An artificial neural network (p, d, q) model for time series forecasting | Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting accuracy, especially in time series forecasting, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting tasks, especially when higher forecasting accuracy is needed. |
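A common way to realize such an ARIMA-ANN hybrid is to fit ARIMA for the linear component and train the network on lagged observations and ARIMA residuals. The sketch below follows that pattern; the paper's exact network construction differs, and the ARIMA order, lag count and network size here are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

def fit_hybrid(y, order=(1, 1, 1), lags=4):
    """ARIMA captures linear structure; an ANN learns the rest from lagged
    observations y_{t-1..t-lags} and ARIMA residuals e_{t-1..t-lags}."""
    y = np.asarray(y, dtype=float)
    arima = ARIMA(y, order=order).fit()
    e = np.asarray(arima.resid)
    cols = ([y[lags - i - 1:len(y) - i - 1] for i in range(lags)]
            + [e[lags - i - 1:len(e) - i - 1] for i in range(lags)])
    X = np.column_stack(cols)                 # rows aligned with targets y[lags:]
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                       random_state=0).fit(X, y[lags:])
    return arima, ann
```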
Dietary and Policy Priorities for Cardiovascular Disease, Diabetes, and Obesity: A Comprehensive Review. | Suboptimal nutrition is a leading cause of poor health. Nutrition and policy science have advanced rapidly, creating confusion yet also providing powerful opportunities to reduce the adverse health and economic impacts of poor diets. This review considers the history, new evidence, controversies, and corresponding lessons for modern dietary and policy priorities for cardiovascular diseases, obesity, and diabetes mellitus. Major identified themes include the importance of evaluating the full diversity of diet-related risk pathways, not only blood lipids or obesity; focusing on foods and overall diet patterns, rather than single isolated nutrients; recognizing the complex influences of different foods on long-term weight regulation, rather than simply counting calories; and characterizing and implementing evidence-based strategies, including policy approaches, for lifestyle change. Evidence-informed dietary priorities include increased fruits, nonstarchy vegetables, nuts, legumes, fish, vegetable oils, yogurt, and minimally processed whole grains; and fewer red meats, processed (eg, sodium-preserved) meats, and foods rich in refined grains, starch, added sugars, salt, and trans fat. More investigation is needed on the cardiometabolic effects of phenolics, dairy fat, probiotics, fermentation, coffee, tea, cocoa, eggs, specific vegetable and tropical oils, vitamin D, individual fatty acids, and diet-microbiome interactions. Little evidence to date supports the cardiometabolic relevance of other popular priorities: eg, local, organic, grass-fed, farmed/wild, or non-genetically modified. Evidence-based personalized nutrition appears to depend more on nongenetic characteristics (eg, physical activity, abdominal adiposity, gender, socioeconomic status, culture) than genetic factors. Food choices must be strongly supported by clinical behavior change efforts, health systems reforms, novel technologies, and robust policy strategies targeting economic incentives, schools and workplaces, neighborhood environments, and the food system. Scientific advances provide crucial new insights on optimal targets and best practices to reduce the burdens of diet-related cardiometabolic diseases. |
A Low-Profile Third-Order Bandpass Frequency Selective Surface | We demonstrate a new class of low-profile frequency selective surfaces (FSS) with an overall thickness of λ/24 and a third-order bandpass frequency response. The proposed FSS is a three-layer structure composed of three metal layers, separated by two electrically thin dielectric substrates. Each layer is a two-dimensional periodic structure with sub-wavelength unit cell dimensions and periodicity. The unit cell of the proposed FSS is composed of a combination of resonant and non-resonant elements. It is shown that this arrangement acts as a spatial bandpass filter with a third-order bandpass response. However, unlike traditional third-order bandpass FSSs, which are usually obtained by cascading three identical first-order bandpass FSSs a quarter wavelength apart from one another and have thicknesses on the order of λ/2, the proposed structure has an extremely low profile and an overall thickness of about λ/24, making it an attractive choice for conformal FSS applications. As a result of the miniaturized unit cells and the extremely small overall thickness of the structure, the proposed FSS has a reduced sensitivity to the angle of incidence of the EM wave compared to traditional third-order frequency selective surfaces. The principles of operation along with guidelines for the design of the proposed FSS are presented in this paper. A prototype of the proposed third-order bandpass FSS is also fabricated and tested using a free space measurement system at C band. |
Distributed Time Series Analytics | In recent years time series data has become ubiquitous thanks to affordable sensors and advances in embedded technology. Large amounts of time-series data are continuously produced in a wide spectrum of applications, such as sensor networks, medical monitoring, finance, IoT applications, news feeds, social networks, data centre monitoring and so on. Availability of such large scale time series data highlights the importance of scalable data management, efficient querying and analysis. Meanwhile, in the online setting time series carry invaluable information and knowledge about the real-time status of involved entities or monitored phenomena, which calls for online time series data mining for serving timely decision making or event detection. In addition, in many scenarios data generated from sensors (environmental, RFID, GPS, etc.) are noisy and non-stationary in nature. In this thesis we aim to address these important issues pertaining to scalable and distributed analytics techniques for massive time series data. Concretely, this thesis is centered around the following three topics: Managing and Querying Model-View Time Series. As the number of sensors that pervade our lives significantly increases (e.g., environmental sensors, mobile phone sensors, IoT applications, etc.), the efficient management of massive amounts of time series from such sensors is becoming increasingly important. The infinite nature of sensor data poses a serious challenge for query processing even in a cloud infrastructure. Traditional raw sensor data management systems based on relational databases lack the scalability to accommodate large scale sensor data efficiently. Thus, distributed key-value stores in the cloud are becoming a prime tool to manage sensor data. On the other hand, model-view time series management, which stores the data in the form of modeled segments, brings the additional advantages of data compression and value interpolation. However, currently there are no techniques for indexing and/or query optimization of the model-view sensor time series data in the cloud. In Chapter 2, we propose an innovative index for modeled segments in key-value stores, namely KVI-index. KVI-index consists of two interval indices on the time and sensor value dimensions respectively, each of which has an in-memory search tree and a secondary list materialized in the key-value store. We show that the proposed KVI-index enables efficient query processing over modeled segments. Mining Correlations in Streaming Time Series. The dramatic increase in the availability of data streams fuels the development of many |
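The "model-view" idea — storing a series as fitted segment models and answering value queries by interpolation — can be illustrated with a toy in-memory store. This sketch shows the concept only; the KVI-index itself couples interval trees with lists materialized in a key-value store:

```python
import bisect

class ModelViewStore:
    """Toy model-view storage: a series kept as linear segments
    (t_start, t_end, slope, intercept), sorted by start time."""
    def __init__(self):
        self.starts, self.segments = [], []

    def insert(self, t0, t1, slope, intercept):
        i = bisect.bisect(self.starts, t0)
        self.starts.insert(i, t0)
        self.segments.insert(i, (t0, t1, slope, intercept))

    def value_at(self, t):
        i = bisect.bisect(self.starts, t) - 1    # last segment starting <= t
        if i >= 0:
            t0, t1, a, b = self.segments[i]
            if t0 <= t <= t1:
                return a * t + b                  # interpolate from the model
        return None                               # t falls in a gap
```

Storing a few model parameters per segment instead of every raw sample is what yields the compression and interpolation benefits the thesis mentions.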
Curcumin Therapy in Inflammatory Bowel Disease: A Pilot Study | Curcumin, a natural compound used as a food additive, has been shown to have anti-inflammatory and antioxidant properties in cell culture and animal studies. A pure curcumin preparation was administered in an open label study to five patients with ulcerative proctitis and five with Crohn's disease. All proctitis patients improved, with reductions in concomitant medications in four, and four of five Crohn's disease patients had lowered CDAI scores and sedimentation rates. This encouraging pilot study suggests the need for double-blind placebo-controlled follow-up studies. |
Understanding Participatory Hashtag Practices on Instagram: A Case Study of Weekend Hashtag Project | Instagram, a popular global mobile photo-sharing platform, involves various user interactions centered on posting images accompanied by hashtags. Participatory hashtagging, one of these diverse tagging practices, has great potential to be a communication channel for various organizations and corporations that would like to interact with users on social media. In this paper, we aim to characterize participatory hashtagging behaviors on Instagram by conducting a case study of its representative hashtagging practice, the Weekend Hashtag Project, or #WHP. By conducting a user study using both quantitative and qualitative methods, we analyzed the way Instagram users respond to participation calls and identified factors that motivate users to take part in the project. Based on these findings, we provide design strategies for any interested parties to interact with users on social media. |
Evaluating Evaluation: Assessing Progress in Computational Creativity Research | Computational creativity research has produced many computational systems that are described as creative. A comprehensive literature survey reveals that although such systems are labelled as creative, there is a distinct lack of evaluation of the creativity of creative systems. As a research community, we should adopt a more scientific approach to evaluation of the creativity of our systems if we are to progress in understanding creativity and modelling it computationally. A methodology for creativity evaluation should accommodate different manifestations of creativity but also require a clear, definitive statement of the standards used for evaluation. This paper proposes Evaluation Guidelines, a standard but flexible approach to evaluation of the creativity of computational systems and argues that this approach should be taken up as standard practice in computational creativity research. The approach is outlined and discussed, then illustrated through a comparative evaluation of the creativity of jazz improvisation systems. |
An 820μW 9b 40MS/s Noise-Tolerant Dynamic-SAR ADC in 90nm Digital CMOS | Current trends in analog/mixed-signal design for battery-powered devices demand the adoption of cheap and power-efficient ADCs. SAR architectures have recently been demonstrated to achieve high power efficiency in the moderate-resolution/medium-bandwidth range (Craninckx and Van der Plas, 2007). However, when the comparator determines the overall performance in the first instance, as in most SAR ADCs, comparator thermal noise can limit the maximum achievable resolution. Reductions of more than 1 and 2 ENOB are observed in Craninckx and Van der Plas (2007) and Kuttner (2002), respectively, because of thermal noise, and degradations could be even worse with scaled supply voltages and the extensive use of dynamic regenerative latches without pre-amplification. Unlike mismatch, random noise cannot be compensated by calibration and would finally demand a quadratic increase in power consumption unless alternative circuit techniques are devised. |
Temporal structure of syntactic parsing: early and late event-related brain potential effects. | Event-related brain potentials (ERPs) were recorded from participants listening to or reading sentences that were correct, contained a violation of the required syntactic category, or contained a syntactic-category ambiguity. When sentences were presented auditorily (Experiment 1), there was an early left anterior negativity for syntactic-category violations, but not for syntactic-category ambiguities. Both anomaly types elicited a late centroparietally distributed positivity. When sentences were presented visually word by word (Experiment 2), again an early left anterior negativity was found only for syntactic-category violations, and both types of anomalies elicited a late positivity. The combined data are taken to be consistent with a 2-stage model of parsing, including a 1st stage, during which an initial phrase structure is built and a 2nd stage, during which thematic role assignment and, if necessary, reanalysis takes place. Disruptions to the 1st stage of syntactic parsing appear to be correlated with an early left anterior negativity, whereas disruptions to the 2nd stage might be correlated with a late posterior distributed positivity. |
Optic Ataxia: From Balint’s Syndrome to the Parietal Reach Region | Optic ataxia is a high-order deficit in reaching to visual goals that occurs with posterior parietal cortex (PPC) lesions. It is a component of Balint's syndrome that also includes attentional and gaze disorders. Aspects of optic ataxia are misreaching in the contralesional visual field, difficulty preshaping the hand for grasping, and an inability to correct reaches online. Recent research in nonhuman primates (NHPs) suggests that many aspects of Balint's syndrome and optic ataxia are a result of damage to specific functional modules for reaching, saccades, grasp, attention, and state estimation. The deficits from large lesions in humans are probably composite effects from damage to combinations of these functional modules. Interactions between these modules, either within posterior parietal cortex or downstream within frontal cortex, may account for more complex behaviors such as hand-eye coordination and reach-to-grasp. |
Antioxidant and anti-inflammatory properties of curcumin. | Curcumin, a yellow pigment from Curcuma longa, is a major component of turmeric and is commonly used as a spice and food-coloring agent. It is also used as a cosmetic and in some medical preparations. The desirable preventive or putative therapeutic properties of curcumin have also been considered to be associated with its antioxidant and anti-inflammatory properties. Because free-radical-mediated peroxidation of membrane lipids and oxidative damage of DNA and proteins are believed to be associated with a variety of chronic pathological complications such as cancer, atherosclerosis, and neurodegenerative diseases, curcumin is thought to play a vital role against these pathological conditions. The anti-inflammatory effect of curcumin is most likely mediated through its ability to inhibit cyclooxygenase-2 (COX-2), lipoxygenase (LOX), and inducible nitric oxide synthase (iNOS). COX-2, LOX, and iNOS are important enzymes that mediate inflammatory processes. Improper upregulation of COX-2 and/or iNOS has been associated with the pathophysiology of certain types of human cancer as well as inflammatory disorders. Because inflammation is closely linked to tumor promotion, curcumin with its potent anti-inflammatory property is anticipated to exert chemopreventive effects on carcinogenesis. Hence, the past few decades have witnessed intense research devoted to the antioxidant and anti-inflammatory properties of curcumin. In this review, we describe both antioxidant and anti-inflammatory properties of curcumin, the mode of action of curcumin, and its therapeutic usage against different pathological conditions. |
Fog Based Intelligent Transportation Big Data Analytics in The Internet of Vehicles Environment: Motivations, Architecture, Challenges, and Critical Issues | The intelligent transportation system (ITS) concept was introduced to increase road safety, manage traffic efficiently, and preserve our green environment. Nowadays, ITS applications are becoming more data-intensive and their data are described using the "5Vs of Big Data". Thus, to fully utilize such data, big data analytics need to be applied. The Internet of vehicles (IoV) connects the ITS devices to cloud computing centres, where data processing is performed. However, transferring huge amounts of data from geographically distributed devices creates network overhead and bottlenecks, and it consumes the network resources. In addition, following the centralized approach to process the ITS big data results in high latency which cannot be tolerated by the delay-sensitive ITS applications. Fog computing is considered a promising technology for real-time big data analytics. Basically, the fog technology complements the role of cloud computing and distributes the data processing at the edge of the network, which provides faster responses to ITS application queries and saves the network resources. However, implementing fog computing and the lambda architecture for real-time big data processing is challenging in the IoV dynamic environment. In this regard, a novel architecture for real-time ITS big data analytics in the IoV environment is proposed in this paper. The proposed architecture merges three dimensions: the intelligent computing (i.e. cloud and fog computing) dimension, the real-time big data analytics dimension, and the IoV dimension. Moreover, this paper gives a comprehensive description of the IoV environment, the ITS big data characteristics, the lambda architecture for real-time big data analytics, and several intelligent computing technologies. More importantly, this paper discusses the opportunities and challenges that face the implementation of fog computing and real-time big data analytics in the IoV environment. Finally, the critical issues and future research directions section discusses some issues that should be considered in order to efficiently implement the proposed architecture. |
Use of ultrahigh frequency ventilation in patients with ARDS. A preliminary report. | STUDY OBJECTIVE
Our objective was to compare the efficacy of ultrahigh frequency ventilation (UHFV) (frequencies > 3 Hz) with respect to oxygenation, airway pressures, and hemodynamic parameters in patients with adult respiratory distress syndrome (ARDS) who were not responding to conventional ventilation.
DESIGN
We used a prospective, multicenter, nonrandomized study design in which each patient served as his own control.
SETTING
Three university-affiliated, tertiary-care medical centers participated.
PATIENTS
Persons aged 16 to 79 years old with ARDS and unresponsive to conventional ventilation, as defined by a Food and Drug Administration (FDA) approved protocol, were included.
INTERVENTIONS
Ninety patients who were not responding to conventional ventilation were changed to UHFV using a microcomputer-controlled device.
MEASUREMENTS AND RESULTS
Each patient's blood gas, hemodynamic, and airway pressure variables were measured just before, and at 1 and 24 h after, the switch to UHFV. We demonstrated clinically significant improvements in arterial oxygen tension (PaO2) and reductions in peak and mean inspiratory pressures.
CONCLUSIONS
In a multicenter study, UHFV improved respiratory gas exchange and reduced airway pressure variables at both 1 h and 24 h after the onset of UHFV when compared with conventional ventilation just prior to the change and without hemodynamic deterioration, in patients with severe ARDS. |
The stressed brain of humans and rodents | After stress, the brain is exposed to waves of stress mediators, including corticosterone (in rodents) and cortisol (in humans). Corticosteroid hormones affect neuronal physiology in two time domains: rapid, non-genomic actions primarily via mineralocorticoid receptors; and delayed genomic effects via glucocorticoid receptors. In parallel, cognitive processing is affected by stress hormones. Directly after stress, emotional behaviour involving the amygdala is strongly facilitated, with a strong cognitive emphasis on the "now" and "self", at the cost of higher cognitive processing. This enables the organism to quickly and adequately respond to the situation at hand. Several hours later, emotional circuits are dampened while functions related to the prefrontal cortex and hippocampus are promoted. This allows the individual to rationalize the stressful event and place it in the right context, which is beneficial in the long run. The brain's response to stress depends on an individual's genetic background in interaction with life events. Studies in rodents point to the possibility to prevent or reverse long-term consequences of early life adversity on cognitive processing, by normalizing the balance between the two receptor types for corticosteroid hormones at a critical moment just before the onset of puberty. |
Potentials of Gamification in Learning Management Systems: A Qualitative Evaluation | Besides game-based learning, gamification is an upcoming trend in education, studied in various empirical studies and found in many major learning management systems. Employing a newly developed qualitative instrument for assessing gamification in a system, we studied five popular LMS for their specific implementations. The instrument enabled experts to extract affordances for gamification in the five categories experiential, mechanics, rewards, goals, and social. Results show large similarities in all of the systems studied and few varieties in approaches to gamification. |
The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets | Binary classifiers are routinely evaluated with performance measures such as sensitivity and specificity, and performance is frequently illustrated with Receiver Operating Characteristics (ROC) plots. Alternative measures such as positive predictive value (PPV) and the associated Precision/Recall (PRC) plots are used less frequently. Many bioinformatics studies develop and evaluate classifiers that are to be applied to strongly imbalanced datasets in which the number of negatives outweighs the number of positives significantly. While ROC plots are visually appealing and provide an overview of a classifier's performance across a wide range of specificities, one can ask whether ROC plots could be misleading when applied in imbalanced classification scenarios. We show here that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity. PRC plots, on the other hand, can provide the viewer with an accurate prediction of future classification performance due to the fact that they evaluate the fraction of true positives among positive predictions. Our findings have potential implications for the interpretation of a large number of studies that use ROC plots on imbalanced datasets. |
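The metric contrast described above is easy to reproduce. The following minimal sketch (not from the paper; the class ratio, model, and synthetic data are illustrative assumptions) trains a classifier on a dataset with roughly 1% positives and reports both ROC AUC and average precision, the usual scalar summary of the PRC plot; the ROC score typically looks reassuring while the PRC-based score exposes the weak positive predictive value:

```python
# Sketch (not from the paper): contrast ROC AUC with PR AUC on an
# imbalanced problem using scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)  # ~1% positives
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# ROC AUC is insensitive to the class ratio; average precision (area
# under the PR curve) directly reflects the fraction of true positives
# among positive predictions.
print("ROC AUC:           %.3f" % roc_auc_score(y_te, scores))
print("Average precision: %.3f" % average_precision_score(y_te, scores))
```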
Intervention Studies on Forgiveness: A Meta-Analysis | A promising area of counseling research that emerged in the 1990s is the scientific investigation of forgiveness interventions. Although the notion of forgiving is ancient (Enright & the Human Development Study Group, 1991), it has not been systematically studied until relatively recently (Enright, Santos, & Al-Mabuk, 1989). Significant to counseling because of its interpersonal nature, forgiveness issues are relevant to the contexts of marriage and dating relationships, parent–child relationships, friendships, professional relationships, and others. In addition, forgiveness is integral to emotional constructs such as anger. As forgiveness therapies (Ferch, 1998; Fitzgibbons, 1986) and the empirical study of these therapies (Freedman & Enright, 1996) begin to unfold, it is important to ask if these interventions can consistently demonstrate salient positive effects on levels of forgiveness and on the mental health of targeted clients. The purpose of this article is to analyze via meta-analysis the existing published interventions on forgiveness. Meta-analysis is a popular vehicle for synthesizing results across multiple studies. Recent successful uses of this method include the study by McCullough (1999), who analyzed five studies that compared the efficacy for depression of standard counseling approaches with religion-accommodative approaches. Furthermore, in order to reach conclusions about the influence of hypnotherapy on treatment for clients with obesity, Allison and Faith (1996) used meta-analysis to examine six studies that compared the efficacy of using cognitive-behavioral therapy (CBT) alone with the use of CBT combined with hypnotherapy. Finally, Morris, Audet, Angelillo, Chalmers, and Mosteller (1992) used meta-analysis to combine the results of 10 studies with contradictory findings to show that the benefits of chlorinating drinking water far outweighed the risks. Although there may be some concern that using forgiveness as an intervention in counseling is in too early a stage of development and that too few studies exist for a proper meta-analysis, the effectiveness of these recent meta-analyses supports this meta-analytic investigation. Certainly any findings must be tempered with due caution. However, this analysis may serve as important guidance for the structure and development of future counseling studies of forgiveness. We first examine the early work in forgiveness interventions by examining the early case studies. From there, we define forgiveness, discuss the models of forgiveness in counseling and the empirically based interventions, and then turn to the meta-analysis. The early clinical case studies suggested that forgiveness might be helpful for people who have experienced deep emotional pain because of unjust treatment. For …
Information-Driven Dynamic Sensor Collaboration for Tracking Applications | This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications. The technology of wirelessly networked micro-sensors promises to revolutionize the way we live, work, and interact with the physical environment. For example, tiny, inexpensive sensors can be "sprayed" onto roads, walls, or machines to monitor and detect a variety of interesting events such as highway traffic, wildlife habitat condition, forest fires, manufacturing job flow, and military battlefield situations.
Hypofractionation should be the new 'standard' for radiation therapy after breast conserving surgery. | Hypofractionated whole breast radiation therapy following breast conserving surgery (BCS) has been used in many institutions for several decades. Four randomized trials with 5-10-year follow-up have demonstrated equivalent local control, cosmetic, and normal tissue outcomes between 50 Gy in 25 fractions and various hypofractionated RT prescriptions employing 13-16 fractions. Indirect evidence suggests that hypofractionated RT may also be safe and effective for regional nodal RT. In the face of equivalent outcomes, patient convenience, and health care utilization benefits, hypofractionated RT should be the new 'standard' following BCS.
Ambulatory oesophageal pH monitoring: a comparison between antimony, ISFET, and glass pH electrodes. | BACKGROUND AND AIM
Ambulatory oesophageal pH-impedance monitoring is a widely used test to evaluate patients with reflux symptoms. Several types of pH electrodes are available: antimony, ion sensitive field effect transistor (ISFET), and glass electrodes. These pH electrodes have not been compared directly, and it is uncertain whether these different types of pH electrodes result in similar outcome.
METHODS
In an in-vitro model the response time, sensitivity, and drift of an antimony, ISFET, and glass pH electrode were assessed simultaneously after calibration at 22 degrees C and at 37 degrees C. All measurements were performed at 37 degrees C and repeated five times with new catheters of each type. Fifteen patients with reflux symptoms underwent 24-h pH monitoring off PPI therapy using antimony, ISFET, and glass pH electrodes simultaneously.
RESULTS
After calibration at 22 degrees C, pH electrodes had similar response times, sensitivity and drift. In contrast to glass electrodes, antimony electrodes performed less accurately after calibration at 37 degrees C than after calibration at 22 degrees C. Calibration temperature did not affect ISFET electrodes significantly. During in-vivo experiments, significant differences were found in acid exposure times derived from antimony (4.0+/-0.8%), ISFET (5.7+/-1.1%), and glass pH electrodes (9.0+/-1.7%).
CONCLUSION
In vitro, antimony and glass pH electrodes are affected by different buffer components and temperature, respectively. In vivo, significantly higher acid exposure times are obtained with glass electrodes compared with antimony and ISFET pH electrodes. ISFET electrodes produce stable in-vitro measurements and result in the most accurate in-vivo measurements of acid exposure time.
Fuzzy C-Means Clustering With Local Information and Kernel Metric For Image Segmentation | Image segmentation is one of the key techniques in image understanding and computer vision. The task of image segmentation is to divide an image into a number of non-overlapping regions that share characteristics such as gray level, color, tone, and texture. Many clustering-based methods have been proposed for image segmentation. We define a new trade-off weighted fuzzy factor to adaptively control the local neighbour relationship; this factor depends simultaneously on the spatial distance of all neighbouring pixels and on their gray-level discrepancy. More broadly, image processing operations can be roughly divided into three major categories: image compression, image enhancement and restoration, and measurement extraction. Image compression involves reducing the amount of memory needed to store a digital image. Image defects caused by the digitization process or by faults in the imaging setup (for example, bad lighting) can be corrected using image enhancement techniques. Once the image is in good condition, measurement extraction operations can be used to obtain useful information from it. The examples discussed all operate on 256-level grey-scale images, in which each pixel is stored as a number between 0 and 255, where 0 represents a black pixel, 255 a white pixel, and intermediate values shades of grey; these operations can be extended to colour images. The examples represent only a few of the many techniques available for operating on images; references to books covering their inner workings are given at the end for the interested reader.
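For orientation, a baseline fuzzy c-means sketch in NumPy is given below; it implements only the standard algorithm, not the paper's adaptive trade-off weighted fuzzy factor or kernel metric, so treat it as a starting point rather than the proposed method:

```python
# Minimal standard fuzzy c-means (FCM) sketch in NumPy.
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            break
        U = U_new
    return centers, U

# Segment grey levels: treat each pixel intensity as a 1-D sample.
pixels = np.random.default_rng(1).integers(0, 256, size=(64, 64))
centers, U = fcm(pixels.reshape(-1, 1).astype(float), c=3)
labels = U.argmax(axis=1).reshape(pixels.shape)    # hard segmentation map
```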
Morphological Analysis and Disambiguation for Dialectal Arabic | The many differences between Dialectal Arabic and Modern Standard Arabic (MSA) pose a challenge to the majority of Arabic natural language processing tools, which are designed for MSA. In this paper, we retarget an existing state-of-the-art MSA morphological tagger to Egyptian Arabic (ARZ). Our evaluation demonstrates that our ARZ morphological tagger outperforms its MSA variant on ARZ input in terms of accuracy in part-of-speech tagging, diacritization, lemmatization and tokenization; and in terms of utility for ARZ-to-English statistical machine translation.
Impeded alveolar-capillary gas transfer with saline infusion in heart failure. | The microvascular pulmonary endothelium barrier is critical in preventing interstitial fluid overflow and deterioration in gas diffusion. The role of the endothelium in transporting small solutes in pathological conditions, such as congestive heart failure (CHF), has not been studied. Monitoring of pulmonary gas transfer during saline infusion in CHF was used to probe this issue. Carbon monoxide diffusion (DL(CO)), its membrane diffusion (D(M)) and capillary blood volume (V(C)) subcomponents, and mean right atrial (rap) and mean pulmonary wedge (wpp) pressures after saline or 5% D-glucose solution infusions were compared with baseline in 26 moderate CHF patients. Saline was also tested in 13 healthy controls. In patients, 750 mL of saline lowered DL(CO) (-8%, P<0.01 versus baseline), D(M) (-10%, P<0.01 versus baseline), aldosterone (-29%, P<0.01 versus baseline), renin (-52%, P<0.01 versus baseline), and hematocrit (-6%, P<0.05 versus baseline) and increased V(C) (20%, P<0.01 versus baseline), without changing rap and wpp. Saline at 150 mL produced qualitatively similar results regarding DL(CO) (-5%, P<0.01 versus baseline), D(M) (-7%, P<0.01 versus baseline), V(C) (9%, P<0.01 versus baseline), rap, wpp, aldosterone (-9%, P<0.05 versus baseline), and renin (-14%, P<0.05 versus baseline). Glucose solution (750 mL), on the contrary, increased DL(CO) (5%, P<0.01 versus 750 mL of saline) and D(M) (11%, P<0.01 versus 750 mL of saline) and decreased V(C) (-9%, P<0.01 versus 750 mL of saline); aldosterone (-40%), renin (-41%), hematocrit (-3%), rap, and wpp behaved as they did after saline infusion. In controls, responses to both saline amounts were similar to responses in CHF patients regarding aldosterone, renin, hematocrit, rap, and wpp, whereas DL(CO), D(M), and V(C) values tended to rise. Hindrance to gas transfer (reduced DL(CO) and D(M)) with saline infusion in CHF, despite an increase in V(C) and no variations in pulmonary hydrostatic forces, indicates an upregulation of sodium transport from blood to interstitium with interstitial edema. Redistribution of blood from the lungs, facilitating interstitial fluid reabsorption, or sodium uptake from the alveolar lumen by the sodium-glucose cotransport system might underlie the improved alveolar-capillary conductance with glucose.
Association between endothelial dysfunction and hyperuricaemia. | OBJECTIVE
We used high-resolution peripheral vascular ultrasound imaging to assess endothelial function in hyperuricaemic patients.
METHODS
Hyperuricaemia was defined as a serum uric acid concentration of > 7.7 mg/dl in men or > 6.6 mg/dl in women. Measurements of endothelium-dependent flow-mediated vasodilation (FMD) and endothelium-independent nitroglycerin-mediated vasodilation were performed in 46 hyperuricaemic patients and an equal number of healthy age- and gender-matched normal controls by high-resolution two-dimensional ultrasonographic imaging of the brachial artery. The serum levels of glucose, creatinine, alanine aminotransferase (ALT), lipid profiles and high-sensitivity CRP were measured for both the study groups.
RESULTS
The serum uric acid levels averaged 9.24 (1.16) and 6.18 (0.99) mg/dl in the hyperuricaemic and control groups, respectively. Body weight and BMI were significantly higher in the hyperuricaemic group than in the control group. The serum levels of creatinine, ALT, triglyceride and high-sensitivity CRP were significantly different between the two groups. The FMD values were significantly lower in the hyperuricaemic patients than in the controls [4.45% (3.13%) vs 7.10% (2.48%); P < 0.001]. The FMD values were negatively associated with serum uric acid levels (r = -0.273; P = 0.009). Multivariate regression analysis showed that the presence of hyperuricaemia (β = -0.384; P < 0.001) and body weight (β = 0.215; P = 0.017) were independent determinants of low FMD values.
CONCLUSION
Hyperuricaemia is associated with endothelial dysfunction. Decreased nitric oxide bioavailability may be the main reason. |
Combined use of cryoballoon and focal open-irrigation radiofrequency ablation for treatment of persistent atrial fibrillation: results from a pilot study. | BACKGROUND
Pulmonary vein isolation (PVI) achieved using a cryoballoon has been shown to be safe and effective. However, this treatment modality has limited effectiveness for treatment of persistent atrial fibrillation (AF).
OBJECTIVE
The purpose of this study was to evaluate a combined approach using a cryoballoon for treatment of PVI and focal radiofrequency (RF) left atrial substrate ablation for treatment of persistent AF.
METHODS
Twenty-two consecutive patients with persistent AF were included in the study. PVI initially was performed with a cryoballoon. Left atrial complex fractionated atrial electrograms (CFAEs) then were ablated using an RF catheter. Finally, linear ablations using the RF catheter were performed.
RESULTS
Eighty-three PVs, including five with left common ostia, were targeted and isolated (100%). Seventy-seven (94%) of 82 PVs targeted with the cryoballoon were isolated, and 5 (6%) required use of RF energy to complete isolation. A mean of 9.7 +/- 2.6 cryoablation applications per patient was needed to achieve PVI. Median time required for cryoablation per vein was 600 seconds, and mean number of balloon applications per vein was 2.5 +/- 1.0. In 19 (86%) patients in whom AF persisted after PVI, CFAE areas were ablated using the RF catheter. Two cases of transient phrenic nerve paralysis occurred. After a single procedure and mean follow-up of 6.0 +/- 2.9 months, 86.4% of patients were AF-free without antiarrhythmic drugs.
CONCLUSION
A combined approach of cryoablation and RF ablation for treatment of persistent AF is feasible and is associated with a favorable short-term outcome. |
Effects of Thermal Radiation and Magnetic Field on Unsteady Stretching Permeable Sheet in Presence of Free Stream Velocity | The aim of this paper is to investigate the two-dimensional unsteady flow of a viscous incompressible fluid about a stagnation point on a permeable stretching sheet in the presence of a time-dependent free stream velocity. The fluid is considered under the influence of a transverse magnetic field in the presence of radiation effects. The Rosseland approximation is used to model the radiative heat transfer. Using a time-dependent stream function, the partial differential equations corresponding to the momentum and energy equations are converted into non-linear ordinary differential equations. Numerical solutions of these equations are obtained using the Runge-Kutta-Fehlberg method with the Newton-Raphson shooting technique. The effects of the unsteadiness parameter, magnetic field parameter, radiation parameter, stretching parameter, and Prandtl number on the flow and heat transfer characteristics are discussed. The skin-friction coefficient and Nusselt number at the sheet are computed and discussed. The results reported in the paper are in good agreement with published work by other researchers. Keywords: magnetohydrodynamics, stretching sheet, thermal radiation, unsteady flow.
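The solution procedure described (similarity reduction, then Runge-Kutta integration with Newton-Raphson shooting) can be illustrated on the classical Blasius boundary-layer problem, which omits the magnetic and radiation terms; this is a sketch of the numerical technique only, not the paper's exact equations:

```python
# Shooting-method sketch for the Blasius equation
# f''' + 0.5*f*f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1,
# using an RK integrator plus a secant iteration (a simple stand-in
# for Newton-Raphson shooting) on the unknown slope f''(0).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(eta, y):                       # y = [f, f', f'']
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def residual(s, eta_max=10.0):
    sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s], rtol=1e-8)
    return sol.y[1, -1] - 1.0          # mismatch in f'(inf) = 1

s0, s1 = 0.1, 1.0                      # two initial guesses for f''(0)
for _ in range(30):
    r0, r1 = residual(s0), residual(s1)
    s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
    if abs(s1 - s0) < 1e-10:
        break
print("f''(0) = %.5f" % s1)            # ~0.33206 for Blasius
```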
Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”? | As concerns about unfairness and discrimination in “black box” machine learning systems rise, a legal “right to an explanation” has emerged as a compellingly attractive approach for challenge and redress. We outline recent debates on the limited provisions in European data protection law, and introduce and analyze newer explanation rights in French administrative law and the draft modernized Council of Europe Convention 108. While individual rights can be useful, in privacy law they have historically unreasonably burdened the average data subject. “Meaningful information” about algorithmic logics is more technically possible than commonly thought, but this exacerbates a new “transparency fallacy”—an illusion of remedy rather than anything substantively helpful. While rights-based approaches deserve a firm place in the toolbox, other forms of governance, such as impact assessments, “soft law,” judicial review, and model repositories deserve more attention, alongside catalyzing agencies acting for users to control algorithmic system design. |
Fingerprint-Quality Index Using Gradient Components | Fingerprint image-quality checking is one of the most important issues in fingerprint recognition because recognition is largely affected by the quality of fingerprint images. In the past, many related fingerprint-quality checking methods have typically considered the condition of input images. However, when using the preprocessing algorithm, ridge orientation may sometimes be extracted incorrectly. Unwanted false minutiae can be generated or some true minutiae may be ignored, which can also affect recognition performance directly. Therefore, in this paper, we propose a novel quality-checking algorithm which considers the condition of the input fingerprints and orientation estimation errors. In the experiments, the 2-D gradients of the fingerprint images were first separated into two sets of 1-D gradients. Then, the shapes of the probability density functions of these gradients were measured in order to determine fingerprint quality. We used the FVC2002 database and synthetic fingerprint images to evaluate the proposed method in three ways: 1) estimation ability of quality; 2) separability between good and bad regions; and 3) verification performance. Experimental results showed that the proposed method yielded a reasonable quality index in terms of the degree of quality degradation. Also, the proposed method proved superior to existing methods in terms of separability and verification performance. |
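A hedged sketch of the underlying idea follows: split the 2-D gradients of an image block into two 1-D gradient sets and score the block by the shape of their empirical distributions. The paper's specific shape measure of the probability density functions is not reproduced here; the variance used below is a hypothetical stand-in:

```python
# Hypothetical block-quality score from 1-D gradient distributions.
import numpy as np

def block_quality(block):
    gy, gx = np.gradient(block.astype(float))  # gradients along the two axes
    # A flat (noisy or empty) block yields narrow gradient distributions;
    # a clear ridge pattern yields wide, structured ones.
    return np.var(gx) + np.var(gy)
```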
A Process Calculus with Logical Operators | In order to combine operational and logical styles of specification in one unified framework, the notion of logic labelled transition systems (Logic LTS, for short) has been presented and explored by Lüttgen and Vogler in [TCS 373(1-2):19-40; Inform. & Comput. 208:845-867]. In contrast with the usual LTS, two logical constructors ∧ and ∨ over Logic LTSs are introduced to describe logical combinations of specifications. Hitherto this framework has been dealt with in considerable depth; however, the process algebraic style has not yet been involved, and an axiomatization of the constructors over Logic LTSs is absent. This paper develops Lüttgen and Vogler's work along this direction. We present a process calculus for Logic LTSs (CLL, for short). The language CLL is explored in detail from two different but equivalent views. From the behavioral view, the notion of ready simulation is adopted to formalize the refinement relation, and the behavioral theory is developed. From the proof-theoretic view, a sound and ground-complete axiomatic system for CLL is provided, which captures the operators of CLL through (in)equational laws.
The metabolism and availability of essential fatty acids in animal and human tissues. | Essential fatty acids (EFA), which are not synthesized in animal and human tissues, belong to the n-6 and n-3 families of polyunsaturated fatty acids (PUFA), derived from linoleic acid (LA, 18:2n-6) and alpha-linolenic acid (LNA, 18:3n-3). Optimal requirements are 3-6% of ingested energy for LA and 0.5-1% for LNA in adults. Requirements for LNA are higher during development. Dietary sources of LA and LNA are principally plants, while arachidonic acid (AA, 20:4n-6) is found in products from terrestrial animals, and eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are found in products from marine animals. EFA are principally present in dietary triacylglycerols, which must be hydrolyzed by lipases in the gastric and intestinal lumen. DHA seems to be released more slowly than the others. Its intestinal absorption is delayed but not decreased. Long-chain PUFAs are incorporated in noticeable amounts in chylomicron phospholipids. However, their uptake by tissues is no more rapid than uptake of shorter-chain PUFA. In tissues, LA and LNA, which constitute the major part of dietary EFA, must be converted into fatty acids of longer and more unsaturated chain by alternate desaturation (delta 6, delta 5, delta 4)-elongation reactions. Animal tissues are more active in this biosynthesis than human tissues. The liver is one of the most active organs, and its role is critical in providing less active tissues, particularly the brain, with long-chain PUFA secreted in VLDL (very low density lipoprotein). In the liver, many nutritional, hormonal and physiological factors act on PUFA biosynthesis. Dietary fatty acids exert a great influence and are often inhibitory. Dietary LNA inhibits delta 6 desaturation of LA. The desaturation products AA, EPA, and DHA inhibit delta 6 desaturation of LA and delta 5 desaturation of DGLA (dihomo-gamma-linolenic acid). With regard to hormones, insulin and thyroxin are necessary for delta 6 and delta 5 desaturation activities, whereas other hormones (glucagon, epinephrine, ACTH, glucocorticoids) inhibit desaturation. Concerning the physiological factors, the age of individuals is critical. In the fetus, the liver and the brain are capable of converting LA and LNA into longer-chain EFA, but these are also delivered by the mother, after synthesis in the maternal liver and placenta. Just after birth, in animals, delta 6 desaturation activity increases in the liver and decreases in the brain. In aging, the capacity of the whole liver to desaturate LA and DGLA is equal at 1.5 and 25 months of age in rats fed a balanced diet throughout their life, and the AA and DHA content of tissue phospholipids is unchanged.(ABSTRACT TRUNCATED AT 400 WORDS)
Paraphrasing: Generating Parallel Programs Using Refactoring | Refactoring is the process of changing the structure of a program without changing its behaviour. Refactoring has so far only really been deployed effectively for sequential programs. However, with the increased availability of multicore (and, soon, manycore) systems, refactoring can play an important role in helping both expert and non-expert parallel programmers structure and implement their parallel programs. This paper describes the design of a new refactoring tool that is aimed at increasing the programmability of parallel systems. To motivate our design, we refactor a number of examples in C, C++ and Erlang into good parallel implementations, using a set of formal pattern rewrite rules. |
Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling | Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia. |
Political Preference Formation: Competition, Deliberation, and the (Ir)relevance of Framing Effects | One of the most contested questions in the social sciences is whether people behave rationally. A large body of work assumes that individuals do in fact make rational economic, political, and social decisions. Yet hundreds of experiments suggest that this is not the case. Framing effects constitute one of the most stunning and influential demonstrations of irrationality. The effects not only challenge the foundational assumptions of much of the social sciences (e.g., the existence of coherent preferences or stable attitudes), but also lead many scholars to adopt alternative approaches (e.g., prospect theory). Surprisingly, virtually no work has sought to specify the political conditions under which framing effects occur. I fill this gap by offering a theory and experimental test. I show how contextual forces (e.g., elite competition, deliberation) and individual attributes (e.g., expertise) affect the success of framing. The results provide insight into when rationality assumptions apply and, also, have broad implications for political psychology and experimental methods.
Perceptual coding of digital audio | During the last decade, CD-quality digital audio has essentially replaced analog audio. Emerging digital audio applications for network, wireless, and multimedia computing systems face a series of constraints such as reduced channel bandwidth, limited storage capacity, and low cost. These new applications have created a demand for high-quality digital audio delivery at low bit rates. In response to this need, considerable research has been devoted to the development of algorithms for perceptually transparent coding of high-fidelity (CD-quality) digital audio. As a result, many algorithms have been proposed, and several have now become international and/or commercial product standards. This paper reviews algorithms for perceptually transparent coding of CD-quality digital audio, including both research and standardization activities. This paper is organized as follows. First, psychoacoustic principles are described, with the MPEG psychoacoustic signal analysis model 1 discussed in some detail. Next, filter bank design issues and algorithms are addressed, with a particular emphasis placed on the modified discrete cosine transform, a perfect reconstruction cosine-modulated filter bank that has become of central importance in perceptual audio coding. Then, we review methodologies that achieve perceptually transparent coding of FM- and CD-quality audio signals, including algorithms that manipulate transform components, subband signal decompositions, sinusoidal signal components, and linear prediction parameters, as well as hybrid algorithms that make use of more than one signal model. These discussions concentrate on architectures and applications of those techniques that utilize psychoacoustic models to exploit efficiently masking characteristics of the human receiver. Several algorithms that have become international and/or commercial standards receive in-depth treatment, including the ISO/IEC MPEG family (-1, -2, -4), the Lucent Technologies PAC/EPAC/MPAC, the Dolby AC-2/AC-3, and the Sony ATRAC/SDDS algorithms. Then, we describe subjective evaluation methodologies in some detail, including the ITU-R BS.1116 recommendation on subjective measurements of small impairments. This paper concludes with a discussion of future research directions. |
Secure Crowdsourced Indoor Positioning Systems | Indoor positioning systems (IPSes) can enable many location-based services in large indoor environments where GPS is not available or reliable. Mobile crowdsourcing is widely advocated as an effective way to construct IPS maps. This paper presents the first systematic study of security issues in crowdsourced WiFi-based IPSes to promote security considerations in designing and deploying crowdsourced IPSes. We identify three attacks on crowdsourced WiFi-based IPSes and propose the corresponding countermeasures. The efficacy of the attacks and also of our countermeasures is experimentally validated on a prototype system. The attacks and countermeasures can be easily extended to other crowdsourced IPSes.
Statistical Pattern Recognition | This course will provide an introduction to statistical pattern recognition. The lectures will focus on different techniques including methods for feature extraction, dimensionality reduction, data clustering and pattern classification. State-of-art approaches such as ensemble learning and sparse modelling will be introduced. Selected real-world applications will illustrate how the techniques are applied in practice. |
Microalgae for biodiesel production and other applications: A review | Sustainable production of renewable energy is being hotly debated globally since it is increasingly understood that first generation biofuels, primarily produced from food crops and mostly oil seeds, are limited in their ability to achieve targets for biofuel production, climate change mitigation, and economic growth. These concerns have increased the interest in developing second generation biofuels produced from non-food feedstocks such as microalgae, which potentially offer the greatest opportunities in the longer term. This paper reviews the current status of microalgae use for biodiesel production, including their cultivation, harvesting, and processing. The microalgae species most used for biodiesel production are presented and their main advantages described in comparison with other available biodiesel feedstocks. The various aspects associated with the design of microalgae production units are described, giving an overview of the current state of development of algae cultivation systems (photo-bioreactors and open ponds). Other potential applications and products from microalgae are also presented, such as biological sequestration of CO2, wastewater treatment, applications in human health, use as a food additive, and aquaculture.
Association of Extracellular Membrane Vesicles with Cutaneous Wound Healing | Extracellular vesicles (EVs) are membrane-enclosed vesicles released into the extracellular environment by various cell types, and they can be classified as apoptotic bodies, microvesicles, and exosomes. EVs have been shown to carry DNA, small RNAs, proteins and membrane lipids which are derived from the parental cells. Recently, several studies have demonstrated that EVs can regulate many biological processes, such as cancer progression, the immune response, cell proliferation, cell migration and blood vessel tube formation. This regulation is achieved through the release and transport of EVs and the transfer of their parental cell-derived molecular cargo to recipient cells. This thereby influences various physiological and sometimes pathological functions within the target cells. While intensive investigation of EVs has focused on pathological processes, the involvement of EVs in normal wound healing is less clear; however, recent preliminary investigations have produced some initial insights. This review will provide an overview of EVs and discuss the current literature regarding the role of EVs in wound healing, especially their influence on coagulation, cell proliferation, migration, angiogenesis, collagen production and extracellular matrix remodelling.
Effects of nurse-delivered home visits combined with telephone calls on medication adherence and quality of life in HIV-infected heroin users in Hunan of China. | AIMS AND OBJECTIVES
This study aimed to examine the effects of nurse-delivered home visits combined with telephone intervention on medication adherence and quality of life in HIV-infected heroin users.
BACKGROUND
Drug use is consistently reported as a risk factor for medication non-adherence in HIV-infected people.
DESIGN
An experimental pretest and post-test design was used, with assessments at baseline and at eight months.
METHODS
A sample of 116 participants was recruited from three antiretroviral treatment sites, and 98 participants completed the study. They were randomly assigned to two groups: 58 in the experimental group and 58 in the control group. The experimental group received nurse-delivered home visits combined with telephone intervention over eight months, while the control group received routine care only. The Community Programs for Clinical Research on AIDS (CPCRA) Antiretroviral Medication Self-Report questionnaire was used to assess levels of adherence, while quality of life and depression were evaluated using Chinese versions of the World Health Organization Quality of Life Instrument-Abbreviated version (WHOQOL-BREF) and the Self-rating Depression Scale, respectively. Data were obtained at baseline and at eight months.
RESULTS
At the end of eight months, participants in the experimental group were more likely to report taking 100% of pills (Fisher's exact = 14.3, p = 0.0001) and taking pills on time (Fisher's exact = 18.64, p = 0.0001) than those in the control group. There were significant effects of intervention in physical (F = 10.47, p = 0.002), psychological (F = 9.41, p = 0.003), social (F = 4.09, p = 0.046) and environmental (F = 4.80, p = 0.031) domains of WHOQOL and depression (F = 5.58, p = 0.02).
CONCLUSIONS
Home visits and telephone calls are effective in promoting adherence to antiretroviral treatment and in improving the participants' quality of life and depressive symptoms in HIV-infected heroin users.
RELEVANCE TO CLINICAL PRACTICE
It is important for nurses to recognise the issues of non-adherence to antiretroviral treatment in heroin users. Besides standard care, nurses should consider conducting home visits and telephone calls to ensure better health outcomes of antiretroviral treatment in this population.
Effect of atorvastatin on the incidence of acute kidney injury following valvular heart surgery: a randomized, placebo-controlled trial | Statins, 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors have the potential to reduce acute kidney injury (AKI) after cardiac surgery through their pleiotropic properties. Here we studied the preventive effect of atorvastatin on AKI after valvular heart surgery. Two-hundred statin-naïve patients were randomly allocated to receive either statin or placebo. Atorvastatin was administered orally to the statin group according to a dosage schedule (80 mg single dose on the evening prior to surgery; 40 mg on the morning of surgery; three further doses of 40 mg on the evenings of postoperative days 0, 1, and 2). AKI incidence was assessed during the first 48 postoperative hours on the basis of Acute Kidney Injury Network criteria. The incidence of AKI was similar in the statin and control groups (21 vs. 26 %, respectively, p = 0.404). Biomarkers of renal injury including plasma neutrophil gelatinase-associated lipocalin and interleukin-18 were also similar between the groups. The statin group required significantly less norepinephrine and vasopressin during surgery, and fewer patients in the statin group required vasopressin. There were no significant differences in postoperative outcomes. Acute perioperative statin treatment was not associated with a lower incidence of AKI or improved clinical outcome in patients undergoing valvular heart surgery. (ClinicalTrials.gov NCT01909739). |
Privacy-Preserving Classification on Deep Neural Network | Neural networks (NN) are increasingly used in machine learning, where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development, however, also gives rise to important data privacy risks. This observation motivated Microsoft researchers to propose a framework called Cryptonets. The core idea is to combine simplifications of the NN with fully homomorphic encryption (FHE) techniques to obtain both confidentiality of the manipulated data and efficiency of the processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (e.g., 2), Cryptonets unfortunately becomes ineffective for deeper NNs, leaving the problem of privacy-preserving classification open in these contexts. This work successfully addresses this problem by combining the original ideas of the Cryptonets solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with 6 non-linear layers. When applied to the MNIST database, it matches the accuracy of the best non-secure versions, thus significantly improving on Cryptonets.
Treatment of active Crohn's disease with MLN0002, a humanized antibody to the alpha4beta7 integrin. | BACKGROUND & AIMS
Selective blockade of lymphocyte-vascular endothelium interactions in the gastrointestinal tract is a promising therapeutic strategy for inflammatory bowel disease. This randomized, double-blind, controlled trial assessed the efficacy and safety of MLN0002, a monoclonal antibody targeting the alpha4beta7 integrin, in patients with active Crohn's disease.
METHODS
Patients were randomized to receive MLN0002 2.0 mg/kg (n = 65), MLN0002 0.5 mg/kg (n = 62), or placebo (n = 58) by intravenous infusion on days 1 and 29. The primary efficacy end point was clinical response (a ≥70-point decrement in the Crohn's Disease Activity Index [CDAI] score) on day 57. Secondary end points were the proportions of patients with clinical remission (CDAI score ≤150) and with an enhanced clinical response (a ≥100-point decrement in CDAI). Human anti-human antibody levels were measured.
RESULTS
Clinical response rates at day 57 were 53%, 49%, and 41% in the MLN0002 2.0 mg/kg, MLN0002 0.5 mg/kg, and placebo groups. Clinical remission rates at day 57 were 37%, 30%, and 21%, respectively (P = .04 for the 2.0 mg/kg vs placebo comparison). At day 57, 12% and 34% of patients in the 2.0- and 0.5-mg/kg groups had clinically significant human anti-human antibody levels (titers > 1:125). There was one infusion-related hypersensitivity reaction. The most common serious adverse event was worsening of Crohn's disease.
CONCLUSIONS
This phase 2 study was suggestive of a dose-dependent beneficial effect of MLN0002 therapy on clinical remission. MLN0002 was well tolerated in patients with active Crohn's disease. |
Soft Similarity and Soft Cosine Measure: Similarity of Features in Vector Space Model | We show how to consider similarity between features for calculation of similarity of objects in the Vector Space Model (VSM) for machine learning algorithms and other classes of methods that involve similarity between objects. Unlike LSA, we assume that similarity between features is known (say, from a synonym dictionary) and does not need to be learned from the data. We call the proposed similarity measure soft similarity. Similarity between features is common, for example, in natural language processing: words, n-grams, or syntactic n-grams can be somewhat different (which makes them different features) but still have much in common: for example, words “play” and “game” are different but related. When there is no similarity between features then our soft similarity measure is equal to the standard similarity. For this, we generalize the well-known cosine similarity measure in VSM by introducing what we call “soft cosine measure”. We propose various formulas for exact or approximate calculation of the soft cosine measure. For example, in one of them we consider for VSM a new feature space consisting of pairs of the original features weighted by their similarity. Again, for features that bear no similarity to each other, our formulas reduce to the standard cosine measure. Our experiments show that our soft cosine measure provides better performance in our case study: entrance exams question answering task at CLEF. In these experiments, we use syntactic n-grams as features and Levenshtein distance as the similarity between n-grams, measured either in characters or in elements of n-grams. |
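The measure itself is compact enough to state directly in code. Below is a minimal sketch, assuming a known feature-similarity matrix S (the 0.7 similarity between two related words is a toy value, not from the paper); with S equal to the identity, the measure reduces to the standard cosine, as the paper notes:

```python
# Soft cosine measure: a feature-similarity matrix S generalizes the
# dot product of the vector space model.
import numpy as np

def soft_cosine(a, b, S):
    return (a @ S @ b) / np.sqrt((a @ S @ a) * (b @ S @ b))

# Toy vocabulary ["play", "game", "sky"]: "play" and "game" are
# different features but related (assumed similarity 0.7).
S = np.array([[1.0, 0.7, 0.0],
              [0.7, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
a = np.array([1.0, 0.0, 0.0])            # document using "play"
b = np.array([0.0, 1.0, 0.0])            # document using "game"
print(soft_cosine(a, b, S))              # 0.7, vs. 0.0 for standard cosine
print(soft_cosine(a, b, np.eye(3)))      # 0.0: reduces to standard cosine
```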
Total Colorings Of Degenerate Graphs | A total coloring of a graph G is a coloring of all elements of G, i.e. vertices and edges, such that no two adjacent or incident elements receive the same color. A graph G is s-degenerate for a positive integer s if G can be reduced to a trivial graph by successive removal of vertices with degree ≤ s. We prove that an s-degenerate graph G has a total coloring with ∆+1 colors if the maximum degree ∆ of G is sufficiently large, say ∆ ≥ 4s+3. Our proof yields an efficient algorithm to find such a total coloring. We also give a linear-time algorithm to find a total coloring of a graph G with the minimum number of colors if G is a partial k-tree, that is, if the tree-width of G is bounded by a fixed integer k.
Hyperbolic Random Graphs: Separators and Treewidth | Hyperbolic random graphs share many common properties with complex real-world networks; e.g., small diameter and average distance, large clustering coefficient, and a power-law degree sequence with adjustable exponent β. Thus, when analyzing algorithms for large networks, potentially more realistic results can be achieved by assuming the input to be a hyperbolic random graph of size n. The worst-case run-time is then replaced by the expected run-time or by bounds that hold with high probability (whp), i.e., with probability $1 - O(1/n)$. Though many structural properties of hyperbolic random graphs have been studied, almost no algorithmic results are known. Divide-and-conquer is an important algorithmic design principle that works particularly well if the instance admits small separators. We show that hyperbolic random graphs in fact have comparatively small separators. More precisely, we show that they can be expected to have balanced separator hierarchies with separators of size $O(\sqrt{n^{3-\beta}})$, $O(\log n)$, and $O(1)$ if $2 < \beta < 3$, $\beta = 3$, and $\beta > 3$, respectively. We infer that these graphs have whp a treewidth of $O(\sqrt{n^{3-\beta}})$, $O(\log^2 n)$, and $O(\log n)$, respectively. For $2 < \beta < 3$, this matches a known lower bound. To demonstrate the usefulness of our results, we give several algorithmic applications.
Multilabel SVM active learning for image classification | Image classification is an important task in computer vision. However, how to assign suitable labels to images is a subjective matter, especially when some images can be categorized into multiple classes simultaneously. Multilabel image classification focuses on the problem where each image can have one or multiple labels. It is known that manually labelling images is time-consuming and expensive. In order to reduce the human effort of labelling images, especially multilabel images, we propose a multilabel SVM active learning method with two selection strategies: a Max Loss strategy and a Mean Max Loss strategy. Experimental results on both artificial data and real-world images demonstrate the advantage of the proposed method.
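A hedged sketch of this style of active learning is given below; the exact Max Loss and Mean Max Loss formulas are not reproduced, and the hinge-style score over one-vs-rest SVM margins is an illustrative stand-in for the paper's selection criteria:

```python
# Multilabel SVM active learning sketch: query the unlabeled samples
# whose worst-case label margin implies the largest hinge-style loss.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

def select_queries(X_lab, Y_lab, X_pool, k=5):
    # Y_lab: binary indicator matrix of shape (n_labeled, n_labels).
    clf = OneVsRestClassifier(LinearSVC()).fit(X_lab, Y_lab)
    margins = clf.decision_function(X_pool)        # (n_pool, n_labels)
    # Hinge-style loss for the most uncertain label of each sample.
    loss = np.maximum(0.0, 1.0 - np.abs(margins)).max(axis=1)
    return np.argsort(loss)[-k:]                   # most informative samples
```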
Generating Concept Map Exercises from Textbooks | In this paper we present a methodology for creating concept map exercises for students. Concept mapping is a common pedagogical exercise in which students generate a graphical model of some domain. Our method automatically extracts knowledge representations from a textbook and uses them to generate concept maps. The purpose of the study is to generate and evaluate these concept maps according to their accuracy, completeness, and pedagogy. |
Automatic Inference and Enforcement of Kernel Data Structure Invariants | Kernel-level rootkits affect system security by modifying key kernel data structures to achieve a variety of malicious goals. While early rootkits modified control data structures, such as the system call table and values of function pointers, recent work has demonstrated rootkits that maliciously modify non-control data. Prior techniques for rootkit detection fail to identify such rootkits either because they focus solely on detecting control data modifications or because they require elaborate, manually-supplied specifications to detect modifications of non-control data. This paper presents a novel rootkit detection technique that automatically detects rootkits that modify both control and non-control data. The key idea is to externally observe the execution of the kernel during a training period and hypothesize invariants on kernel data structures. These invariants are used as specifications of data structure integrity during an enforcement phase; violation of these invariants indicates the presence of a rootkit. We present the design and implementation of Gibraltar, a tool that uses the above approach to infer and enforce invariants. In our experiments, we found that Gibraltar can detect rootkits that modify both control and non-control data structures, and that its false positive rate and monitoring overheads are negligible. |
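The train-then-enforce loop can be sketched as follows (a simplified, Daikon-style illustration; Gibraltar's real invariant templates and kernel snapshotting are far richer, and the field names here are hypothetical):

```python
# Training phase: observe repeated snapshots of kernel data-structure
# fields and hypothesize simple invariants ("field is constant",
# "field is a member of a small set").
def infer_invariants(snapshots, max_set=8):
    values = {}
    for snap in snapshots:                     # snap: dict field -> value
        for field, v in snap.items():
            values.setdefault(field, set()).add(v)
    invariants = {}
    for field, seen in values.items():
        if len(seen) == 1:
            invariants[field] = ("const", next(iter(seen)))
        elif len(seen) <= max_set:
            invariants[field] = ("member-of", frozenset(seen))
    return invariants

# Enforcement phase: a violated invariant flags possible tampering.
def check(snapshot, invariants):
    for field, (kind, spec) in invariants.items():
        v = snapshot.get(field)
        if (kind == "const" and v != spec) or \
           (kind == "member-of" and v not in spec):
            yield field                        # possible rootkit modification
```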
Sentiment Analysis for Social Media Images | In this proposal, we study the problem of understanding human sentiments from a large-scale collection of Internet images based on both image features and contextual social network information (such as friend comments and user descriptions). Despite the great strides in analyzing user sentiment based on text information, the analysis of sentiment behind image content has largely been ignored. Thus, we extend the significant advances in text-based sentiment prediction tasks to the higher-level challenge of predicting the underlying sentiments behind images. We show that neither visual features nor textual features are by themselves sufficient for accurate sentiment labeling. Thus, we provide a way of using both of them, and formulate the sentiment prediction problem in two scenarios: supervised and unsupervised. We develop an optimization algorithm for finding a local-optima solution under the proposed framework. With experiments on two large-scale datasets, we show that the proposed method improves significantly over existing state-of-the-art methods. In the future, we plan to incorporate more information from the social network and explore sentiment on signed social networks.
Speaker identification features extraction methods: A systematic review | Speaker Identification (SI) is the process of identifying the speaker from a given utterance by comparing the voice biometrics of the utterance with those utterance models stored beforehand. SI technologies are taking a new direction due to the advances in artificial intelligence and have been used widely in various domains. Feature extraction is one of the most important aspects of SI, which significantly influences the SI process and performance. This systematic review is conducted to identify, compare, and analyze various feature extraction approaches, methods, and algorithms of SI to provide a reference on feature extraction approaches for SI applications and future studies. The review was conducted according to the Kitchenham systematic review methodology and guidelines, and provides an in-depth analysis of proposals and implementations of SI feature extraction methods discussed in the literature between 2011 and 2016. Three research questions were determined and an initial set of 535 publications was identified to answer the questions. After applying exclusion criteria, 160 related publications were shortlisted and reviewed in this paper; these papers were considered to answer the research questions. Results indicate that pure Mel-Frequency Cepstral Coefficients (MFCCs) based feature extraction approaches have been used more than any other approach. Furthermore, other MFCC variations, such as MFCC fusion and cleansing approaches, are proven to be very popular as well. This study identified that the current SI research trend is to develop a robust universal SI framework to address the important problems of SI such as adaptability, complexity, multi-lingual recognition, and noise robustness. The results presented in this research are based on past publications, citations, and number of implementations, with citations being most relevant. This paper also presents the general process of SI.
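As a concrete illustration of the dominant approach identified by the review, a minimal MFCC extraction sketch using librosa is shown below (the file path and the frame-pooling choice are placeholders, not from the review):

```python
# MFCCs plus delta features, the most common SI front-end per the review.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)          # placeholder path
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # (13, frames)
feats = np.vstack([mfcc, librosa.feature.delta(mfcc)])   # (26, frames)

# A simple utterance-level embedding for a downstream SI back-end.
embedding = feats.mean(axis=1)
```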
Process scheduling under uncertainty: Review and challenges | Uncertainty is a very important concern in production scheduling since it can cause infeasibilities and production disturbances. Thus scheduling under uncertainty has received a lot of attention in the open literature in recent years from the chemical engineering and operations research communities. The purpose of this paper is to review the main methodologies that have been developed to address the problem of uncertainty in production scheduling as well as to identify the main challenges in this area. The uncertainties in process scheduling are first analyzed, and the different mathematical approaches that exist to describe process uncertainties are classified. Based on the different descriptions for the uncertainties, alternative scheduling approaches and relevant optimization models are reviewed and discussed. Further research challenges in the field of process scheduling under uncertainty are identified and some new ideas are discussed.
Women and women of color in leadership: complexity, identity, and intersectionality. | This article describes the challenges that women and women of color face in their quest to achieve and perform in leadership roles in work settings. We discuss the barriers that women encounter and specifically address the dimensions of gender and race and their impact on leadership. We identify the factors associated with gender evaluations of leaders and the stereotypes and other challenges faced by White women and women of color. We use ideas concerning identity and the intersection of multiple identities to understand the way in which gender mediates and shapes the experience of women in the workplace. We conclude with suggestions for research and theory development that may more fully capture the complex experience of women who serve as leaders. |
Exercise training in HIV-1-infected individuals with dyslipidemia and lipodystrophy. | PURPOSE
Highly active antiretroviral therapy has improved the prognosis of human immunodeficiency virus type 1 (HIV-1)-infected individuals, but it has been associated with the development of metabolic and fat distribution abnormalities known as the lipodystrophy syndrome. This study tested the hypothesis that aerobic exercise training added to a low-lipid diet may have favorable effects in HIV-1-infected individuals with dyslipidemia and lipodystrophy.
METHODS
Thirty healthy subjects, carriers of HIV-1, with dyslipidemia and lipodystrophy, all of whom were using protease inhibitors and/or non-nucleoside reverse transcriptase inhibitors, were randomly assigned to participate in either a 12-wk program of aerobic exercise or a 12-wk stretching and relaxation program. All subjects received recommendations for a low-lipid diet. Before and after intervention, peak oxygen uptake, body composition, CD4, viral load, lipid profile, and plasma endothelin-1 levels were measured.
RESULTS
Peak oxygen uptake increased significantly in the diet and exercise group (mean +/- SD: 32 +/- 5 mL x kg(-1) x min(-1) before; 40 +/- 8 mL x kg(-1) x min(-1) after) but not in the diet only group (34 +/- 7 mL x kg(-1) x min(-1) before; 35 +/- 8 mL x kg(-1) x min(-1) after). Body weight, body fat, and waist-to-hip ratio decreased significantly and similarly in the two groups. There were no significant changes in immunologic variables in either group. Likewise, plasma triglycerides, total cholesterol, and HDL cholesterol levels did not change significantly in either group. Plasma endothelin-1 levels were elevated in both groups and presented no significant changes during the study.
CONCLUSION
HIV-seropositive individuals with lipodystrophy and dyslipidemia who undergo a short-term intervention of low-lipid diet and aerobic exercise training are able to increase their functional capacity without any consistent changes in plasma lipid levels.
Navigation System Heading and Position Accuracy Improvement through GPS and INS Data Fusion | Commercial navigation systems currently in use have reduced position and heading error but are usually quite expensive. It is proposed that an extended Kalman filter (EKF) and an unscented Kalman filter (UKF) be used in the integration of a global positioning system (GPS) with an inertial navigation system (INS). GPS and INS individually exhibit large errors, but they complement each other: fusing them through the EKF and UKF maximizes the advantage of each in calculating the heading angle and position. The proposed method was tested using a low-cost GPS receiver, a cheap electronic compass (EC), and an inertial measurement unit (IMU), and it provided accurate heading and position information, verifying the efficacy of the proposed algorithm.
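The fusion idea can be reduced to a one-dimensional sketch: the INS accelerometer drives the Kalman prediction and the GPS position drives the update. With this linear model the EKF coincides with the ordinary Kalman filter; the noise levels and time step below are illustrative assumptions, not values from the paper:

```python
# 1-D GPS/INS Kalman fusion sketch: state is [position, velocity].
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
B = np.array([[0.5 * dt**2], [dt]])     # effect of measured acceleration
H = np.array([[1.0, 0.0]])              # GPS observes position only
Q = 0.05 * np.eye(2)                    # INS process noise (assumed)
R = np.array([[4.0]])                   # GPS noise, ~2 m std dev (assumed)

def step(x, P, acc_ins, gps_pos):
    x = F @ x + B * acc_ins             # predict from INS acceleration
    P = F @ P @ F.T + Q
    y = np.array([[gps_pos]]) - H @ x   # innovation from GPS fix
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.zeros((2, 1)), np.eye(2)
x, P = step(x, P, acc_ins=0.2, gps_pos=0.5)
```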
Relationship between postoperative clopidogrel use and subsequent angiographic and clinical outcomes following coronary artery bypass grafting | Dual antiplatelet therapy with both aspirin and clopidogrel is increasingly used after coronary artery bypass grafting (CABG); however, little is known about the safety or efficacy. We sought to determine the relationship between postoperative clopidogrel and clinical and angiographic outcomes following CABG. We evaluated 3,014 patients from PREVENT IV who underwent CABG at 107 US sites. Postoperative antiplatelet therapy was left to physician discretion. Risk-adjusted angiographic and clinical outcomes were compared in patients taking and not taking clopidogrel 30 days post-CABG. At 30 days, 633 (21 %) patients were taking clopidogrel. Clopidogrel users were more likely to have peripheral vascular (15 vs. 11 %) and cerebrovascular disease (17 vs. 11 %), prior myocardial infarction (MI) (46 vs. 41 %), and off-pump surgery (33 vs. 18 %). Clopidogrel use was associated with statistically insignificant higher graft failure (adjusted odds ratio 1.3; 95 % confidence interval [CI] [1.0, 1.7]; P = 0.05). At 5-year follow-up, clopidogrel use was associated with similar composite rates of death, MI, or revascularization (27 vs. 24 %; adjusted hazard ratio 1.1; 95 % CI [0.9, 1.4]; P = 0.38) compared with those not using clopidogrel. There was an interaction between use of cardiopulmonary bypass and clopidogrel with a trend toward lower 5-year clinical events with clopidogrel in patients undergoing off-pump CABG. In this observational analysis, clopidogrel use was not associated with better 5-year outcomes following CABG. There may be better outcomes with clopidogrel among patients having off-pump surgery. Adequately powered randomized clinical trials are needed to determine the role of dual antiplatelet therapy after CABG. |
Hybrid sampling for imbalanced data | Decision tree learning in the presence of imbalanced data is an issue of great practical importance, as such data is ubiquitous in a wide variety of application domains. We propose hybrid data sampling, which combines two sampling techniques, random oversampling and random undersampling, to create a balanced dataset for use in the construction of decision tree classification models. The results demonstrate that our methodology often improves the performance of a C4.5 decision tree learner in the context of imbalanced data. |
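A minimal sketch of the hybrid sampling idea, under one plausible design (balancing every class toward the mean class size; the paper's exact target sizes are not reproduced here):

```python
import numpy as np

# One plausible hybrid design (assumption, not the paper's exact recipe):
# undersample classes above the mean class size without replacement and
# oversample classes below it with replacement, so every class ends up at
# the same target count.

def hybrid_sample(X, y, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    target = int(counts.mean())
    chosen = []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        chosen.append(rng.choice(idx, size=target, replace=(n < target)))
    chosen = np.concatenate(chosen)
    return X[chosen], y[chosen]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)       # imbalanced: 8 vs 2
Xb, yb = hybrid_sample(X, y)          # balanced: 5 of each class
```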
Transaction Fraud Detection Using GRU-centered Sandwich-structured Model | Rapid growth of modern technologies is bringing dramatically increased e-commerce payments, as well as the explosion in transaction fraud. Many data mining methods have been proposed for fraud detection. Nevertheless, there is always a contradiction that most methods are irrelevant to transaction sequence, yet sequence-related methods usually cannot learn information at single-transaction level well. In this paper, a new “within7between7within” sandwich-structured sequence learning architecture has been proposed by stacking an ensemble model, a deep sequential learning model and another top-layer ensemble classifier in proper order. Moreover, attention mechanism has also been introduced in to further improve performance. Models in this structure have been manifested to be very efficient in scenarios like fraud detection, where the information sequence is made up of vectors with complex interconnected features. |
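A rough sketch of the sandwich structure with assumed layer sizes; the paper stacks actual ensemble models around the GRU, and small feed-forward stages stand in for them here:

```python
import torch
import torch.nn as nn

# Rough sketch of the "within-between-within" stack (sizes assumed):
# a per-transaction encoder (bottom "within"), a GRU over the sequence
# (middle "between"), additive attention over time, and a top classifier
# (top "within"). The paper uses ensemble models for the outer layers;
# small feed-forward stages stand in for them here.

class SandwichModel(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.within_bottom = nn.Sequential(nn.Linear(n_features, hidden),
                                           nn.ReLU())
        self.between = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # additive attention scores
        self.within_top = nn.Linear(hidden, 2)  # fraud / legitimate

    def forward(self, x):                       # x: (batch, seq, features)
        h = self.within_bottom(x)
        out, _ = self.between(h)
        w = torch.softmax(self.attn(out), dim=1)  # weights over time steps
        ctx = (w * out).sum(dim=1)                # attention-pooled context
        return self.within_top(ctx)

model = SandwichModel(n_features=30)
logits = model(torch.randn(8, 20, 30))  # 8 sequences of 20 transactions
```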
Design and Experimental Analysis of AC Linear Generator With Halbach PM Arrays for Direct-Drive Wave Energy Conversion | To convert wave energy into more suitable forms efficiently, a single-phase permanent magnet (PM) ac linear generator directly coupled to a wave energy converter is presented in this paper. The magnetic field performance of Halbach PM arrays is compared with that of a radially magnetized structure. Then, the effects of slot geometry parameters and of the Halbach PM arrays on the electromagnetic properties of the generator are investigated, and optimization design guides are established for the key design parameters. Finally, the simulation results are compared with test results of the prototype in a wave energy conversion experimental system. Because the prototype test and theoretical analysis results agree with the finite-element analysis results, the proposed model and analysis method are validated and meet the requirements of a direct-drive wave energy conversion system. |
STRAW - An Integrated Mobility and Traffic Model for VANETs | Ad-hoc wireless communication among highly dynamic, mobile nodes in an urban network is a critical capability for a wide range of important applications including automated vehicles, real-time traffic monitoring, and battleground communication. When evaluating application performance through simulation, a realistic mobility model for vehicular ad-hoc networks (VANETs) is critical for accurate results. This technical report discusses the implementation of STRAW, a new mobility model for VANETs in which nodes move according to a realistic vehicular traffic model on roads defined by real street map data. The challenge is to create a traffic model that accounts for individual vehicle motion without incurring significant overhead relative to the cost of performing the wireless network simulation. We identify essential and optional techniques for modeling vehicular motion that can be integrated into any wireless network simulator. We then detail the choices we made in implementing STRAW. |
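For illustration, a toy car-following update of the kind a vehicular mobility model executes each simulation tick; the constants and field names are hypothetical and far simpler than STRAW's actual traffic model:

```python
from dataclasses import dataclass

# Toy car-following update (hypothetical constants and fields): each tick,
# a vehicle accelerates toward the segment speed limit but never plans to
# overrun the vehicle ahead.

@dataclass
class Vehicle:
    position: float      # meters along the road segment
    speed: float         # meters/second
    length: float = 5.0

def advance(vehicle, leader, speed_limit, dt=1.0, accel=2.0):
    v = min(vehicle.speed + accel * dt, speed_limit)
    if leader is not None:
        gap = leader.position - vehicle.position - leader.length
        v = min(v, max(0.0, gap / dt))   # respect the gap to the leader
    vehicle.speed = v
    vehicle.position += v * dt

lead = Vehicle(position=50.0, speed=10.0)
follower = Vehicle(position=30.0, speed=12.0)
advance(follower, lead, speed_limit=15.0)
```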
Full 3-D Printed Microwave Termination: A Simple and Low-Cost Solution | This paper describes the realization and characterization of microwave 3-D printed loads in rectangular waveguide technology. Several commercial materials were characterized at X-band (8-12 GHz). Their dielectric properties were extracted through the use of a cavity-perturbation method and a transmission/reflection rectangular waveguide method. A lossy carbon-loaded acrylonitrile butadiene styrene (ABS) polymer was selected to realize a matched load between 8 and 12 GHz. Two different types of terminations were realized by fused deposition modeling: a hybrid 3-D printed termination (metallic waveguide + pyramidal polymer absorber + metallic short circuit) and a full 3-D printed termination (self-consistent matched load). Voltage standing wave ratios of less than 1.075 and 1.025 were measured over X-band for the hybrid and full 3-D printed terminations, respectively. The power behavior of the full 3-D printed termination was also investigated: a very linear evolution of reflected power as a function of incident power amplitude was observed at 10 GHz up to 11.5 W. These 3-D printed devices appear to be a very low-cost solution for realizing microwave matched loads in rectangular waveguide technology. |
On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization | Conventional wisdom in deep learning states that increasing depth improves expressiveness but complicates optimization. This paper suggests that, sometimes, increasing depth can speed up optimization. The effect of depth on optimization is decoupled from expressiveness by focusing on settings where additional layers amount to overparameterization – linear neural networks, a well-studied model. Theoretical analysis, as well as experiments, show that here depth acts as a preconditioner which may accelerate convergence. Even on simple convex problems such as linear regression with $\ell_p$ loss, $p > 2$, gradient descent can benefit from transitioning to a non-convex overparameterized objective, more than it would from some common acceleration schemes. We also prove that it is mathematically impossible to obtain the acceleration effect of overparameterization via gradients of any regularizer. |
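A toy numerical illustration of the effect (the settings are ours, not the paper's experiment): gradient descent on scalar linear regression with $\ell_4$ loss, comparing the direct parameterization against the same end-to-end function written as a product of two weights:

```python
import numpy as np

# Toy illustration (our settings, not the paper's experiment): gradient
# descent on scalar linear regression with l4 loss. "Depth 1" optimizes w
# directly; "depth 2" optimizes the same end-to-end map written as w1*w2.
# The product parameterization rescales the effective gradient by
# (w1^2 + w2^2), acting as an adaptive preconditioner.

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x                            # target: w* = 3

def grad_l4(w):
    """Gradient of mean((w*x - y)**4) with respect to the end-to-end w."""
    r = w * x - y
    return np.mean(4 * r**3 * x)

lr, steps = 1e-3, 500

w = 1.0                                # depth 1
for _ in range(steps):
    w -= lr * grad_l4(w)

w1 = w2 = 1.0                          # depth 2, same end-to-end init
for _ in range(steps):
    g = grad_l4(w1 * w2)               # chain rule through the product
    w1, w2 = w1 - lr * g * w2, w2 - lr * g * w1

print(f"depth 1: w = {w:.3f}; depth 2: w1*w2 = {w1 * w2:.3f} (target 3.0)")
```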
Implementation and Experimental Results of Superposition Coding on Software Radio | Superposition coding is a well-known capacity-achieving coding scheme for stochastically degraded broadcast channels. Although well-studied in theory, it is important to understand issues that arise when implementing this scheme in a practical setting. In this paper, we present a software-radio based design of a superposition coding system on the GNU Radio platform with the Universal Software Radio Peripheral acting as the transceiver frontend. We also study the packet error performance and discuss some issues that arise in its implementation. |
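A simple symbol-level sketch of superposition coding with successive interference cancellation at the stronger receiver; the power split, modulations, and noise level are illustrative assumptions, not the parameters of the GNU Radio implementation:

```python
import numpy as np

# Symbol-level sketch with assumed parameters: a high-power BPSK layer for
# the weak user is superimposed on a low-power QPSK layer for the strong
# user, which decodes and cancels the coarse layer before decoding its own.

rng = np.random.default_rng(1)
n = 10_000
alpha = 0.8                                    # power share of the weak user

bits_weak = rng.integers(0, 2, n)
bits_strong = rng.integers(0, 2, (n, 2))

coarse = np.sqrt(alpha) * (2 * bits_weak - 1)               # BPSK layer
fine = np.sqrt((1 - alpha) / 2) * ((2 * bits_strong[:, 0] - 1)
                                   + 1j * (2 * bits_strong[:, 1] - 1))
tx = coarse + fine                                          # superposition

rx = tx + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Strong user: detect the coarse layer, subtract it, decode its own bits.
coarse_hat = np.sqrt(alpha) * np.sign(rx.real)
residual = rx - coarse_hat
strong_hat = np.stack([residual.real > 0, residual.imag > 0], axis=1)
print("strong-user BER:", np.mean(strong_hat != bits_strong.astype(bool)))
```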
Preferences for Pink and Blue: The Development of Color Preferences as a Distinct Gender-Typed Behavior in Toddlers. | Many gender differences are thought to result from interactions between inborn factors and sociocognitive processes that occur after birth. There is controversy, however, over the causes of gender-typed preferences for the colors pink and blue, with some viewing these preferences as arising solely from sociocognitive processes of gender development. We evaluated preferences for gender-typed colors, and compared them to gender-typed toy and activity preferences, in 126 toddlers on two occasions separated by 6-8 months (at Time 1, M = 29 months; range 20-40). Color preferences were assessed using color cards and neutral toys in gender-typed colors. Gender-typed toy and activity preferences were assessed using a parent-report questionnaire, the Preschool Activities Inventory. Color preferences were also assessed for the toddlers' parents using color cards. A gender difference in color preferences was present between 2 and 3 years of age and strengthened near the third birthday, at which time it was large (d > 1). In contrast to their parents', toddlers' gender-typed color preferences were stronger but less stable. Gender-typed color preferences also appeared to emerge later and were less stable than gender-typed toy and activity preferences. Gender-typed color preferences were largely uncorrelated with gender-typed toy and activity preferences. These results suggest that the factors influencing gender-typed color preferences and gender-typed toy and activity preferences differ in some respects. Our findings suggest that sociocognitive influences and play with gender-typed toys that happen to be made in gender-typed colors contribute to toddlers' gender-typed color preferences. |
Treating a maxillary midline diastema in adult patients: a general dentist's perspective. | BACKGROUND
A maxillary midline diastema (MMD) often is a primary concern of patients during a dental consultation. Although an MMD can be transient owing to the developing dentition and, thus, requires no active treatment, management of MMDs in the permanent dentition requires a detailed examination and appropriate care.
CASE DESCRIPTION
The authors present five cases of MMDs in adults to illustrate a range of restorative and orthodontic options. In the first case, the clinician used resin-based composite buildup to close an MMD resulting from small teeth and generalized spacing in the dental arch. In the second case, the clinician placed porcelain veneers to treat an MMD in a patient with discolored dentition. In the third case, the clinician fitted a removable appliance to close an MMD by tipping the incisors palatally. In the fourth case, the clinician fitted a sectional fixed appliance to promote mesial bodily movement of the incisors. In the fifth case, the clinician placed a full-arch fixed appliance to treat an MMD caused by tilted incisors.
CONCLUSIONS AND CLINICAL IMPLICATIONS
Effective treatment requires an accurate diagnosis and appropriate intervention. General dentists can perform a range of restorative and orthodontic treatments in appropriate clinical situations to address patients' concerns. |
Electricity Generation Using an Air-Cathode Single Chamber Microbial Fuel Cell in the Presence and Absence of a Proton Exchange Membrane | Microbial fuel cells (MFCs) are typically designed as a two-chamber system with the bacteria in the anode chamber separated from the cathode chamber by a polymeric proton exchange membrane (PEM). Most MFCs use aqueous cathodes where water is bubbled with air to provide dissolved oxygen to the electrode. To increase energy output and reduce the cost of MFCs, we examined power generation in an air-cathode MFC containing carbon electrodes in the presence and absence of a polymeric PEM. Bacteria present in domestic wastewater were used as the biocatalyst, and glucose and wastewater were tested as substrates. Power density was found to be much greater than typically reported for aqueous-cathode MFCs, reaching a maximum of 262 ± 10 mW/m2 (6.6 ± 0.3 mW/L; liquid volume) using glucose. Removing the PEM increased the maximum power density to 494 ± 21 mW/m2 (12.5 ± 0.5 mW/L). Coulombic efficiency was 40-55% with the PEM and 9-12% with the PEM removed, indicating substantial oxygen diffusion into the anode chamber in the absence of the PEM. Power output increased with glucose concentration according to saturation-type kinetics, with a half-saturation constant of 79 mg/L with the PEM-MFC and 103 mg/L in the MFC without a PEM (1000 Ω resistor). Similar results on the effect of the PEM on power density were found using wastewater, where 28 ± 3 mW/m2 (0.7 ± 0.1 mW/L) (28% Coulombic efficiency) was produced with the PEM, and 146 ± 8 mW/m2 (3.7 ± 0.2 mW/L) (20% Coulombic efficiency) was produced when the PEM was removed. The increase in power output when the PEM was removed was attributed to a higher cathode potential, as shown by an increase in the open circuit potential. An analysis based on available anode surface area and maximum bacterial growth rates suggests that mediatorless MFCs may have an upper order-of-magnitude limit in power density of 10^3 mW/m2. A cost-effective approach to achieving power densities in this range will likely require systems that do not contain a polymeric PEM in the MFC and systems based on direct oxygen transfer to a carbon cathode. Introduction: Bacteria can be used to catalyze the conversion of organic matter into electricity (1-7). Fuel cells that use bacteria are loosely classified here as two different types: biofuel cells that generate electricity from the addition of artificial electron shuttles (mediators) (8-12) and microbial fuel cells (MFCs) that do not require the addition of a mediator (5, 7, 13-15). It has recently been shown that certain metal-reducing bacteria, belonging primarily to the family Geobacteraceae, can directly transfer electrons to electrodes using electrochemically active redox enzymes such as cytochromes on their outer membrane (16-18). These so-called mediatorless MFCs are considered to have more commercial application potential than biofuel cells because the mediators used in biofuel cells are expensive and toxic to the microorganisms (7). MFCs typically produce power at a density of less than 50 mW/m2 (normalized to anode projected surface area) (7, 13, 19). In an MFC, two electrodes (anode and cathode) are each placed in water in two chambers joined by a proton exchange membrane (PEM). The main disadvantage of a two-chamber MFC is that the solution cathode must be aerated to provide oxygen to the cathode.
It is known that the power output of an MFC can be improved by increasing the efficiency of the cathode. For example, power is increased by adding ferricyanide (20) to the cathode chamber. It is possible, however, to design an MFC that does not require that the cathode be placed in water. In hydrogen fuel cells, the cathode is bonded directly to the PEM so that oxygen in the air can directly react at the electrode (21). This technique was successfully used to produce electricity from wastewater in a single chamber MFC by Liu et al. (15). Park and Zeikus produced a maximum of 788 mW/m2 using a unique system with a Mn4+ graphite anode and a direct-air Fe3+ graphite cathode (20). Because the power output of MFCs is low relative to other types of fuel cells, reducing their cost is essential if power generation using this technology is to be an economical method of energy production. Most studies have used relatively expensive solid graphite electrodes (7, 19), but graphite-felt (14) and carbon cloth (15) can also be used. The use of air-driven cathodes can reduce MFC costs because passive oxygen transfer to the cathode using air does not require energy-intensive air sparging of the water. Finally, PEMs such as Nafion are quite expensive. We wondered whether this material was essential for power production in an MFC. We therefore designed and constructed a carbon-cloth, air-cathode fuel cell to try to increase power density to levels not previously achieved with aqueous cathode systems. To study the effect of the PEM on power production, we compared power density for glucose and wastewater feeds with the system in the presence and absence of a polymeric PEM. |
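The saturation-type kinetics reported above are conventionally written as a Monod-style expression; a sketch of the fitted form, with symbols as defined here rather than notation taken from the paper:

```latex
% Monod-type saturation kinetics: power P rises with substrate
% concentration S and levels off at P_max; K_s is the half-saturation
% constant (79 mg/L with the PEM, 103 mg/L without, per the study).
P(S) = \frac{P_{\max}\, S}{K_s + S}
```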
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data | Curb detection is an essential component of Autonomous Land Vehicles (ALV), and especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns derived from the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain to model the consistency of curb points, exploiting the curb's continuity, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs, and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method detects curbs robustly and at real-time speed for both static and dynamic scenes. |
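A compact sketch of the dynamic-programming step, under an assumed formulation (unary detection scores per candidate plus a horizontal-jump penalty; the paper's exact Markov-chain potentials may differ):

```python
import numpy as np

# Assumed DP formulation: each image row r has candidate curb columns
# cols[r] with unary detection scores scores[r]; a penalty on horizontal
# jumps between consecutive rows encodes the curb's continuity prior.

def best_curb_path(cols, scores, jump_penalty=0.5):
    dp = [np.asarray(scores[0], dtype=float)]
    back = []
    for r in range(1, len(cols)):
        # jump[i, j]: cost of moving from candidate j in row r-1 to i in row r
        jump = jump_penalty * np.abs(np.subtract.outer(cols[r], cols[r - 1]))
        total = dp[-1][None, :] - jump
        back.append(total.argmax(axis=1))
        dp.append(np.asarray(scores[r], dtype=float) + total.max(axis=1))
    # Backtrack the highest-scoring row-by-row path.
    path = [int(dp[-1].argmax())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    path.reverse()
    return [cols[r][i] for r, i in enumerate(path)]

curb = best_curb_path([[10, 40], [12, 38], [14, 36]],
                      [[0.9, 0.2], [0.8, 0.3], [0.7, 0.4]])
```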
Filter Before You Parse: Faster Analytics on Raw Data with Sparser | Exploratory big data applications often run on raw unstructured or semi-structured data formats, such as JSON files or text logs. These applications can spend 80–90% of their execution time parsing the data. In this paper, we propose a new approach for reducing this overhead: apply filters on the data’s raw bytestream before parsing. This technique, which we call raw filtering, leverages the features of modern hardware and the high selectivity of queries found in many exploratory applications. With raw filtering, a user-specified query predicate is compiled into a set of filtering primitives called raw filters (RFs). RFs are fast, SIMD-based operators that occasionally yield false positives, but never false negatives. We combine multiple RFs into an RF cascade to decrease the false positive rate and maximize parsing throughput. Because the best RF cascade is data-dependent, we propose an optimizer that dynamically selects the combination of RFs with the best expected throughput, achieving within 10% of the global optimum cascade while adding less than 1.2% overhead. We implement these techniques in a system called Sparser, which automatically manages a parsing cascade given a data stream in a supported format (e.g., JSON, Avro, Parquet) and a user query. We show that many real-world applications are highly selective and benefit from Sparser. Across diverse workloads, Sparser accelerates state-of-the-art parsers such as Mison by up to 22× and improves end-to-end application performance by up to 9×. |
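The core idea can be sketched in a few lines, using a scalar stand-in for Sparser's SIMD raw filters; the function name and example records are hypothetical:

```python
import json

# Illustrative scalar stand-in for a raw filter: check for a required byte
# substring in the unparsed record and only run the full JSON parser on
# records that pass. False positives are removed by re-checking the
# predicate after parsing; false negatives cannot occur, since any record
# satisfying the predicate must contain the substring.

def filtered_parse(lines, needle, predicate):
    for raw in lines:
        if needle not in raw:        # cheap raw filter rejects most records
            continue
        rec = json.loads(raw)        # expensive parse, now rare
        if predicate(rec):           # drop raw-filter false positives
            yield rec

# Hypothetical example records and query.
lines = ['{"user": "pvldb", "text": "Sparser is fast"}',
         '{"user": "alice", "text": "unrelated"}']
hits = list(filtered_parse(lines, "Sparser", lambda r: r["user"] == "pvldb"))
print(hits)
```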
Parental choice of primary school in England: what 'type' of school do parents choose? | We investigate the central premise of the theory of markets in education, namely that parents value academic standards. We ask what parents really want from schools and whether different types of parents have similar preferences. We examine parents’ stated preferences and revealed preferences for schools (their actual choice of school as opposed to what they say they value in a school). More educated and higher socio-economic status (SES) parents are more likely to cite academic standards, whilst less educated and lower SES parents are more likely to cite proximity. More advantaged parents choose better performing schools, particularly in areas with many schools and therefore a lot of potential school choice. More advantaged parents also choose schools with much lower proportions of pupils eligible for free school meals, relative to other schools available to them. Hence, whilst parents do not admit to choosing schools on the basis of their social composition, this happens in practice. Most parents (94%) get their first choice of school, and this holds both for more and less advantaged parents, though partly because poorer parents make more ‘realistic’, i.e. less ambitious, choices. If, in areas where there is a lot of potential competition between schools, more advantaged families have a higher chance of achieving their more ambitious choices than do poorer parents, this could tend to exacerbate social segregation in our schools. |
Full-text citation analysis: A new method to enhance scholarly networks | In this article, we use innovative full-text citation analysis along with supervised topic modeling and network-analysis algorithms to enhance classical bibliometric analysis and publication/author/venue ranking. By utilizing citation contexts extracted from a large number of full-text publications, each citation or publication is represented by a probability distribution over a set of predefined topics, where each topic is labeled by an author-contributed keyword. We then use publication/citation topic distributions to generate a citation graph with vertex prior and edge transition probability distributions. The publication importance score for each given topic is calculated by PageRank with edge and vertex prior distributions. To evaluate this work, we sampled 104 topics (labeled with keywords) in review papers. The cited publications of each review paper are assumed to be “important publications” for the target topic (keyword), and we use these cited publications to validate our topic-ranking results and to compare different publication-ranking lists. Evaluation results show that full-text citation and publication content prior topic distributions, along with the classical PageRank algorithm, can significantly enhance bibliometric analysis and scientific publication ranking performance, compared with term frequency–inverted document frequency (tf–idf), language model, BM25, PageRank, and PageRank + language model (p < .001), for academic information retrieval (IR) systems. |
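A minimal sketch of PageRank with a vertex prior, using assumed notation (row-stochastic transition matrix M, prior p, damping d); the paper's full model additionally conditions the transitions on topics:

```python
import numpy as np

# Personalized PageRank with a vertex prior: iterate
# r = d * M^T r + (1 - d) * p to a fixed point, where the prior p plays
# the role of a topic-specific teleport distribution.

def topic_pagerank(M, prior, d=0.85, tol=1e-10, max_iter=200):
    r = prior.copy()
    for _ in range(max_iter):
        r_new = d * (M.T @ r) + (1 - d) * prior
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Tiny 3-publication example with a hypothetical topic prior.
M = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])   # row-stochastic citation transitions
prior = np.array([0.5, 0.3, 0.2])
print(topic_pagerank(M, prior))
```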
A Method for Focused Crawling Using Combination of Link Structure and Content Similarity | The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. A focused crawler aims to selectively seek out pages that are relevant to a pre-defined set of topics. Besides specifying topics by keywords, it is customary to use exemplary documents to compute the similarity of a given Web document to the topic. In this paper we introduce a new hybrid focused crawler, which uses the link structure of documents as well as the similarity of pages to the topic to crawl the Web. |
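A sketch of one plausible priority function combining the two signals; the weight alpha and the link-structure score are assumptions, not the paper's exact formula:

```python
import math
from collections import Counter

# One plausible frontier-priority function (our assumptions): combine
# cosine similarity to the topic's exemplary documents with the fraction
# of a page's in-links that come from already-relevant pages.

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def crawl_priority(page_terms, topic_terms, relevant_inlinks,
                   total_inlinks, alpha=0.7):
    content = cosine(Counter(page_terms), Counter(topic_terms))
    link = relevant_inlinks / total_inlinks if total_inlinks else 0.0
    return alpha * content + (1 - alpha) * link

score = crawl_priority(["web", "crawler", "topic"],
                       ["focused", "crawler", "topic"], 3, 4)
```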
Specialist intervention is associated with improved patient outcomes in patients with decompensated heart failure: evaluation of the impact of a multidisciplinary inpatient heart failure team | OBJECTIVE
The study aimed to evaluate the impact of a multidisciplinary inpatient heart failure team (HFT) on treatment, hospital readmissions and mortality of patients with decompensated heart failure (HF).
METHODS
A retrospective service evaluation was undertaken in a UK tertiary centre university hospital comparing 196 patients admitted with HF in the 6 months prior to the introduction of the HFT (pre-HFT) with all 211 patients seen by the HFT (post-HFT) during its first operational year.
RESULTS
There were no significant differences in patient baseline characteristics between the groups. Inpatient mortality (22% pre-HFT vs 6% post-HFT; p<0.0001) and 1-year mortality (43% pre-HFT vs 27% post-HFT; p=0.001) were significantly lower in the post-HFT cohort. Post-HFT patients were significantly more likely to be discharged on loop diuretics (84% vs 98%; p<0.0001), ACE inhibitors (65% vs 76%; p=0.02), ACE inhibitors and/or angiotensin receptor blockers (83% vs 91%; p=0.02), and mineralocorticoid receptor antagonists (44% vs 68%; p<0.0001) pre-HFT versus post-HFT, respectively. There was no difference in discharge prescription rates of beta-blockers (59% pre-HFT vs 63% post-HFT; p=0.45). The mean length of stay (17±19 days pre-HFT vs 19±18 days post-HFT; p=0.06), 1-year all-cause readmission rates (46% pre-HFT vs 47% post-HFT; p=0.82) and HF readmission rates (28% pre-HFT vs 20% post-HFT; p=0.09) were not different between the groups.
CONCLUSIONS
The introduction of a specialist inpatient HFT was associated with improved patient outcomes: inpatient and 1-year mortality were significantly reduced. Improved use of evidence-based drug therapies, more intensive diuretic use, and multidisciplinary care may contribute to these differences in outcome. |
Enabling Mitochondrial Uptake of Lipophilic Dications Using Methylated Triphenylphosphonium Moieties. | Triphenylphosphonium (TPP+) species comprising multiple charges, i.e., bis-TPP+, are predicted to be superior mitochondria-targeting vectors and are expected to have mitochondrial accumulations 1000-fold greater than TPP+, the current "gold standard". However, bis-TPP+ vectors linked by short hydrocarbon chains (n < 5) cannot be taken up by mitochondria, hindering their development as mitochondrial delivery vectors. Through the incorporation of methylated TPP+ moieties (T*PP+), we successfully enabled the accumulation of bis-TPP+ with a short linker chain in isolated mitochondria, as measured by high-performance liquid chromatography. These experimental results are further supported by molecular dynamics and ab initio calculations, revealing strong correlations between mitochondrial uptake and molecular volume, surface area, and chemical hardness. Most notably, molecular volume has been shown to be a strong predictor of accumulation for both mono- and bis-TPP+ salts. Our study underscores the potential of T*PP+ moieties as alternative mitochondrial vectors to overcome low permeation into the mitochondria. |
Template Driven Performance Modeling of Enterprise Java Beans | System designers find it difficult to obtain insight into the potential performance, and performance problems, of enterprise applications based on component technologies like Enterprise Java Beans (EJBs) or .NET. One problem is the presence of layered resources, which have complicated effects on bottlenecks. Layered queueing network (LQN) performance models are able to capture these effects, and have a modular structure close to that of the system. This work describes templates for EJB components that can be instantiated from the platform-independent description of an application, and composed in a component-based LQN. It describes the process of instantiation, and the interpretation of the model predictions. |
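To make the template idea concrete, here is a hypothetical in-memory representation of instantiating an EJB session-bean template into an LQN-like task; all field names are illustrative and do not follow any particular LQN tool's schema:

```python
from dataclasses import dataclass, field

# Hypothetical representation of an LQN task template for an EJB component,
# instantiated with platform-specific parameters (service demands, thread
# pool size) so it can be composed into a layered model.

@dataclass
class Entry:
    name: str
    cpu_demand_ms: float                       # host CPU demand per call
    calls: dict = field(default_factory=dict)  # callee entry -> mean calls

@dataclass
class Task:
    name: str
    threads: int          # multiplicity, e.g. the container's thread pool
    entries: list

def session_bean_template(name, demand_ms, db_calls):
    """Instantiate a session-bean task that calls a database entry."""
    e = Entry(f"{name}.invoke", demand_ms, {"DB.query": db_calls})
    return Task(name, threads=10, entries=[e])

order_service = session_bean_template("OrderService", 1.5, db_calls=2.0)
```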