Where is the evidence in our sport psychology practice? A United Kingdom perspective on the underpinnings of action.
Practitioners regard engagement in evidence-based practice as a foremost issue in the provision of applied sport psychology. Accordingly, the present study sought to contextualize the theory–research–practice process. Specifically, 4 attentional-based techniques established within the sport psychology literature were depicted as applied scenarios and presented as a survey task. Experienced United Kingdom–based practitioners (n = 14) and individuals currently undergoing training (n = 14) were recruited to ascertain their theoretical and mechanistic knowledge and whether the techniques were being used in the applied environment. Results suggested that application of the techniques, in addition to theoretical and mechanistic knowledge, may decrease from trainee to experienced practitioner. The study highlights the need for more research that is designed to be effective in the applied setting and that addresses the needs of sport psychology practitioners if our discipline is to advance and remain evidence based.
Plasma homocysteine concentration in children with chronic renal failure
Hyperhomocysteinemia, a risk factor for vascular disease, is commonly found in adult patients with end-stage renal disease. Major determinants of elevated plasma homocysteine levels in these patients include deficiencies in folate and vitamin B12, methylenetetrahydrofolate reductase (MTHFR) genotype and renal function. Little information is available for children with chronic renal failure (CRF). The prevalence and the factors that affect plasma homocysteine concentration were determined in children. Twenty-nine children with various degrees of CRF (15 were dialyzed, 14 were not dialyzed) were compared with 57 age- and sex-matched healthy children. Homocysteine concentrations were higher in patients than controls (17.3 µmol/l vs 6.8 µmol/l, P<0.0001) and hyperhomocysteinemia (>95th percentile for controls: 14.0 µmol/l) was seen in 62.0% of patients and 5.2% of controls. Folate concentrations were lower in patients (9.9 nmol/l) than controls (13.5 nmol/l), P<0.01. Vitamin B12 was similar in patients (322 pmol/l) and controls (284 pmol/l). Dialyzed patients had a higher prevalence of hyperhomocysteinemia than nondialyzed patients (87% vs 35%). Dialyzed patients with the MTHFR mutation had higher plasma homocysteine (28.5 µmol/l) than nondialyzed patients with the mutation (10.7 µmol/l), P<0.002. In our study, differences between controls and patients in plasma homocysteine concentrations were observed when age was greater than 92 months, folate less than 21.6 nmol/l and vitamin B12 less than 522 pmol/l. Our study shows that hyperhomocysteinemia is common in children with CRF and is associated with low folate and normal vitamin B12 status, compared to normal children. Among the patients, the dialyzed patients with the MTHFR mutation are particularly at risk for hyperhomocysteinemia. Further studies are needed to investigate therapeutic interventions and the potential link with vascular complications in these patients.
Gossip: Automatically Identifying Malicious Domains from Mailing List Discussions
Domain names play a critical role in cybercrime, because they identify hosts that serve malicious content (such as malware, Trojan binaries, or malicious scripts), operate as command-and-control servers, or carry out some other role in the malicious network infrastructure. To defend against Internet attacks and scams, operators widely use blacklisting to detect and block malicious domain names and IP addresses. Existing blacklists are typically generated by crawling suspicious domains, manually or automatically analyzing malware, and collecting information from honeypots and intrusion detection systems. Unfortunately, such blacklists are difficult to maintain and are often slow to respond to new attacks. Security experts set up and join mailing lists to discuss and share intelligence information, which provides a better chance to identify emerging malicious activities. In this paper, we design Gossip, a novel approach to automatically detect malicious domains based on the analysis of discussions in technical mailing lists (particularly on security-related topics) by using natural language processing and machine learning techniques. We identify a set of effective features extracted from email threads, users participating in the discussions, and content keywords, to infer malicious domains from mailing lists, without the need to actually crawl the suspect websites. Our result shows that Gossip achieves high detection accuracy. Moreover, the detection from our system is often days or weeks earlier than existing public blacklists.
Compact thermal model of a three-phase IGBT inverter power module
This paper describes a compact thermal model of a three-phase IGBT inverter power module of the kind utilised in most variable-speed drives. The compact thermal model is equivalent to an electrical RC network assembled from thermal resistances and thermal capacitances, so it can be easily implemented in a circuit simulator. A transient thermal 3D finite element (FE) model of the IGBT module was built using the commercially available FLOTHERM software; the 3D simulation results are then used to extract the compact thermal network parameters of the IGBT power module. Good agreement has been achieved between simulation and experimental measurement.
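To make the RC-network idea concrete, here is a minimal sketch of a Foster-type thermal network's step response, the kind of compact model the abstract describes. The stage count and all R/C values are illustrative assumptions, not the parameters extracted from the FLOTHERM simulations.

```python
import numpy as np

# Foster-type RC thermal network: junction temperature rise is a sum of
# first-order stages. R/C values below are illustrative placeholders.
R = np.array([0.05, 0.12, 0.30])   # thermal resistances (K/W)
C = np.array([0.002, 0.05, 1.2])   # thermal capacitances (J/K)
tau = R * C                        # per-stage time constants (s)

def junction_temp_rise(t, power):
    """Temperature rise (K) at time t (s) after a constant power step (W)."""
    return power * np.sum(R * (1.0 - np.exp(-t / tau)))

# Example: rise 10 ms after a 50 W step in device dissipation.
print(junction_temp_rise(0.01, 50.0))
```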
Evaluation of cursive and non-cursive scripts using recurrent neural networks
Character recognition has been widely used since its inception in applications involving the processing of scanned or camera-captured documents. Languages are written in multiple scripts, which can broadly be divided into cursive and non-cursive scripts. Recurrent neural networks have been shown to obtain state-of-the-art results for optical character recognition. We present a thorough investigation of the performance of the recurrent neural network (RNN) for cursive and non-cursive scripts. We employ bidirectional long short-term memory (BLSTM) networks, a variant of the standard RNN. The output layer of the architecture used in our investigation is a special layer called connectionist temporal classification (CTC), which performs sequence alignment: the CTC layer takes the LSTM activations as input and aligns the target labels with the inputs. The results obtained at the character level for both cursive Urdu and non-cursive English scripts are significant and suggest that the BLSTM technique is potentially more useful than existing OCR algorithms.
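As a concrete illustration of the BLSTM-plus-CTC arrangement described above, the following PyTorch sketch wires LSTM activations into a linear projection and a CTC loss. The feature width, hidden size and alphabet size are illustrative assumptions, not the settings used in the paper.

```python
import torch
import torch.nn as nn

class BLSTMRecognizer(nn.Module):
    """Bidirectional LSTM over frame features, projected to class scores."""
    def __init__(self, n_features=48, n_hidden=128, n_classes=81):
        super().__init__()
        self.blstm = nn.LSTM(n_features, n_hidden, bidirectional=True)
        self.proj = nn.Linear(2 * n_hidden, n_classes)  # both directions

    def forward(self, x):                 # x: (frames, batch, n_features)
        h, _ = self.blstm(x)
        return self.proj(h).log_softmax(-1)

model = BLSTMRecognizer()
ctc = nn.CTCLoss(blank=0)                 # CTC aligns labels with frames
x = torch.randn(100, 4, 48)               # 100 frames, batch of 4 text lines
targets = torch.randint(1, 81, (4, 20))   # 20 target characters per line
loss = ctc(model(x), targets,
           input_lengths=torch.full((4,), 100),
           target_lengths=torch.full((4,), 20))
loss.backward()
```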
Ketamine Inhibits Ultrasound Stimulation-Induced Neuromodulation by Blocking Cortical Neuron Activity.
Ultrasound (US) can be used to noninvasively stimulate brain activity. However, reproducible motor responses evoked by US are only elicited when the animal is in a light state of anesthesia. The present study investigated the effects of ketamine on US-induced motor responses and cortical neuronal activity. US was applied to the motor cortex of mice, and motor responses were evaluated based on robustness scores. Cortical neuronal activity was observed by fluorescence calcium imaging. US-induced motor responses were inhibited more than 20 min after ketamine injection, and US-triggered Ca2+ transients in cortical neurons were effectively blocked by ketamine. Our results indicate that ketamine suppresses US-triggered Ca2+ transients in cortical neurons and, therefore, inhibits US-induced motor responses in a deep anesthetic state.
Preventing drive-by download via inter-module communication monitoring
The drive-by download attack is one of the most severe threats to Internet users. Typically, merely visiting a malicious page results in compromise of the client and infection with malware. By the end of 2008, drive-by download had already become the number one infection vector of malware [5]. The downloaded malware may steal users' personal identification and passwords. It may also join a botnet to send spam, host phishing sites, or launch distributed denial-of-service attacks. Generally, these attacks rely on successful exploits of vulnerabilities in web browsers or their plug-ins. We therefore propose a technique based on inter-module communication monitoring to detect malicious exploitation of vulnerable components, thus preventing the vulnerabilities from being exploited. We have implemented a prototype system integrated into the most popular web browser, Microsoft Internet Explorer. Experimental results demonstrate that, on our test set, by using vulnerability-based signatures, our system accurately detected all attacks targeting vulnerabilities in our definitions and produced no false positives. The evaluation also shows that the performance penalty is kept low.
The growing use of herbal medicines: issues relating to adverse reactions and challenges in monitoring safety
The use of herbal medicinal products and supplements has increased tremendously over the past three decades, with no less than 80% of people worldwide relying on them for some part of primary healthcare. Although therapies involving these agents have shown promising potential, with the efficacy of a good number of herbal products clearly established, many of them remain untested and their use is either poorly monitored or not monitored at all. The consequence of this is an inadequate knowledge of their mode of action, potential adverse reactions, contraindications, and interactions with existing orthodox pharmaceuticals and functional foods, knowledge that is needed to promote both safe and rational use of these agents. Since safety continues to be a major issue with the use of herbal remedies, it becomes imperative that relevant regulatory authorities put in place appropriate measures to protect public health by ensuring that all herbal medicines are safe and of suitable quality. This review discusses toxicity-related issues and major safety concerns arising from the use of herbal medicinal products and also highlights some important challenges associated with effective monitoring of their safety.
A new track for unifying general relativity with quantum field theories
In the perspective of unifying quantum field theories with general relativity, the equations of the internal dynamics of the vacuum and mass structures of a set of interacting particles are proved to be in one-to-one correspondence with the equations of general relativity. This leads us to envisage a high value for the cosmological constant, as expected theoretically.
Maximizing Speedup through Self-Tuning of Processor Allocation
We address the problem of maximizing application speedup through runtime, self-selection of an appropriate number of processors on which to run. Automatic, runtime selection of processor allocations is important because many parallel applications exhibit peak speedups at allocations that are data or time dependent. We propose the use of a runtime system that: (a) dynamically measures job efficiencies at different allocations, (b) uses these measurements to calculate speedups, and (c) automatically adjusts a job’s processor allocation to maximize its speedup. Using a set of 10 applications that includes both hand-coded parallel programs and compiler-parallelized sequential programs, we show that our runtime system can reliably determine dynamic allocations that match the best possible static allocation, and that it has the potential to find dynamic allocations that outperform any static allocation.
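A minimal sketch of the self-tuning loop the abstract outlines: run an interval, measure efficiency, convert it to speedup, and hill-climb on the allocation. The hooks `run_interval` and `measure_efficiency` are hypothetical stand-ins for the runtime system's interface, not the paper's actual API.

```python
def self_tune(run_interval, measure_efficiency, max_procs):
    """Grow the allocation while measured speedup keeps improving."""
    p, best_speedup = 1, 0.0
    while p <= max_procs:
        run_interval(procs=p)            # execute a slice of the job
        eff = measure_efficiency()       # fraction of ideal work rate
        speedup = eff * p                # speedup = efficiency * processors
        if speedup <= best_speedup:      # past the peak: step back and stop
            return max(1, p - 1)
        best_speedup = speedup
        p += 1                           # try a larger allocation
    return max_procs
```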
The logic of quantum mechanics - Take II
We put forward a new take on the logic of quantum mechanics, following Schroedinger's point of view that it is composition which makes quantum theory what it is, rather than its particular propositional structure due to the existence of superpositions, as proposed by Birkhoff and von Neumann. This gives rise to an intrinsically quantitative kind of logic, which truly deserves the name `logic' in that it also models meaning in natural language, the latter being the origin of logic, that it supports automation, the most prominent practical use of logic, and that it supports probabilistic inference.
Estimating the Success of Unsupervised Image to Image Translation
While in supervised learning, the validation error is an unbiased estimator of the generalization (test) error and complexity-based generalization bounds are abundant, no such bounds exist for learning a mapping in an unsupervised way. As a result, when training GANs and specifically when using GANs for learning to map between domains in a completely unsupervised way, one is forced to select the hyperparameters and the stopping epoch by subjectively examining multiple options. We propose a novel bound for predicting the success of unsupervised cross domain mapping methods, which is motivated by the recently proposed Simplicity Principle. The bound can be applied both in expectation, for comparing hyperparameters and for selecting a stopping criterion, or per sample, in order to predict the success of a specific cross-domain translation. The utility of the bound is demonstrated in an extensive set of experiments employing multiple recent algorithms. Our code is available at https://github.com/sagiebenaim/gan_bound.
Performance validation of NS3-LTE emulation for live video streaming under QoS parameters
Currently, 4G mobile communication systems are supported by the 3GPP standard. In view of the significant increase in mobile data traffic, it is necessary to characterize this traffic to improve the performance of current wireless networks. Indeed, video transmission and video streaming are fundamental assets for the upcoming smart cities and urban environments. Due to the high costs of deploying a real LTE system, emulation systems that consider real operating conditions emerge as a successful alternative. On the other hand, many LTE simulation and emulation studies present no information on basic tuning parameters, such as the propagation model, nor any validation of the results against real conditions. This paper presents the validation, using an ANOVA statistical analysis, of an LTE emulation system developed in NS-3 for the live video streaming service. For the validation, different QoS parameters and real conditions have been used, and two protocols, namely RTMP and RTSP, have been tested. It is demonstrated that the emulation scenario is appropriate for characterizing the traffic, which will later allow a proper performance analysis of the service and technology under study.
Unsupervised classification of music genre using hidden Markov model
Music genre classification can be of great utility to musical database management. Most current classification methods are supervised and tend to be based on contrived taxonomies. However, due to the ambiguities and inconsistencies in the chosen taxonomies, these methods are not applicable to much larger databases. We propose an unsupervised clustering method based on a measure of similarity provided by hidden Markov models. In addition, in order to better characterize music content, a novel segmentation scheme is proposed, based on analysis of the music's intrinsic rhythmic structure, and features are extracted from these segments. According to experimental results, this feature segmentation scheme performs better than the traditional fixed-length method. Our preliminary results also suggest that the proposed method is comparable to supervised classification methods.
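To illustrate the HMM-based similarity underlying the clustering, the sketch below fits one Gaussian HMM per piece and uses length-normalised cross log-likelihoods as a symmetrised similarity matrix. It assumes the `hmmlearn` package and that feature extraction over the rhythm-based segments happens upstream; the state count is an illustrative choice, not the paper's configuration.

```python
import numpy as np
from hmmlearn import hmm

def similarity_matrix(pieces, n_states=4):
    """pieces: list of (n_frames, n_features) arrays, one per piece."""
    models = [hmm.GaussianHMM(n_components=n_states).fit(x) for x in pieces]
    n = len(pieces)
    s = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # log-likelihood of piece j under the model trained on piece i
            s[i, j] = models[i].score(pieces[j]) / len(pieces[j])
    return (s + s.T) / 2.0   # symmetrise before feeding a clusterer
```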
Plant regeneration: cellular origins and molecular mechanisms.
Compared with animals, plants generally possess a high degree of developmental plasticity and display various types of tissue or organ regeneration. This regenerative capacity can be enhanced by exogenously supplied plant hormones in vitro, wherein the balance between auxin and cytokinin determines the developmental fate of regenerating organs. Accumulating evidence suggests that some forms of plant regeneration involve reprogramming of differentiated somatic cells, whereas others are induced through the activation of relatively undifferentiated cells in somatic tissues. We summarize the current understanding of how plants control various types of regeneration and discuss how developmental and environmental constraints influence these regulatory mechanisms.
Efficacy of parecoxib, sumatriptan, and rizatriptan in the treatment of acute migraine attacks.
Triptans and analgesic nonsteroidal anti-inflammatory drugs reduce acute pain syndromes in migraine. A further treatment option for an acute headache attack in patients with migraine may be the application of cyclooxygenase-2-specific inhibitors, as they have anti-inflammatory and analgesic properties. The objective of this pilot study was to investigate the effects of an oral fast-dissolving tablet of 10 mg of rizatriptan, an intravenous infusion of 40 mg of parecoxib, and a subcutaneous pen injection of sumatriptan (6 mg/0.5 mL) on pain relief in 3 cohorts of patients with episodic migraine. They were treated as emergency cases owing to the acute onset of a pain attack and were randomized to treatment with sumatriptan, rizatriptan, or parecoxib. The participants completed a visual analog scale for pain intensity at baseline before drug administration and then after intervals of 20, 30, 60, and 120 minutes. Rizatriptan, parecoxib, and sumatriptan all reduced pain symptoms. Twenty and 30 minutes after drug intake, rizatriptan was more efficacious than parecoxib and sumatriptan, and parecoxib was more effective than sumatriptan. After 60 and 120 minutes, a significant difference was found only between rizatriptan and sumatriptan. This trial demonstrates the effectiveness of a parecoxib infusion in the treatment of acute migraine and suggests that the circumvention of the first-pass effect of the liver by rizatriptan may be beneficial for fast pain relief.
Object modelling by registration of multiple range images
We study the problem of creating a complete model of a physical object. Although this may be possible using intensity images, we use here range images, which directly provide access to three-dimensional information. The first problem that we need to solve is to find the transformation between the different views. Previous approaches have either assumed this transformation to be known (which is extremely difficult for a complete model), or computed it with feature matching (which is not accurate enough for integration). In this paper, we propose a new approach which works on range data directly, and registers successive views with enough overlapping area to get an accurate transformation between views. This is performed by minimizing a functional which does not require point-to-point matches. We give the details of the registration method and modeling procedure, and illustrate them on real range images of complex objects.

1 Introduction. Creating models of physical objects is a necessary component of machine and biological vision modules. Such models can then be used in object recognition, pose estimation or inspection tasks. If the object of interest has been precisely designed, then such a model exists in the form of a CAD model. In many applications, however, it is either not possible or not practical to have access to such CAD models, and we need to build models from the physical object. Some researchers bypass the problem by using a model which consists of multiple views ([4], [2]), but this is not always enough. If one needs a complete model of an object, the following steps are necessary: 1. data acquisition, 2. registration between views, 3. integration of views. By view we mean the 3D surface information of the object from a specific point of view. While the integration process is very dependent on the representation scheme used, the precondition for performing integration is knowing the transformation between the data from different views. The goal of registration is to find such a transformation, which is also known as the correspondence problem. This problem has been at the core of many previous research efforts: Bhanu [2] developed an object modeling system for object recognition by rotating the object through known angles to acquire multiple views. Chien et al. [3] and Ahuja and Veenstra [1] used orthogonal views to construct octree object models. With these methods, …
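The registration step can be made concrete with a standard point-to-plane linearisation which, like the functional described above, avoids explicit point-to-point matches; this is a generic ICP-style sketch under a small-angle assumption, not the paper's exact formulation.

```python
import numpy as np

def point_to_plane_step(src, dst_pts, dst_normals):
    """One Gauss-Newton step aligning src to the destination surface.

    src, dst_pts: (N, 3) corresponding sample points; dst_normals: (N, 3).
    Returns (rx, ry, rz, tx, ty, tz), a small incremental motion.
    """
    A, b = [], []
    for p, q, n in zip(src, dst_pts, dst_normals):
        A.append(np.hstack([np.cross(p, n), n]))  # Jacobian row (rot, trans)
        b.append(np.dot(n, q - p))                # signed distance to plane
    x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return x
```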
Impacts of Phase Noise on Digital Self-Interference Cancellation in Full-Duplex Communications
In full-duplex (FD) radios, phase noise leads to random phase mismatch between the self-interference (SI) and the reconstructed cancellation signal, resulting in possible performance degradation during SI cancellation. To explicitly analyze its impacts on the digital SI cancellation, an orthogonal frequency division multiplexing (OFDM)-modulated FD radio is considered with phase noises at both the transmitter and receiver. The closed-form expressions for both the digital cancellation capability and its limit for the large interference-to-noise ratio (INR) case are derived in terms of the power of the common phase error, INR, desired signal-to-noise ratio (SNR), channel estimation error and transmission delay. Based on the obtained digital cancellation capability, the achievable rate region of a two-way FD OFDM system with phase noise is characterized. Then, with a limited SI cancellation capability, the maximum outer bound of the rate region is proved to exist for sufficiently large transmission power. Furthermore, a minimum transmission power is obtained to achieve a $\beta$-portion of the cancellation capability limit and to ensure that the outer bound of the rate region is close to its maximum.
Design of Fuzzy PID controller for Brushless DC motor
Brushless DC (BLDC) motors are widely used in many industrial applications because of their high efficiency, high torque and low volume. This paper proposes an improved fuzzy PID controller to control the speed of a brushless DC motor, and provides an overview of the performance of a conventional proportional-integral-derivative (PID) controller and a fuzzy PID controller. It is difficult to tune the parameters and obtain satisfactory control characteristics using a conventional PID controller alone. Because a fuzzy scheme can deliver satisfactory control characteristics and is computationally simple, a fuzzy PID controller is designed as the controller of the BLDC motor. The experimental results verify that the fuzzy PID controller has better control performance than the conventional PID controller. The modeling, control and simulation of the BLDC motor have been done using the software package MATLAB/SIMULINK.
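A minimal sketch of what a fuzzy PID loop can look like: a conventional PID law whose output is scaled by a simple triangular membership rule on the speed error. The membership shape and gain range are illustrative assumptions, not the rule base designed in the paper.

```python
def fuzzy_gain(error, e_max=100.0):
    """Map |error| to a multiplier in [0.5, 1.5] via one triangular rule."""
    mu = min(abs(error) / e_max, 1.0)    # membership of "error is large"
    return 0.5 + mu                      # larger error -> stronger action

class FuzzyPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, setpoint, measured):
        e = setpoint - measured
        self.integral += e * self.dt
        d = (e - self.prev_error) / self.dt
        self.prev_error = e
        # the fuzzy rule rescales the conventional PID action
        return fuzzy_gain(e) * (self.kp * e + self.ki * self.integral
                                + self.kd * d)
```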
The BAF: a corpus of English-French bitext
The BAF is a corpus of English and French translations, hand-aligned at the sentence level, which was developed by the University of Montreal's RALI laboratory within the "Action de recherche concertée" (ARC) A2, a cooperative research project initiated and financed by the AUPELF-UREF. The corpus, which totals approximately 800,000 words, is primarily intended as an evaluation tool in the development of automatic bilingual text alignment methods. In this paper, we discuss why this corpus was assembled, how it was produced, and what it contains. We also describe some of the computer tools that were developed and used in the process.
Structured Learning of Tree Potentials in CRF for Image Segmentation
We propose a new approach to image segmentation, which exploits the advantages of both conditional random fields (CRFs) and decision trees. In the literature, the potential functions of CRFs are mostly defined as a linear combination of some predefined parametric models, and then, methods, such as structured support vector machines, are applied to learn those linear coefficients. We instead formulate the unary and pairwise potentials as nonparametric forests—ensembles of decision trees, and learn the ensemble parameters and the trees in a unified optimization problem within the large-margin framework. In this fashion, we easily achieve nonlinear learning of potential functions on both unary and pairwise terms in CRFs. Moreover, we learn classwise decision trees for each object that appears in the image. Experimental results on several public segmentation data sets demonstrate the power of the learned nonlinear nonparametric potentials.
An Internet based wireless home automation system for multifunctional devices
The aim of home automation is to control home devices from a central control point. In this paper, we present the design and implementation of a low-cost yet flexible and secure Internet-based home automation system. The communication between the devices is wireless. The protocol between the units in the design is enhanced to be suitable for most appliances. The system is designed to be low cost and to remain flexible as the variety of devices to be controlled grows.
Cheilitis Glandularis Treated with Intralesional Steroids : A Rare Case Report and Review
Cheilitis glandularis (CG) is a rare disorder characterized by swelling of the lip with hyperplasia of the labial salivary glands. CG is most commonly seen in the lower lip, in middle-aged to older Caucasian men; however, it is rarely reported in Asians. The exact cause of CG is unknown, but various agents have been implicated, such as smoking, chronic irritation, poor oral hygiene, allergy, bacterial infections, syphilis, chronic exposure to sunlight and wind, a compromised immune system, and genetic transmission. In this paper, we report a rare case of CG of the superficial suppurative type in a 43-year-old Indian female, affecting the lower lip, which was diagnosed based on clinical and histopathological findings and was treated with intralesional steroid injections. Even though the incidence and occurrence of CG in clinical practice is rare, it presents a diagnostic challenge to dentists because the etiologic factors are uncommon, with variations in clinical presentation. Early diagnosis with definitive treatment and frequent monitoring should be carried out to prevent further complications.
Internal Organization of the Alpha 21164, a 300-MHz 64-bit Quad-issue CMOS RISC Microprocessor
A new CMOS microprocessor, the Alpha 21164, reaches 1,200 mips/600 MFLOPS (peak performance). This new implementation of the Alpha architecture achieves SPECint92/SPECfp92 performance of 345/505 (estimated). At these performance levels, the Alpha 21164 has delivered the highest performance of any commercially available microprocessor in the world as of January 1995. It contains a quad-issue, superscalar instruction unit; two 64-bit integer execution pipelines; two 64-bit floating-point execution pipelines; and a high-performance memory subsystem with multiprocessor-coherent write-back caches. OVERVIEW OF THE ALPHA 21164 The Alpha 21164 microprocessor is now a product of Digital Semiconductor. The chip is the second completely new microprocessor to implement the Alpha instruction set architecture. It was designed in Digital's 0.5-micrometer (um) complementary metal-oxide semiconductor (CMOS) process. First silicon was powered on in February 1994; the part has been commercially available since January 1995. At SPECint92/SPECfp92 ratings of 345/505 (estimated), the Alpha 21164 achieved new heights of performance. The performance of this new implementation results from aggressive circuit design using the latest 0.5-um CMOS technology and significant architectural improvements over the first Alpha implementation.[1] The chip is designed to operate at 300 MHz, an operating frequency 10 percent faster than the previous implementation (the DECchip 21064 chip) would have if it were scaled into the new 0.5-um CMOS technology.[2] Relative to the previous implementation, the key improvements in machine organization are a doubling of the superscalar dimension to four-way superscalar instruction issue; reduction of many operational latencies, including the latency in the primary data cache; a memory subsystem that does not block other operations after a cache miss; and a large, on-chip, second-level, write-back cache. The 21164 microprocessor implements the Alpha instruction set architecture. It runs existing Alpha programs without modification. It supports a 43-bit virtual address and a 40-bit physical address. The page size is 8 kilobytes (KB). In the following sections, we describe the five functional units of the Alpha 21164 microprocessor and relate some of the design decisions that improved the performance of the microprocessor. First, we give an overview of the chip's internal organization and pipeline layout. Internal Organization Figure 1 shows a block diagram of the chip's five functional units: the instruction unit, the integer function unit, the floating-point unit, the memory unit, and the cache control and bus interface unit (called the C-box). The three on-chip caches are also shown. The instruction cache and data cache are primary, direct-mapped caches. They are backed by the second-level cache, which is a set-associative cache that holds instructions and data. [Figure 1 (Five Functional Units on the Alpha 21164 Microprocessor) is not available in ASCII format.] Alpha 21164 Pipeline The Alpha 21164 pipeline length is 7 stages for integer execution, 9 stages for floating-point execution, and as many as 12 stages for on-chip memory instruction execution. Additional stages are required for off-chip memory instruction execution. Figure 2 depicts the pipeline for integer, floating-point, and memory operations. [Figure 2 (Alpha 21164 Pipeline Stages) is not available in ASCII format.]
Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder
Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information. In this paper, we improve this model by explicitly incorporating source-side syntactic trees. More specifically, we propose (1) a bidirectional tree encoder which learns both sequential and tree-structured representations; (2) a tree-coverage model that lets the attention depend on the source-side syntax. Experiments on Chinese-English translation demonstrate that our proposed models outperform the sequential attentional model as well as a stronger baseline with a bottom-up tree encoder and word coverage.
An expert-supported monitoring system for patients with chronic obstructive pulmonary disease in general practice: results of a cluster randomised controlled trial.
OBJECTIVE To investigate the long-term effectiveness of a general practice monitoring system with respiratory expert recommendations for general practitioners' management of patients with chronic obstructive pulmonary disease (COPD), compared with usual care. DESIGN, SETTINGS AND PARTICIPANTS A multicentre randomised controlled trial of patients with COPD, clustered by general practices; 200 participants were recruited to maintain at least 75 participants per group for analysis. The trial took place from July 2005 to February 2008 in the south-western region of the Netherlands. INTERVENTION Ongoing half-yearly monitoring of COPD patients with respiratory expert recommendations for the GP was compared with usual care. MAIN OUTCOME MEASURES Primary outcome - Chronic Respiratory Questionnaire (CRQ) score; secondary outcomes - CRQ domain scores, generic health-related quality of life (Short-Form 12 and EuroQol-5D), breathlessness (Modified Medical Research Council score), exacerbations, and decline in forced expiratory volume in 1 second. A detailed process evaluation was performed along with the trial. RESULTS Data from 170 participants were analysed. Based on repeated measurement analyses, the additional gain in CRQ score during follow-up was 0.004 points for monitoring compared with usual care (95% CI, - 0.172 to 0.180). Also, no important differences between monitoring and the usual care group were found for secondary outcomes. Half the monitoring visits resulted in disease management recommendations by a respiratory expert, and 46% of these recommendations were implemented by the GPs. Patient adherence to lifestyle recommendations was low. CONCLUSION An expert-supported monitoring system for patients with COPD was not clinically effective. As patients had a pre-existing entry in the monitoring system, the population may be well regulated, with reduced room for improvement. TRIAL REGISTRATION www.clinicaltrials.gov NCT00542061.
NLRP3 inflammasome: Its regulation and involvement in atherosclerosis.
Inflammasomes are intracellular complexes involved in the innate immunity that convert proIL-1β and proIL-18 to mature forms and initiate pyroptosis via cleaving procaspase-1. The most well-known inflammasome is NLRP3. Several studies have indicated a decisive and important role of NLRP3 inflammasome, IL-1β, IL-18, and pyroptosis in atherosclerosis. Modern hypotheses introduce atherosclerosis as an inflammatory/lipid-based disease and NLRP3 inflammasome has been considered as a link between lipid metabolism and inflammation because crystalline cholesterol and oxidized low-density lipoprotein (oxLDL) (two abundant components in atherosclerotic plaques) activate NLRP3 inflammasome. In addition, oxidative stress, mitochondrial dysfunction, endoplasmic reticulum (ER) stress, and lysosome rupture, which are implicated in inflammasome activation, have been discussed as important events in atherosclerosis. In spite of these clues, some studies have reported that NLRP3 inflammasome has no significant effect in atherogenesis. Our review reveals that some molecules such as JNK-1 and ASK-1 (upstream regulators of inflammasome activation) can reduce atherosclerosis through inducing apoptosis in macrophages. Notably, NLRP3 inflammasome can also cause apoptosis in macrophages, suggesting that NLRP3 inflammasome may mediate JNK-induced apoptosis, and the apoptotic function of NLRP3 inflammasome may be a reason for the conflicting results reported. The present review shows that the role of NLRP3 in atherogenesis can be significant. Here, the molecular pathways of NLRP3 inflammasome activation and the implications of this activation in atherosclerosis are explained.
Design and control of a GaN-based, 13-level, flying capacitor multilevel inverter
Multilevel topologies are an appealing method to achieve higher power density inverters for both mobile and stationary systems. This work discusses the design and development of a 13-level, flying capacitor multilevel (FCML) inverter. Operating from an 800 V bus, this inverter requires switches with a voltage blocking capability of less than 80 V. A 120 kHz switching frequency is enabled through the use of GaN FETs and the development of custom integrated switching cells, which reduce commutation loop inductance and allow for a modular design. Additionally, the frequency multiplication effect of FCML inverters allows the output inductor of the inverter to be made exceptionally small (4.7 μH) while maintaining a 0.7% THD due to the 1.44 MHz effective inductor ripple frequency.
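The frequency-multiplication claim is easy to verify with back-of-the-envelope arithmetic: an N-level FCML inverter's output ripple runs at (N - 1) times the per-device switching frequency, which is what permits the small 4.7 μH output inductor.

```python
n_levels = 13
f_sw = 120e3                        # per-device switching frequency (Hz)
f_ripple = (n_levels - 1) * f_sw    # effective inductor ripple frequency
print(f_ripple / 1e6, "MHz")        # 1.44 MHz, matching the quoted figure
```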
A Tale of Two Cultures: Bringing Literary Analysis and Computational Linguistics Together
There are cultural barriers to collaborative effort between literary scholars and computational linguists. In this work, we discuss some of these problems in the context of our ongoing research project, an exploration of free indirect discourse in Virginia Woolf’s To The Lighthouse, ultimately arguing that the advantages of taking each field out of its “comfort zone” justifies the inherent difficulties.
Real-Time Dense Geometry from a Handheld Camera
We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.
A systematic review of the effectiveness of health promotion aimed at improving oral health.
OBJECTIVE To examine the quality of oral health promotion research evidence and to assess the effectiveness of health promotion, aimed at improving oral health using a systematic and scientifically defensible methodology. BASIC RESEARCH DESIGN Systematic review of oral health promotion research evidence using electronic searching, iterative hand-searching, critical appraisal and data synthesis. CLINICAL SETTING The settings of the primary research reviewed were clinical, community, schools or other institutions. The participants were children, the elderly, adults and people with handicaps and disabilities. INTERVENTIONS Only studies which reported an evaluative component were included. Theoretical and purely descriptive papers were excluded. MAIN OUTCOME MEASURES The review examined the evidence of effectiveness of oral health promotion on caries, oral hygiene, oral health related knowledge, attitudes and behaviours. RESULTS Very few definitive conclusions about the effectiveness of oral health promotion can be drawn from the currently available evidence. Caries and periodontal disease can be controlled by regular toothbrushing with a fluoride toothpaste but a cost-effective method for reliably promoting such behaviour has not yet been established. Knowledge levels can almost always be improved by oral health promotion initiatives but whether these shifts in knowledge and attitudes can be causally related to changes in behaviour or clinical indices of disease has also not been established. CONCLUSIONS Oral health promotion which brings about the use of fluoride is effective for reducing caries. Chairside oral health promotion has been shown to be effective more consistently than other methods of health promotion. Mass media programmes have not been shown to be effective. The quality of oral health promotion evaluation research needs to be improved.
Phase II Trial: Undifferentiated Versus Differentiated Autologous Mesenchymal Stem Cells Transplantation in Egyptian Patients with HCV Induced Liver Cirrhosis
The study aimed to evaluate the effect of autologous transplantation of BM-derived undifferentiated and differentiated MSCs in cirrhotic patients following chronic hepatitis C virus infection. Twenty-five patients with Child C liver cirrhosis and a MELD score >12 were included. They were divided into 2 groups. Group I, the MSCs group (n = 15), was subdivided into two subgroups: Ia & Ib (undifferentiated and differentiated, respectively). Group II (control group; n = 10) involved patients with cirrhotic liver under conventional supportive treatment. Ninety ml of BM was aspirated from the iliac bone for separation of MSCs. Surface expression of CD271, CD29 and CD34 was analyzed using flow cytometry. Hepatogenesis was assessed by immunohistochemical expression of OV6, AFP and albumin. Finally, approximately 1 million MSCs/kg were suspended in saline, placed in a blood bag and injected slowly intravenously over 15 min at a rate of 5 drops/min in one session. Follow-up of patients at 3 and 6 months post-infusion revealed partial improvement of liver function tests, with elevation of prothrombin concentration and serum albumin levels and decline of elevated bilirubin and MELD score in the MSCs group. Statistical comparisons between the two subgroups (groups Ia & Ib) did not reveal any significant difference regarding clinical and laboratory findings. In conclusion, bone marrow MSC transplantation, either undifferentiated or differentiated, can be used as a potential treatment for liver cirrhosis.
Reducing blood culture contamination by a simple informational intervention.
Compared to truly negative cultures, false-positive blood cultures not only increase laboratory work but also prolong lengths of patient stay and use of broad-spectrum antibiotics, both of which are likely to increase antibiotic resistance and patient morbidity. The increased patient suffering and surplus costs caused by blood culture contamination motivate substantial measures to decrease the rate of contamination, including the use of dedicated phlebotomy teams. The present study evaluated the effect of a simple informational intervention aimed at reducing blood culture contamination at Skåne University Hospital (SUS), Malmö, Sweden, during 3.5 months, focusing on departments collecting many blood cultures. The main examined outcomes of the study were pre- and postintervention contamination rates, analyzed with a multivariate logistic regression model adjusting for relevant determinants of contamination. A total of 51,264 blood culture sets were drawn from 14,826 patients during the study period (January 2006 to December 2009). The blood culture contamination rate preintervention was 2.59% and decreased to 2.23% postintervention (odds ratio, 0.86; 95% confidence interval, 0.76 to 0.98). A similar decrease in relevant bacterial isolates was not found postintervention. Contamination rates at three auxiliary hospitals did not decrease during the same period. The effect of the intervention on phlebotomists' knowledge of blood culture routines was also evaluated, with a clear increase in level of knowledge among interviewed phlebotomists postintervention. The present study shows that a relatively simple informational intervention can have significant effects on the level of contaminated blood cultures, even in a setting with low rates of contamination where nurses and auxiliary nurses conduct phlebotomies.
Indeterministic quantum gravity. 2. Refinements and developments
This paper is a continuation of the paper [V. S. Mashkevich, gr-qc/9409010]. Indeterministic quantum gravity is a theory that unifies general relativity and quantum theory by means of an indeterministic conception, i.e., quantum jumps. By the same token, the theory claims to describe the whole universe. Spacetime is the direct product of cosmic time and space. The state of the universe is given by the metric, its derivative with respect to cosmic time, and the number of an energy level. A quantum jump occurs at the tangency of two levels. The equations of motion are the restricted Einstein equation (the cosmic space part thereof) and a probability rule for the quantum jump. Keywords: indeterminism, quantum jumps, state vector reduction, cosmology, cosmic spacetime
A Practitioners' Guide to Transfer Learning for Text Classification using Convolutional Neural Networks
Transfer Learning (TL) plays a crucial role when a given dataset has insufficient labeled examples to train an accurate model. In such scenarios, the knowledge accumulated within a model pre-trained on a source dataset can be transferred to a target dataset, resulting in the improvement of the target model. Though TL has been found to be successful in the realm of image-based applications, its impact and practical use in Natural Language Processing (NLP) applications is still a subject of research. Due to their hierarchical architecture, Deep Neural Networks (DNN) provide flexibility and customization in adjusting their parameters and depth of layers, thereby forming an apt area for exploiting the use of TL. In this paper, we report the results and conclusions obtained from extensive empirical experiments using a Convolutional Neural Network (CNN) and try to uncover rules of thumb to ensure a successful positive transfer. In addition, we also highlight the flawed practices that could lead to a negative transfer. We explore the transferability of various layers and describe the effect of varying hyper-parameters on transfer performance. We also present a comparison of accuracy values and model size against state-of-the-art methods. Finally, we derive inferences from the empirical results and provide best practices to achieve a successful positive transfer.
A practical guide to treatment of infantile hemangiomas of the head and neck.
Infantile hemangiomas are the most common benign vascular tumors in infancy and childhood. As hemangiomas can regress spontaneously, they generally do not require treatment unless proliferation interferes with normal function or gives rise to a risk of serious disfigurement and complications unlikely to resolve without treatment. Various methods for treating infantile hemangiomas have been documented, including a wait-and-see policy, laser therapy, drug therapy, sclerotherapy, radiotherapy and surgery, but none of these therapies can be used for all hemangiomas. To obtain the best treatment outcomes, the treatment protocol should be individualized, comprehensive and sequential. Based on published literature and clinical experience, we established a treatment guideline in order to provide criteria for the management of head and neck hemangiomas. This protocol will be renewed and updated to reflect cutting-edge medical knowledge and provide the newest treatment modalities for the benefit of our patients.
A Machine Learning Approach for Phenotype Name Recognition
Extracting biomedical named entities is one of the major challenges in automatic processing of biomedical literature. This paper proposes a machine learning approach for finding phenotype names in text. Features are included in a machine learning infrastructure to implement the rules found in our previously developed rule-based system. The system also uses two available resources: MetaMap and HPO. As we are not aware of any available corpus for phenotype names, a corpus has been constructed. Since full manual tagging of the corpus was not feasible for us, we started by tagging only HPO phenotypes in the corpus and then improved the tagging using a semi-supervised learning method. The evaluation results (F-score 92.25) suggest that the system achieved good performance, outperforming the rule-based system.
A hierarchical similarity based job recommendation service framework for university students
Moving to a new job is often difficult because there is too much job information available; selecting an appropriate job and then submitting a resume is tedious. It is particularly difficult for university students, since they normally do not have any work experience and are also unfamiliar with the job market. To deal with the information overload students face during their transition into work, a job recommendation system can be very valuable. In this research, after fully investigating the pros and cons of current job recommendation systems for university students, we propose a student-profiling-based re-ranking framework. In this system, students are recommended a list of potential jobs based on those who have graduated and obtained job offers over the past few years. Furthermore, recommended employers are also used as input for re-ranking the job recommendation results. Our experimental study on real recruitment data from the past four years has shown this method's potential.
Efficacy and tolerability of telmisartan plus amlodipine in added-risk hypertensive patients.
OBJECTIVES Added-risk hypertensive patients with co-morbidities such as diabetes and metabolic syndrome often require two or more antihypertensives to achieve blood pressure (BP) targets. The aim of this sub-analysis was to determine the efficacy and safety of telmisartan 40 or 80 mg plus amlodipine 5 or 10 mg in patients with hypertension, stratified according to certain criteria such as type 2 diabetes mellitus and metabolic syndrome. METHODS Patients were treated for 8 weeks with telmisartan 20-80 mg plus amlodipine 2.5-10 mg. This post-hoc analysis included patients treated with the higher doses, stratified according to a number of sub-populations (age, race, diabetes, obesity, metabolic syndrome, elevated baseline systolic BP (SBP), renal impairment). RESULTS Eight weeks' treatment with telmisartan plus amlodipine combinations provided consistent reductions in mean SBP/diastolic BP (DBP) across the different sub-populations, similar to the overall population. SBP/DBP reductions ranged from -13.5 to -34.7/-12.6 to -26.1 mmHg and BP goal rates (<140/90 mmHg) ranged from 29.8-100% for the four key dose combinations of telmisartan plus amlodipine. For the highest dose combination of telmisartan 80 mg plus amlodipine 10 mg, the SBP/DBP reduction ranged from -19.1 to -34.7/-16.4 to -22.8 mmHg and the goal attainment rate from 66.7% to 87.0%. Across the sub-populations, high SBP and DBP response rates were seen with combination treatment (83.3-97.7% and 75.0-95.7%, respectively, with telmisartan 80 mg plus amlodipine 10 mg). The combination was safe and well tolerated across all sub-populations, and the incidence of peripheral oedema with telmisartan 40-80 mg plus amlodipine 10 mg was generally lower than with amlodipine 10 mg monotherapy. CONCLUSIONS Despite small patient numbers in some sub-populations and the post-hoc nature of the analysis, this sub-analysis shows that the combination of telmisartan plus amlodipine provides an effective, safe and well-tolerated antihypertensive treatment for added-risk hypertensive patients.
Pincer-Search: An Efficient Algorithm for Discovering the Maximum Frequent Set
Discovering frequent itemsets is a key problem in important data mining applications, such as the discovery of association rules, strong rules, episodes, and minimal keys. Typical algorithms for solving this problem operate in a bottom-up, breadth-first search direction. The computation starts from frequent 1-itemsets (the minimum-length frequent itemsets) and continues until all maximal (length) frequent itemsets are found. During the execution, every frequent itemset is explicitly considered. Such algorithms perform well when all maximal frequent itemsets are short. However, performance drastically decreases when some of the maximal frequent itemsets are relatively long. We present a new algorithm which combines both the bottom-up and the top-down searches. The primary search direction is still bottom-up, but a restricted search is also conducted in the top-down direction. This search is used only for maintaining and updating a new data structure, the maximum frequent candidate set. It is used to prune early candidates that would normally be encountered in the bottom-up search. A very important characteristic of the algorithm is that it does not require explicit examination of every frequent itemset. Therefore the algorithm performs well even when some maximal frequent itemsets are long. As its output, the algorithm produces the maximum frequent set, i.e., the set containing all maximal frequent itemsets, which immediately specifies all frequent itemsets. We evaluate the performance of the algorithm using well-known synthetic benchmark databases as well as real-life census and stock market databases.
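A highly simplified sketch of the two-direction idea follows: bottom-up candidates contained in an already-confirmed maximal frequent itemset are accepted without support counting, and a single top-down probe tests the largest candidate directly. The full maximum-frequent-candidate-set maintenance rules of Pincer-Search are omitted; `support` and the candidate join are the textbook versions.

```python
from itertools import combinations

def support(itemset, transactions):
    """Number of transactions (sets) containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def pincer_like(transactions, items, minsup):
    top = frozenset(items)                       # top-down probe first
    if support(top, transactions) >= minsup:
        return [top]
    maximal = []
    level = [frozenset([i]) for i in items]
    while level:
        frequent = []
        for c in level:
            if any(c <= m for m in maximal):     # pruned: no counting needed
                frequent.append(c)
            elif support(c, transactions) >= minsup:
                frequent.append(c)
                maximal = [m for m in maximal if not m <= c] + [c]
        level = list({a | b for a, b in combinations(frequent, 2)
                      if len(a | b) == len(a) + 1})
    return maximal                               # the maximum frequent set
```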
Selected major risk factors and global and regional burden of disease
BACKGROUND Reliable and comparable analysis of risks to health is key for preventing disease and injury. Causal attribution of morbidity and mortality to risk factors has traditionally been in the context of individual risk factors, often in a limited number of settings, restricting comparability. Our aim was to estimate the contributions of selected major risk factors to global and regional burden of disease in a unified framework. METHODS For 26 selected risk factors, expert working groups undertook a comprehensive review of published work and other sources--eg, government reports and international databases--to obtain data on the prevalence of risk factor exposure and hazard size for 14 epidemiological regions of the world. Population attributable fractions were estimated by applying the potential impact fraction relation, and applied to the mortality and burden of disease estimates from the global burden of disease (GBD) database. FINDINGS Childhood and maternal underweight (138 million disability adjusted life years [DALY], 9.5%), unsafe sex (92 million DALY, 6.3%), high blood pressure (64 million DALY, 4.4%), tobacco (59 million DALY, 4.1%), and alcohol (58 million DALY, 4.0%) were the leading causes of global burden of disease. In the poorest regions of the world, childhood and maternal underweight, unsafe sex, unsafe water, sanitation, and hygiene, indoor smoke from solid fuels, and various micronutrient deficiencies were major contributors to loss of healthy life. In both developing and developed regions, alcohol, tobacco, high blood pressure, and high cholesterol were major causes of disease burden. INTERPRETATION Substantial proportions of global disease burden are attributable to these major risks, to an extent greater than previously estimated. Developing countries suffer most or all of the burden due to many of the leading risks. Strategies that target these known risks can provide substantial and underestimated public-health gains.
Evaluation of treatment of zygomatic bone and zygomatic arch fractures: a retrospective study of 10 years.
OBJECTIVE The aim of this study was to investigate the treatment of zygomatic bone and zygomatic arch fractures without other facial fractures. PATIENTS AND METHODS A 10-year (2000-2010) retrospective study involving 310 patients admitted and treated for zygomatic bone and zygomatic arch fractures at the department of oral and maxillofacial surgery was conducted. The data collection protocol included age, gender, site and type of fracture. Other data included clinical diagnosis, radiographic examination findings, and preoperative and postoperative imaging for evaluation of the fracture. Descriptive statistics were performed with SPSS version 16. RESULTS The ages of the patients ranged from 10 to 76 years; the mean age was 32.33 years. 237 (80.6%) of the patients were males and 73 (19.4%) were females (Table 1). According to the site of fracture, the patients were divided into three groups: group A, with zygomatic bone fracture; group B, with zygomatic arch fracture; and group C, with co-existing zygomatic bone and zygomatic arch fracture. Regarding the site of fracture, 57.7% of the patients had fractures of the zygomatic bone, 13.8% had fractures of the zygomatic arch and 28.4% had fractures of both the zygomatic bone and the zygomatic arch. The treatment of the fractures was closed reduction for isolated zygomatic arch fractures; open reduction and internal rigid fixation through a coronal incision was performed in comminuted arch fractures and displaced fractures. CONCLUSION In this study, the majority of the patients were young adult men, and road traffic accidents were the leading cause of fractures. According to the site of fracture, various modalities of treatment were used, and all the patients achieved satisfactory results without any complications after operation.
A Smart Cloud Robotic System Based on Cloud Computing Services
In this paper, we present a smart service robotic system based on cloud computing services. The design and implementation of its infrastructure, computation components and communication components are introduced. The proposed system can offload the complex computation and storage load of robots to the cloud and provide various services to the robots. The computation components can dynamically allocate resources to the robots. The communication components allow easy access for the robots and provide flexible resource management. Furthermore, we model the task-scheduling problem and propose a max-heap algorithm. The simulation results demonstrate that the proposed algorithm minimizes the overall task costs.
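A hedged sketch of a max-heap allocation in the spirit described above: repeatedly assign the most expensive pending task to the server with the most remaining capacity. The cost model and greedy rule are illustrative; the paper's exact formulation is not reproduced here.

```python
import heapq

def schedule(task_costs, server_capacities):
    """Greedy largest-task-to-largest-server assignment via two max-heaps."""
    # heapq is a min-heap, so negate values to get max-heap behaviour
    tasks = [(-c, i) for i, c in enumerate(task_costs)]
    servers = [(-cap, j) for j, cap in enumerate(server_capacities)]
    heapq.heapify(tasks)
    heapq.heapify(servers)
    assignment = {}
    while tasks:
        cost, i = heapq.heappop(tasks)      # most expensive remaining task
        cap, j = heapq.heappop(servers)     # server with most free capacity
        assignment[i] = j
        heapq.heappush(servers, (cap - cost, j))  # consume capacity
    return assignment

print(schedule([5, 2, 8], [10, 6]))   # task 2 (cost 8) goes to server 0
```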
A wearable device for fall detection in elderly people using a tri-dimensional accelerometer
A fall detection device is needed to notify paramedics or family members when an elderly person falls. Prompt help after a fall can prevent fatal injuries or loss of life. For the device to be comfortable for the elderly to wear, we propose a wearable device that is lightweight, battery-powered, and low in energy consumption. Our proposed device consists of a 3-dimensional accelerometer as a sensor, a microcontroller and a communication device. The sensor provides the accelerations of the wearer's body movements; the microcontroller then identifies body position and falls from the three-axis accelerations. Our proposed fall detection uses a threshold-based method that successfully detects 75% of forward falls and 95% of backward falls. The proposed device also has a 100% success rate in reporting normal activities, such as standing or sitting, lying supine, lying face down, and lying on the left or right side, whereas the success rate of the e-health device by Cooking Hacks is 92%.
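A minimal sketch of the threshold approach on the tri-axial data: compute the acceleration magnitude and flag a fall when it exceeds an impact threshold. The 2.5 g value is an illustrative assumption, not the threshold calibrated for the device.

```python
import math

IMPACT_THRESHOLD_G = 2.5   # illustrative; tune against recorded falls

def is_fall(ax, ay, az):
    """ax, ay, az in g; True when an impact-like spike is observed."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return magnitude > IMPACT_THRESHOLD_G

print(is_fall(0.0, -1.0, 0.0))   # False: quiet standing reads about 1 g
print(is_fall(0.1, -2.8, 0.4))   # True: a hard impact spikes the magnitude
```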
One and Five Ideas: On Conceptual Art and Conceptualism
In One and Five Ideas eminent critic, historian, and former member of the Art & Language collective Terry Smith explores the artistic, philosophical, political, and geographical dimensions of Conceptual Art and conceptualism. These four essays and a conversation with Mary Kelly—published between 1974 and 2012—contain Smith's most essential work on Conceptual Art and his argument that conceptualism was key to the historical transition from modern to contemporary art. Nothing less than a distinctive theory of Conceptual and contemporary art, One and Five Ideas showcases the critical voice of one of the major art theorists of our time.
Feasibility study of stereotactic body radiotherapy for peripheral lung tumors with a maximum dose of 100 Gy in five fractions and a heterogeneous dose distribution in the planning target volume
We evaluated toxicity and outcomes for patients with peripheral lung tumors treated with stereotactic body radiation therapy (SBRT) in a dose-escalation and dose-convergence study. A total of 15 patients were enrolled. SBRT was performed with 60 Gy in 5 fractions (fr.) prescribed to the 60% isodose line of maximum dose, which was 100 Gy in 5 fr., covering the planning target volume (PTV) surface (60 Gy/5 fr. - (60%-isodose)) using dynamic conformal multiple arc therapy (DCMAT). The primary endpoint was radiation pneumonitis (RP) ≥ Grade 2 within 6 months. Toxicities were graded according to the Common Terminology Criteria for Adverse Events, version 4.0. Using dose-volumetric analysis, the trial regimen of 60 Gy/5 fr. - (60%-isodose) was compared with our institutional conventional regimen of 50 Gy/5 fr. - (80%-isodose). The enrolled consecutive patients had either a solitary peripheral tumor or two ipsilateral tumors. The median follow-up duration was 22.0 (12.0-27.0) months. After 6 months post-SBRT, the respective number of RP Grade 0, 1 and 2 cases was 5, 9 and 1. In the Grade 2 RP patient, the image showed an organizing pneumonia pattern at 6.0 months post-SBRT. No other toxicity was found. At last follow-up, there was no evidence of recurrence of the treated tumors. The target volumes of 60 Gy/5 fr. - (60%-isodose) were irradiated with a significantly higher dose than those of 50 Gy/5 fr. - (80%-isodose), while the former dosimetric parameters of normal lung were almost equivalent to the latter. SBRT with 60 Gy/5 fr. - (60%-isodose) using DCMAT allowed the delivery of very high and convergent doses to peripheral lung tumors with feasibility in the acute and subacute phases. Further follow-up is required to assess for late toxicity.
Data driven prognostics using a Kalman filter ensemble of neural network models
This paper details the winning method in the IEEE GOLD category of the PHM psila08 Data Challenge. The task was to estimate the remaining useable life left of an unspecified complex system using a purely data driven approach. The method involves the construction of Multi-Layer Perceptron and Radial Basis Function networks for regression. A suitable selection of these networks has been successfully combined in an ensemble using a Kalman filter. The Kalman filter provides a mechanism for fusing multiple neural network model predictions over time. The essential initial stages of pre-processing and data exploration are also discussed.
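One way to realize the described fusion is to treat each network's per-timestep output as a noisy measurement of the remaining useful life and run a scalar Kalman filter over them. The sketch below does exactly that; the noise variances, initialization, and toy predictions are invented for illustration and do not reproduce the winning entry's exact filter design.

```python
import numpy as np

def kalman_fuse(predictions, q=1.0, r=4.0):
    """Fuse per-timestep predictions from several models with a scalar
    Kalman filter. State: fused RUL estimate; measurements: model outputs.
    q and r are hypothetical process/measurement noise variances."""
    x, p = predictions[0].mean(), 1.0      # initial state and variance
    fused = []
    for step in predictions:               # step: array of model outputs
        p += q                             # predict: uncertainty grows
        for z in step:                     # update with each model's output
            k = p / (p + r)                # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        fused.append(x)
    return fused

preds = [np.array([100.0, 95.0]), np.array([92.0, 90.0]), np.array([85.0, 88.0])]
print(kalman_fuse(preds))                  # smoothed RUL trajectory
```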
Intrusion detection system for high-speed network
The increasing network throughput challenges current Network Intrusion Detection Systems (NIDS) to provide correspondingly high-performance data processing. In this paper, we describe in-depth research on techniques for high-performance network intrusion detection and an implementation of a Rule-based High-performance Network Intrusion Detection System (RHPNIDS) for high-speed networks. By integrating several performance optimizations, RHPNIDS performs very impressively compared with the popular open-source NIDS Snort.
Privacy in Interaction: Exploring Disclosure and Social Capital in Facebook
In this paper, we explore the relationship between Facebook users’ privacy concerns, relationship maintenance strategies, and social capital outcomes. Previous research has found a positive relationship between various measures of Facebook use and perceptions of social capital, i.e., one’s access to social and information-based resources. Other research has found that social network site users with high privacy concerns modify their disclosures on the site. However, no research to date has empirically tested how privacy concerns and disclosure strategies interact to influence social capital outcomes. To address this gap in the literature, we explored these questions with survey data (N=230). Findings indicate that privacy concerns and behaviors predict disclosures on Facebook, but not perceptions of social capital. In addition, when looking at predictors of social capital, we identify interaction effects between users’ network composition and their use of privacy features.
The lasso-loop, lasso-mattress and simple-cinch stitch for arthroscopic rotator cuff repair: are there biomechanical differences?
Various stitching techniques have been described to facilitate arthroscopic repair of rotator cuff tears. The aim of the present study was to compare the biomechanical properties of the lasso-loop, lasso-mattress and simple-cinch stitch for rotator cuff repair. Twelve infraspinatus tendons were harvested from sheep and split in half. The tendons were randomized into three different stitch configuration groups for biomechanical testing: lasso-loop, lasso-mattress and simple-cinch stitch. Each specimen was first cyclically loaded on a universal materials testing machine under force control from 5 to 30 N at 0.25 Hz for twenty cycles. Then, each specimen was loaded to failure under displacement control at a rate of 1 mm/s. Cyclic elongation, peak-to-peak displacement and ultimate tensile load were reported as mean ± standard error and compared using one-way analysis of variance. The type of failure was recorded. No differences in cyclic elongation (1.31 ± 0.09 mm for the simple-cinch vs. 1.49 ± 0.07 mm for the lasso-mattress vs. 1.61 ± 0.09 mm for the lasso-loop stitch, p = 0.063) or peak-to-peak displacement (0.58 ± 0.04 mm for the simple-cinch, 0.50 ± 0.03 mm for the lasso-mattress and 0.62 ± 0.06 mm for the lasso-loop stitch, p = 0.141) were seen between the tested stitch configurations. In the load-to-failure test, the simple-cinch stitch (149.38 ± 11.89 N) and the lasso-mattress stitch (149.38 ± 10.33 N) demonstrated significantly higher ultimate load than the lasso-loop stitch (65.88 ± 4.75 N, p < 0.001). All stitch configurations failed with suture pull-out. The lasso-mattress and the simple-cinch stitch showed similar biomechanical properties, with significantly higher tensile loads needed for failure than the lasso-loop stitch.
Dialog state tracking, a machine reading approach using Memory Network
In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a question-answering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N-based tracking model.
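At the core of a MemN2N is an attention "hop" over stored utterance embeddings. The NumPy sketch below shows a single hop with toy random embeddings; in a real model the memory, query, and output matrices are all learned, and several hops are stacked, so this is only a schematic of the mechanism.

```python
import numpy as np

def memn2n_hop(memory, question, W_out):
    """One MemN2N hop: soft attention over memory rows with the question
    embedding, followed by a residual update of the query.
    Shapes: memory (n, d), question (d,), W_out (d, d)."""
    scores = memory @ question                 # inner-product match
    p = np.exp(scores - scores.max())
    p /= p.sum()                               # softmax attention weights
    o = p @ memory                             # weighted memory readout
    return question + W_out @ o                # updated query for the next hop

rng = np.random.default_rng(0)
mem, q, W = rng.normal(size=(5, 8)), rng.normal(size=8), rng.normal(size=(8, 8))
print(memn2n_hop(mem, q, W).shape)             # (8,)
```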
Coping with uncertainty: police strategies for resilient decision-making and action implementation
This study uses a hostage negotiation setting to demonstrate how a team of strategic police officers can utilize specific coping strategies to minimize uncertainty at different stages of their decision-making in order to foster resilient decision-making to effectively manage a high-risk critical incident. The presented model extends the existing research on coping with uncertainty by (1) applying the RAWFS heuristic (Lipshitz and Strauss in Organ Behav Human Decis Process 69:149–163, 1997) of individual decision-making under uncertainty to a team critical incident decision-making domain; (2) testing the use of various coping strategies during “in situ” team decision-making by using a live simulated hostage negotiation exercise; and (3) including an additional coping strategy (“reflection-in-action”; Schön in The reflective practitioner: how professionals think in action. Temple Smith, London, 1983) that aids naturalistic team decision-making. The data for this study were derived from a videoed strategic command meeting held within a simulated live hostage training event; these video data were coded along three themes: (1) decision phase; (2) uncertainty management strategy; and (3) decision implemented or omitted. Results illustrate that, when assessing dynamic and high-risk situations, teams of police officers cope with uncertainty by relying on “reduction” strategies to seek additional information and iteratively update these assessments using “reflection-in-action” (Schön 1983) based on previous experience. They subsequently progress to a plan formulation phase and use “assumption-based reasoning” techniques in order to mentally simulate their intended courses of action (Klein et al. 2007), and identify a preferred formulated strategy through “weighing the pros and cons” of each option. In the unlikely event that uncertainty persists to the plan execution phase, it is managed by “reduction” in the form of relying on plans and standard operating procedures or by “forestalling” and intentionally deferring the decision while contingency planning for worst-case scenarios.
CEO Overconfidence and Corporate Investment
We explore behavioral explanations for sub-optimal corporate investment decisions. Focusing on the sensitivity of investment to cash flow, we argue that personal characteristics of chief executive officers, in particular overconfidence, can account for this widespread and persistent investment distortion. Overconfident CEOs overestimate the quality of their investment projects and view external finance as unduly costly. As a result, they invest more when they have internal funds at their disposal. We test the overconfidence hypothesis, using data on personal portfolio and corporate investment decisions of CEOs in Forbes 500 companies. We classify CEOs as overconfident if they repeatedly fail to exercise options that are highly in the money, or if they habitually acquire stock of their own company. The main result is that investment is significantly more responsive to cash flow if the CEO displays overconfidence. In addition, we identify personal characteristics other than overconfidence (education, employment background, cohort, military service, and status in the company) that strongly affect the correlation between investment and cash flow.
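The paper's key test asks whether investment-cash flow sensitivity is higher for overconfident CEOs, which amounts to a regression with an interaction term. Here is a toy version on simulated data; the variable names, coefficients, noise level, and estimator are all invented for illustration and are not the paper's actual specification or data.

```python
import numpy as np

# Toy regression: investment ~ cash_flow + overconfident + cash_flow:overconfident
rng = np.random.default_rng(1)
n = 500
cash_flow = rng.normal(size=n)
overconf = rng.integers(0, 2, size=n)          # 1 if CEO classified overconfident
# Simulated world in which overconfident CEOs are more cash-flow sensitive
invest = 0.3 * cash_flow + 0.5 * overconf * cash_flow \
         + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), cash_flow, overconf, cash_flow * overconf])
beta, *_ = np.linalg.lstsq(X, invest, rcond=None)
# The interaction coefficient estimates the *extra* sensitivity (about 0.5 here)
print("interaction coefficient:", round(beta[3], 3))
```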
Passive and active battery balancing comparison based on MATLAB simulation
Battery systems are affected by many factors, the most important of which is cell imbalance. Without a balancing system, the individual cell voltages drift apart over time and the battery pack capacity decreases quickly, eventually causing failure of the entire battery system. Cell balancing therefore plays an important role in preserving battery life. Different cell balancing methodologies have been proposed for battery packs. This paper presents a review of, and comparison between, the proposed balancing topologies for battery strings based on MATLAB/Simulink® simulation. The comparison is carried out with respect to circuit design, balancing simulation, practical implementation, application, balancing speed, complexity, cost, size, balancing system efficiency, and voltage/current stress.
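As an illustration of the simplest topology in such comparisons, here is a toy passive (bleed-resistor) balancing loop, written in Python rather than MATLAB/Simulink for self-containment. Every cell sitting above the pack minimum is discharged step by step; the tolerance and bleed step are arbitrary assumptions. The wasted heat is exactly the efficiency drawback that motivates the active topologies the paper compares.

```python
def passive_balance(voltages_mv, tol_mv=10, bleed_mv=5):
    """Toy passive balancing in integer millivolts: bleed every cell more
    than tol_mv above the pack minimum by bleed_mv per iteration.
    The bled energy is dissipated as heat (passive balancing's drawback)."""
    v = list(voltages_mv)
    while max(v) - min(v) > tol_mv:
        floor = min(v)
        v = [x - bleed_mv if x - floor > tol_mv else x for x in v]
    return v

# Four cells; the high ones bleed down toward the weakest cell
print(passive_balance([4100, 4050, 3980, 4070]))
```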
Multimodal Probabilistic Model-Based Planning for Human-Robot Interaction
This paper presents a method for constructing human-robot interaction policies in settings where multimodality, i.e., the possibility of multiple highly distinct futures, plays a critical role in decision making. We are motivated in this work by the example of traffic weaving, e.g., at highway on-ramps/off-ramps, where entering and exiting cars must swap lanes in a short distance, a challenging negotiation even for experienced drivers due to the inherent multimodal uncertainty of who will pass whom. Our approach is to learn multimodal probability distributions over future human actions from a dataset of human-human exemplars and perform real-time robot policy construction in the resulting environment model through massively parallel sampling of human responses to candidate robot action sequences. Direct learning of these distributions is made possible by recent advances in the theory of conditional variational autoencoders (CVAEs), whereby we learn action distributions simultaneously conditioned on the present interaction history, as well as candidate future robot actions in order to take into account response dynamics. We demonstrate the efficacy of this approach with a human-in-the-loop simulation of a traffic weaving scenario.
3D Reconstruction of Incomplete Archaeological Objects Using a Generative Adversarial Network
We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.
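The abstract states that training combines a completion loss with an Improved Wasserstein GAN loss. The sketch below shows schematically how such a combined generator objective can be assembled; the mean-squared completion term, the weighting `lam`, and the toy arrays are stand-ins, since the paper's exact loss terms and weighting are not reproduced here.

```python
import numpy as np

def generator_loss(voxels_pred, voxels_true, critic_scores, lam=10.0):
    """Sketch of an ORGAN-style combined objective: a completion term that
    penalises wrongly predicted voxels plus a WGAN generator term that asks
    the critic to score completions as real. lam is a hypothetical weight."""
    completion = np.mean((voxels_pred - voxels_true) ** 2)  # reconstruction
    adversarial = -np.mean(critic_scores)                   # WGAN generator term
    return lam * completion + adversarial

pred = np.random.rand(2, 32, 32, 32)       # toy batch of predicted voxel grids
true = np.random.rand(2, 32, 32, 32)       # toy ground-truth completions
scores = np.random.randn(2)                # toy critic outputs for the batch
print(generator_loss(pred, true, scores))
```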
Tissue expansion in the treatment of giant congenital melanocytic nevi of the upper extremity
The aim of our study was to use tissue expansion for the treatment of giant congenital melanocytic nevi of the upper extremity and examine potential advantages over traditional techniques. There were 3 stages in the treatment of giant congenital melanocytic nevi of the upper extremities using tissue expansion: first, the expander was inserted into the subcutaneous pocket; second, the expander was removed, lesions were excised, and the wound of the upper extremity was placed into the pocket to delay healing; third, the residual lesion was excised and the pedicle was removed. The pedicle flap was then unfolded to resurface the wound. During the period between June 2007 and December 2015, there were 11 patients with giant congenital melanocytic nevi of the upper extremities who underwent reconstruction at our department with skin expansion. Few complications were noted at each stage of treatment. The functional and aesthetic results were observed and discussed in this study. Optimal aesthetic and functional results were obtained using tissue expansion to reconstruct upper extremities affected by giant congenital melanocytic nevi.
Nipah virus infection in bats (order Chiroptera) in peninsular Malaysia.
Nipah virus, family Paramyxoviridae, caused disease in pigs and humans in peninsular Malaysia in 1998-99. Because Nipah virus appears closely related to Hendra virus, wildlife surveillance focused primarily on pteropid bats (suborder Megachiroptera), a natural host of Hendra virus in Australia. We collected 324 bats from 14 species on peninsular Malaysia. Neutralizing antibodies to Nipah virus were demonstrated in five species, suggesting widespread infection in bat populations in peninsular Malaysia.
Analyzing the Behavior of Visual Question Answering Models
Recently, a number of deep-learning based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weaknesses, and identifying the most fruitful directions for progress. We analyze the best performing models from two major classes of VQA models (with-attention and without-attention) and show the similarities and differences in the behavior of these models. Our behavior analysis reveals that despite recent progress, today's VQA models are "myopic" (tend to fail on sufficiently novel instances), often "jump to conclusions" (converge on a predicted answer after 'listening' to just half the question), and are "stubborn" (do not change their answers across images).
DeepMask: Masking DNN Models for robustness against adversarial samples
Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It was observed that an adversary can easily generate adversarial samples by making small perturbations on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepMask. By identifying and removing unnecessary features in a DNN model, DeepMask limits the capacity an attacker can use to generate adversarial samples and therefore increases robustness against such inputs. Compared with other defensive approaches, DeepMask is easy to implement and computationally efficient. Experimental results show that DeepMask can increase the performance of state-of-the-art DNN models against adversarial samples.
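A minimal sketch of the masking idea follows: score each input feature's relevance and zero out the dimensions the task never needed, so perturbations there cannot influence the model. The relevance criterion used below (absolute correlation with the label) and the keep ratio are assumptions for illustration; the paper's criterion for identifying unnecessary features inside a DNN may differ.

```python
import numpy as np

def mask_features(X, y, keep_ratio=0.5):
    """Rank features by a simple relevance score and zero out the least
    relevant ones, returning the masked data and the binary mask."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    k = int(keep_ratio * X.shape[1])
    keep = np.argsort(scores)[-k:]          # indices of the most useful features
    mask = np.zeros(X.shape[1])
    mask[keep] = 1.0
    return X * mask, mask

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # only features 0 and 1 matter
Xm, mask = mask_features(X, y)
print(mask)                                  # irrelevant dimensions are zeroed
```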
Using a Liquid Democracy Tool for End-user Involvement in Continuous RE
REFSQ 2017: Joint Proceedings of 23rd International Working Conference on Requirements Engineering: Foundation for Software Quality, Essen, Germany, February 27, 2017
Monitoring during and after antiviral therapy for hepatitis B.
UNLABELLED Recent studies suggest that long-term suppression of viral replication is critical to reducing the complications of chronic hepatitis B virus (HBV) infection. Monitoring for continued virological response during and after treatment is essential because current treatment options have limited success in achieving durable endpoints, and antiviral resistance may emerge during long-term therapy. Methods of monitoring treatment response include tests for serum aminotransferase levels, HBV DNA level, hepatitis B e antigen (HBeAg) and antibody (anti-HBe), hepatitis B surface antigen (HBsAg) or antibody (anti-HBs), and liver histology. Virological suppression and loss of HBeAg or HBsAg with or without seroconversion play a prominent role in decision-making regarding the success and duration of antiviral therapy. Guidelines recommend that testing for serum markers be repeated every 12-24 weeks during antiviral therapy and every 6-12 months afterward. Recent data also suggest that serum HBV DNA levels should be assessed at weeks 12 and 24 of therapy, because early viral response may predict the likelihood of sustained response and antiviral resistance. The use of serum HBV DNA levels for this purpose requires an assay with a wide range of quantification, such as real-time polymerase chain reaction assays, which have a 7-8 log(10) dynamic range. Newer, investigational methods for monitoring treatment response include quantitative measurement of HBsAg, HBeAg, and intrahepatic covalently closed circular DNA. CONCLUSIONS Better methods for defining durable treatment endpoints are needed. Other areas requiring further research include the optimal treatment duration and the establishment of the optimal use of early viral kinetics for decision-making during antiviral therapy.
Occupational kneeling and squatting: development and validation of an assessment method combining measurements and diaries
OBJECTIVES As knee-straining postures such as kneeling and squatting are known to be risk factors for knee disorders, there is a need for effective exposure assessment at the workplace. Therefore, the aim of this study was to develop a method to capture knee-straining postures for entire work shifts by combining measurement techniques with the information obtained from diaries, thus avoiding measuring entire work shifts. This approach was applied to various occupational tasks to obtain an overview of typical exposure values in current specific occupations. METHODS The analyses were carried out in the field using an ambulatory measuring system (CUELA) to assess posture, combined with one-day self-reported occupational diaries describing the durations of various work tasks. In total, 242 work shifts were measured, representing 81 typical tasks from 16 professions. Knee-straining postures were analysed as daily time intervals for five different postures. The accuracy of the method was examined by comparing the results to measurements of entire work shifts. RESULTS Unsupported kneeling was the most common knee posture in our sample (median 11.4% per work shift), followed by supported kneeling (3.0%), sitting on heels (1.1%), squatting (0.7%), and crawling (0.0%). The daily time spent in knee-straining postures varied considerably between the individual occupations, within an occupation (e.g., parquet layers: 0.0-88.9%), and to some extent even within a single task (e.g., preparation work of floor layers: 22.0 ± 23.0%). The applied measuring method for obtaining daily exposure to the knee proved valid and efficient when randomly compared with whole-shift measurements (p = 0.27). CONCLUSIONS The daily degree of postural exposure to the knee showed huge variation within the analysed job categories and seemed to depend on the particular tasks performed. The results of this study may help to develop an exposure matrix with respect to occupational knee-straining postures. The tested combination of task-based measurement and diary information may be a promising option for providing a cost-effective assessment tool.
Validation of HPLC-UV method for determination of amoxicillin Trihydrate in capsule
The intention of the present work was to validate a simple, improved and affordable approach for the estimation of amoxicillin trihydrate in capsule formulations by reverse-phase HPLC-UV, with optimized conditions and parameters for routine use in the Rwanda standards pharmaceutical laboratory, in order to check whether substandard or counterfeit amoxicillin has entered the country; such products can result in antimicrobial resistance and treatment failure, which would be a major public health concern. A simple, selective, precise, rapid, specific, accurate and cost-effective reverse-phase HPLC-UV method was validated for the determination of amoxicillin. In the established method, monobasic potassium phosphate (KH2PO4) buffer and methanol were used as the mobile phase in a 95:5 ratio. Elution was performed in isocratic mode at a flow rate of 1.5 ml/minute. The proposed method was validated according to the ICH guideline, with reference also to USP requirements for amoxicillin capsules. The linearity of amoxicillin was evaluated in the range of 20-160 µg/ml; the correlation coefficient r2 was 0.9998 and the relative standard deviation between six replicate injections was always less than 2%. The retention time was 3.5±0.02 minutes. The recovery of amoxicillin was 100.6±4%, with a trueness of 100.06±1.2%, showing that the proposed method is highly accurate and precise. Statistical evaluation proved that the validated method is appropriate for the analysis of amoxicillin as a bulk drug and in pharmaceutical formulations without any interference from excipients. Considering the potency of the drug samples, all analyzed samples were within the range of 90-120% of the labeled amount, although potency differed among samples. The study found no counterfeit or substandard product among all batches of amoxicillin samples during the interval of the study.
Validity of a virtual environment for stroke rehabilitation.
BACKGROUND AND PURPOSE Virtual environments for use in stroke rehabilitation are in development, but there has been little evaluation of their suitability for this purpose. We evaluated a virtual environment developed for rehabilitation of the task of making a hot drink. METHODS Fifty stroke patients undergoing rehabilitation in a UK hospital stroke unit were involved. The performance of stroke rehabilitation patients when making a hot drink was examined in relation to the neurological impairments associated with performance of this task, and the errors observed were compared for standardized task performance in the real world and in a virtual environment. Neurological impairments were measured using standardized assessments. Errors in task performance were assessed by rating video recordings and were classified into error types. RESULTS Real-world and virtual environment performance scores were not strongly associated (rho=0.30; P<0.05). Performance scores in both settings were associated with age, Barthel ADL score, Mini Mental State Examination score, and tests of visuospatial function. Real-world performance only was associated with arm function and sequencing ability. Virtual environment performance only was associated with language function and praxis. Participants made different errors during task performance in the real world and in the virtual environment. CONCLUSIONS Although this virtual environment was usable by stroke rehabilitation patients, it posed a different rehabilitation challenge from the task it was intended to simulate, and so it might not be as effective as intended as a rehabilitation tool. Other virtual environments for stroke rehabilitation in development require similar evaluation.
CertiCoq : A verified compiler for Coq
CertiCoq is a mechanically verified, optimizing compiler for Coq that bridges the gap between certified high-level programs and their translation to machine language. We outline its design as well as the main foundational and engineering challenges involved in building and certifying a compiler for Coq in Coq.
An Autonomous Multi-UAV System for Search and Rescue
This paper proposes and evaluates a modular architecture of an autonomous unmanned aerial vehicle (UAV) system for search and rescue missions. Multiple multicopters are coordinated using a distributed control system. The system is implemented in the Robot Operating System (ROS) and is capable of providing a real-time video stream from a UAV to one or more base stations using a wireless communications infrastructure. The system supports a heterogeneous set of UAVs and camera sensors. If necessary, an operator can interfere and reduce the autonomy. The system has been tested in an outdoor mission serving as a proof of concept. Some insights from these tests are described in the paper.
Strategic Management of Distressed Inventory
It is well known that maximizing revenue from a fixed stock of perishable goods may require discounting prices rather than allowing unsold inventory to perish. This behavior is seen in industries ranging from fashion retail to tour packages and baked goods. A number of authors have addressed the markdown management problem in which a seller seeks to determine the optimal sequence of discounts to maximize the revenue from a fixed stock of perishable goods. However, merchants who consistently use markdown policies risk training customers to “wait for the sale.” We investigate models in which the decision to sell inventory at a discount will change the future expectations of customers and hence their buying behavior. We show that, in equilibrium, a single-price policy is optimal if all consumers are strategic and demand is known to the seller. Relaxing any of these conditions can lead to a situation in which a two-price markdown policy is optimal. We show using numerical simulation that if customers update their expectations of availability over time, then optimal sales limit policies can evolve in a complex fashion.
Learning a concept-based document similarity measure
Document similarity measures are crucial components of many text-analysis tasks, including information retrieval, document classification, and document clustering. Conventional measures are brittle: They estimate the surface overlap between documents based on the words they mention and ignore deeper semantic connections. We propose a new measure that assesses similarity at both the lexical and semantic levels, and learns from human judgments how to combine them by using machine-learning techniques. Experiments show that the new measure produces values for documents that are more consistent with people’s judgments than people are with each other. We also use it to classify and cluster large document sets covering different genres and topics, and find that it improves both classification and clustering performance.
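A tiny sketch of the combination step described above: given a lexical overlap score and a semantic relatedness score per document pair, learn mixing weights from human judgments. Least squares stands in here for whatever machine-learning technique the paper actually uses, and every number below is fabricated for illustration.

```python
import numpy as np

# Toy scores for four document pairs at the two levels of analysis
lex = np.array([0.9, 0.1, 0.4, 0.7])      # surface (word-overlap) similarity
sem = np.array([0.8, 0.6, 0.5, 0.9])      # semantic similarity
human = np.array([0.9, 0.5, 0.45, 0.85])  # hypothetical human judgments

# Learn how to combine the two levels from the human judgments
X = np.column_stack([lex, sem])
w, *_ = np.linalg.lstsq(X, human, rcond=None)

def doc_similarity(lex_sim, sem_sim):
    """Combined similarity using the learned mixing weights."""
    return w[0] * lex_sim + w[1] * sem_sim

print(doc_similarity(0.3, 0.7))           # score for a new document pair
```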
Tagging YouTube-A Classification of Tagging Practice on YouTube
A problem exists of how to categorise the abundance of user generated content being uploaded to social sites. One method of categorisation being applied is tagging, user generated keywords that are assigned to the content. This research presents a study into the tagging practice of YouTube users. A classification scheme was applied to a dataset of 768 tags, assigning the tags to different categories of tag type. Analysis reveals how useful the tagging method on YouTube is at improving the categorisation of user generated video content in contrast to collaborative tagging systems.
Execute This! Analyzing Unsafe and Malicious Dynamic Code Loading in Android Applications
The design of the Android system allows applications to load additional code from external sources at runtime. On the one hand, malware can use this capability to add malicious functionality after it has been inspected by an application store or anti-virus engine at installation time. On the other hand, developers of benign applications can inadvertently introduce vulnerabilities. In this paper, we systematically analyze the security implications of the ability to load additional code in Android. We developed a static analysis tool to automatically detect attempts to load external code using static analysis techniques, and we performed a large-scale study of 1,632 popular applications from the Google Play store, showing that loading external code in an insecure way is a problem in as much as 9.25% of those applications and even 16% of the top 50 free applications. We also show how malware can use code-loading techniques to avoid detection by exploiting a conceptual weakness in current Android malware protection. Finally, we propose modifications to the Android framework that enforce integrity checks on code to mitigate the threats imposed by the ability to load external code.
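As a crude stand-in for the paper's static analysis tool, the sketch below simply greps decompiled sources for Android APIs commonly involved in external code loading (DexClassLoader, PathClassLoader, createPackageContext). A real detector works on bytecode and data flow rather than raw text, so treat this only as an illustration of what is being searched for.

```python
import pathlib
import re

# Class and method names commonly involved in dynamic code loading on Android
LOADERS = re.compile(r"\b(DexClassLoader|PathClassLoader|createPackageContext)\b")

def scan_sources(root):
    """Flag decompiled .java files that reference code-loading APIs."""
    hits = []
    for path in pathlib.Path(root).rglob("*.java"):
        text = path.read_text(errors="ignore")
        if LOADERS.search(text):
            hits.append(str(path))
    return hits

# Hypothetical usage on a directory of decompiled application sources:
# print(scan_sources("decompiled_app/"))
```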
Blockchain for the Internet of Things: A systematic literature review
In the Internet of Things (IoT) scenario, the blockchain and, in general, Peer-to-Peer approaches could play an important role in the development of decentralized and data-intensive applications running on billions of devices, while preserving the privacy of the users. Our research goal is to understand whether blockchain and Peer-to-Peer approaches can be employed to foster a decentralized and private-by-design IoT. As a first step in our research process, we conducted a Systematic Literature Review on the blockchain to gather knowledge on the current uses of this technology and to document its current degree of integrity, anonymity and adaptability. We found 18 use cases of blockchain in the literature. Four of these use cases are explicitly designed for IoT. We also found some use cases that are designed for private-by-design data management. We also found several issues in integrity, anonymity and adaptability. Regarding anonymity, we found that the blockchain only guarantees pseudonymity. Regarding adaptability and integrity, we discovered that the integrity of the blockchain largely depends on the high difficulty of the Proof-of-Work and on the large number of honest miners, but at the same time a difficult Proof-of-Work limits adaptability. We documented and categorized the current uses of the blockchain, and provided a few recommendations for future work to address the above-mentioned issues.
Solution of non-convex economic load dispatch problem using Grey Wolf Optimizer
Grey Wolf Optimizer (GWO) is a recently developed meta-heuristic search algorithm inspired by grey wolves (Canis lupus); it simulates the social hierarchy and hunting mechanism of grey wolves in nature, based on three main steps of hunting: searching for prey, encircling prey and attacking prey. This paper presents the application of the GWO algorithm to the solution of the non-convex and dynamic economic load dispatch problem (ELDP) of electric power systems. The performance of GWO is tested on the ELDP of small-, medium- and large-scale power systems, and the results are verified by a comparative study with the lambda iteration method, Particle Swarm Optimization algorithm, Genetic Algorithm, Biogeography-Based Optimization, Differential Evolution algorithm, pattern search algorithm, NN-EPSO, FEP, CEP, IFEP and MFEP. Comparative results show that the GWO algorithm is able to provide very competitive results compared to other well-known conventional, heuristic and meta-heuristic search algorithms.
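A compact sketch of the GWO loop described above: the pack moves toward the three best solutions (alpha, beta, delta) while a control parameter `a` decays from 2 to 0, shifting behavior from searching for prey toward attacking it. Population size, iteration count, bounds, and the sphere test function are arbitrary choices for illustration; constraint handling for an actual ELDP is omitted.

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal Grey Wolf Optimizer for unconstrained minimization of f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves
        a = 2 - 2 * t / iters                             # decays from 2 to 0
        for i, leader in enumerate((alpha, beta, delta)):
            r1 = rng.random((n_wolves, dim))
            r2 = rng.random((n_wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2                 # coefficient vectors
            D = np.abs(C * leader - X)                    # distance to leader
            step = leader - A * D
            Xnew = step if i == 0 else Xnew + step        # accumulate positions
        X = np.clip(Xnew / 3, lb, ub)                     # average of the three
    return X[np.argmin(np.apply_along_axis(f, 1, X))]

print(gwo(lambda x: np.sum(x ** 2), dim=5))               # sphere test function
```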
Deleterious effect of right ventricular apical pacing on left ventricular diastolic function and the impact of pre-existing diastolic disease.
AIMS Right ventricular apex (RVA) pacing may have deleterious effects on left ventricular (LV) systolic function, but its impact on LV diastolic function has not been explored. METHODS AND RESULTS Ninety-seven patients with sinus node dysfunction and ejection fraction (EF) ≥ 50% with permanent RVA pacing were randomly programmed to V-sense and V-pace modes and examined by echocardiography. Tissue Doppler imaging was employed to assess myocardial systolic velocity (S') and early diastolic velocity (E') at the mitral annulus. Systolic dyssynchrony was assessed using 12 LV segmental model (Ts-SD). Switching from V-sense to V-pace resulted in the worsening of both diastolic and systolic functions as shown by the decreased EF, reduced mean E' and S' velocities, as well as increase in LV volume and Ts-SD (all P< 0.001). Reduction of mean E' and S' of ≥ 1 cm/s occurred in 35 (36%) and 45 (46%) patients, respectively. In pre-defined subgroup analysis, only patients with pre-existing LV diastolic dysfunction had a significant reduction of mean E' and S' (both P< 0.001) even after age adjustment. Multivariate logistic regression analysis showed that independent factors for the reduction of mean E' ≥ 1 cm/s or mean S' ≥ 1 cm/s at V-pace were pre-existing LV diastolic dysfunction [odds ratio (OR): 4.735, P= 0.007 for E'; OR: 3.307, P= 0.022 for S'] and systolic dyssynchrony at V-pace (OR: 5.459, P= 0.007 for E'; OR: 2.725, P= 0.035 for S'). CONCLUSION In patients with preserved EF, RVA pacing is associated with the deterioration of both LV diastolic and systolic functions, which is particularly obvious in those with pre-existing LV diastolic dysfunction and V-pace-induced systolic dyssynchrony.
Analogical Representation of Spatial Events for Understanding Traffic Behaviour
In computer vision the usual level of "interpretation" is the identification of the objects in the image. In this paper, we extend the level of interpretation to include spatial event detection using a knowledge base for a known scene. This allows us to formulate a computational theory for forming conceptual descriptions of the behaviours of the objects. Here we describe an analogical representation of space and time that supports the formation of these conceptual descriptions and allows strong contextual indexing of our spatial knowledge.
Introduction to Analytic Number Theory Math 531 Lecture Notes , Fall 2005
The values of most common arithmetic functions f(n), such as the divisor function d(n) or the Moebius function µ(n), depend heavily on the arithmetic nature of the argument n. As a result, such functions exhibit a seemingly chaotic behavior when plotted or tabulated as functions of n, and it does not make much sense to seek an "asymptotic formula" for f(n). However, it turns out that most natural arithmetic functions are very well behaved on average, in the sense that the arithmetic means M_f(x) = (1/x) Σ_{n≤x} f(n), or, equivalently, the "summatory functions" S_f(x) = Σ_{n≤x} f(n), behave smoothly as x → ∞ and can often be estimated very accurately. In this chapter we discuss the principal methods to derive such estimates. Aside from the intrinsic interest of studying the behavior of M_f(x) or S_f(x), these quantities arise naturally in a variety of contexts, and having good estimates available is crucial for a number of applications. Here are some examples, all of which will be discussed in detail later in this chapter. (1) The number of Farey fractions of order Q, i.e., the number of rationals in the interval (0, 1) whose denominator in lowest terms is ≤ Q, is equal to S_φ(Q), where S_φ(x) = Σ_{n≤x} φ(n) is the summatory function of the Euler phi function. (2) The "probability" that two randomly chosen positive integers are coprime is equal to the limit lim_{x→∞} 2S_φ(x)/x², which turns out to be 6/π². (3) The "probability" that a randomly chosen positive integer is squarefree is equal to the "mean value" of the function µ²(n) (= |µ(n)|), i.e., the limit lim_{x→∞} M_{µ²}(x), which turns out to be 6/π². (4) More generally, if f_A(n) is the characteristic function of a set A ⊂ N, then the mean value lim_{x→∞} M_{f_A}(x) of f_A, if it exists, can be interpreted as the "density" of the set A, or the "probability" that a randomly chosen positive integer belongs to A. (5) The Prime Number Theorem is equivalent to the relation lim_{x→∞} M_Λ(x) = 1, which can be interpreted …
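Claim (3) above is easy to check numerically. The short sieve below (a sketch, not part of the notes) marks every multiple of a perfect square d² as non-squarefree and compares the resulting empirical density with 6/π² ≈ 0.6079.

```python
import math

def squarefree_density(x):
    """Estimate the density of squarefree integers up to x by sieving out
    every multiple of d^2 for d = 2, ..., sqrt(x)."""
    is_sf = [True] * (x + 1)
    for d in range(2, int(x ** 0.5) + 1):
        for m in range(d * d, x + 1, d * d):
            is_sf[m] = False
    return sum(is_sf[1:]) / x

# Both values should agree to about three decimal places
print(squarefree_density(10 ** 6), 6 / math.pi ** 2)
```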
Effects of Glutamine on Glycemic Control During and After Exercise in Adolescents With Type 1 Diabetes
OBJECTIVE To investigate if oral glutamine ameliorates exercise and postexercise nighttime hypoglycemia in type 1 diabetic adolescents. RESEARCH DESIGN AND METHODS Ten adolescents (15.2 ± 1.4 years [SD], A1C 6.9 ± 0.9%) on insulin pumps were studied. The subjects were randomized to receive a glutamine or placebo drink pre-exercise and at bedtime (0.25 g/kg/dose). A 3:00 p.m. exercise session consisted of four 15-min treadmill/5-min rest cycles. Pre-exercise blood glucose was 140-150 mg/dl and was monitored throughout the night. Studies were randomized crossover over 3 weeks. RESULTS Blood glucose levels dropped comparably (52%) during exercise on both days. However, the overnight number of hypoglycemic events was higher on glutamine than placebo (≤70 mg/dl, P = 0.03 and ≤60 mg/dl, P = 0.05). The cumulative probability of nighttime hypoglycemia was increased on glutamine days (80%) versus placebo days (50%) (P = 0.02). CONCLUSIONS Glutamine increased the cumulative probability of postexercise overnight hypoglycemia compared with placebo in adolescents with type 1 diabetes. Whether glutamine may enhance insulin sensitivity postexercise requires further study in type 1 diabetes.
Patient-centred care in established rheumatoid arthritis.
Review of the evidence on patient-centred care (PCC) in rheumatoid arthritis (RA) shows that involving the patient as an individual - with unique needs, concerns and preferences - has a relevant impact on treatment outcomes (safety, effectiveness and costs). This approach empowers patients to take personal responsibility for their treatment. Because clinicians are only able to interact personally with their patients just a few hours per year, patients with a chronic condition such as RA should be actively involved in the management of their disease. To stimulate this active role, five different PCC activities can be distinguished: (1) patient education, (2) patient involvement/shared decision-making, (3) patient empowerment/self-management, (4) involvement of family and friends and (5) physical and emotional support. This article reviews the existing knowledge on these five PCC activities in the context of established RA management, especially focused on opportunities to increase medication adherence in established RA.
New Low-Frequency Dispersion Model for AlGaN/GaN HEMTs Using Integral Transform and State Description
A new concept for the low-frequency dispersion aspect of large-signal modeling of microwave III-V field-effect transistors is presented. The approach circumvents the integrability problem between the small-signal transconductance GmRF and the output conductance GdsRF by means of an integral formulation and simultaneously yields a proper description of the drain channel current in the small- and large-signal regime. In the theoretical description of the approach and in an extraction example of an AlGaN/GaN HEMT, it is shown that three independent 2-D nonlinear quantities determine the intrinsic drain channel current (GmRF, GdsRF, and dc current). The concept is transferred to the modeling of the nonlinear charge control, where the integrability problem between the large-signal charge functions and the small-signal intrinsic capacitance matrix (Cgs, Cgd, and Cds) is addressed consistently under consideration of the charge control delays. For the large-signal modeling under pulsed-dc/RF excitation, the dc continuous wave (dc-CW) modeling approach is combined with the state-modeling concept using a superposition formula for drain current and charges, respectively. The new model is implemented in ADS using a 12- and 14-port symbolically defined device for both the dc-CW and pulsed-RF case, respectively. The model has been verified by comparison to measured CW and pulsed-RF load-pull and waveform data at 10-GHz fundamental frequency.
Dexamethasone, paclitaxel, etoposide, cyclophosphamide (d-TEC) and G-CSF for stem cell mobilisation in multiple myeloma
Forty-one patients with multiple myeloma were treated with a novel stem cell mobilisation regimen. The primary end points were adequate stem cell mobilising ability (>1% circulating CD34-positive cells) and collection (⩾4 × 10⁶ CD34-positive cells/kg), and safety. The secondary end point was activity against myeloma. The regimen (d-TEC) consisted of dexamethasone, paclitaxel 200 mg/m² i.v., etoposide 60 mg/kg i.v., cyclophosphamide 3 g/m² i.v., and G-CSF 5-10 μg/kg/day i.v. A total of 84 cycles were administered to these 41 individuals. Patient characteristics included a median age of 53 years, a median of five prior chemotherapy cycles, and a median interval of 10 months from diagnosis of myeloma to first cycle of d-TEC. Seventy-five percent of the patients had stage II or III disease, 50% had received carmustine and/or melphalan previously, and 25% had received prior radiation therapy. Eighty-eight percent of patients mobilised adequately after the first cycle of d-TEC and 91% mobilised adequately after the second cycle. An adequate number of stem cells were collected in 32 patients. Of the remaining nine patients, three mobilised but stem cells were not collected, two mobilised but stem cell collection was <4 × 10⁶ CD34-positive cells/kg, three did not mobilise, and one died of disease progression. Major toxicities included pancytopenia, alopecia, fever and stomatitis. One patient died from multi-organ failure and progressive disease. Fifty percent of evaluable patients demonstrated a partial response and 28.6% of patients had a minor response. This novel dose-intense regimen was safe, capable of stem cell mobilisation and collection, even in heavily pre-treated patients, and active against the underlying myeloma.
Integrating the core professional values of nursing: a profession, not just a career.
To meet the predicted deficit of more than 1 million nurses by 2020, traditional nursing recruitment must target previously unrecruited populations, as well as a culturally diversified workforce including variations in age, ethnicity, gender, lifestyle, national origin, and sexual orientation. As diversity increases, differences must be bridged to acculturate new nurses to recognize and identify with a shared nursing ideology and culture. The core professional nursing values (CPNVs) impart a common foundation that unites students and nurses in a meaningful, collective culture. Baccalaureate nursing programs actively promulgate these professional nursing values; however, methods to incorporate them into curricula are often absent from the literature. Following an intervention integrating the CPNVs into academic education, students affirmed the usefulness of this approach, describing that the integration of the core values created a shared culture of professional nursing and deepened their commitment to the profession. Incorporating the CPNVs provided a promising approach that bridged the cultural chasm between a highly diverse student population and the profession of nursing by creating a shared professional culture across the myriad differences.
Linearly convergent stochastic heavy ball method for minimizing generalization error
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
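For concreteness, here is a minimal sketch of the method being analyzed: an SGD step with a fixed stepsize plus the heavy ball momentum term beta * (x_k - x_{k-1}), applied to a quadratic loss as in the paper's setting. The noise model, stepsize, and momentum value are illustrative assumptions, not the paper's analyzed constants.

```python
import numpy as np

def shb(grad, x0, lr=0.05, beta=0.9, iters=300, seed=0):
    """Stochastic heavy ball: fixed-stepsize SGD plus a momentum term."""
    rng = np.random.default_rng(seed)
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        g = grad(x) + 0.01 * rng.normal(size=x.shape)  # noisy gradient oracle
        x, x_prev = x - lr * g + beta * (x - x_prev), x
    return x

# Quadratic loss f(x) = 0.5 * ||A x - b||^2, gradient A^T (A x - b);
# the minimizer is [1/3, -2] for this diagonal A
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -2.0])
print(shb(lambda x: A.T @ (A @ x - b), np.zeros(2)))
```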
Reconstructing the Human Past using Ancient and Modern Genomes
The study of DNA variation is one of the most promising avenues for learning about the evolutionary and historical past of humans and other species. However, the difficulty associated with obtaining ...
Future Policy Directions Challenges for the States
"Who is to decide?" is the quintessential question in politics. For energy and environmental policies, the issue will be partially determined by which level of government, federal or state, serves as the central decision-making arena. This article suggests that, if brought together, the insights of two separate literatures might usefully be applied to this question. An overview of natural resources and environmental policy literature will suggest that the locus of decision-making authority in states and regions may result in more rational, less environmentally damaging decisions. An examination of the literature on federalism points toward a trend of growing state influence over and capability for decision making. The article therefore concludes with a list of some of the challenges involved in effectuating state influence over energy and environmental policy.
SU(4) pure-gauge phase structure and string tensions
We present numerical evidence that the SU(4) pure-gauge dynamics has a finite-temperature first-order phase transition. For a 6 × 20³ lattice, this transition occurs at the inverse-square coupling of 8/g² ≈ 10.79. Below this and above the known bulk phase transition at 8/g² ≈ 10.2 is a confined phase in which we find two different string tensions, one between the fundamental 4 and 4* representations and the other between the self-dual diquark 6 representations. The ratio of these two is about 1.5. The correlation in the adjoint representation suggests no string forms between adjoint charges.
Effects of Cable on the Dynamics of a Cantilever Beam with Tip Mass
The dynamic effects of cable attachment on a cantilever beam with tip mass are investigated by an improved Chebyshev spectral element method. The cabled beam is modeled as a double-beam system connected by springs at several discrete locations. By utilizing high-order Chebyshev polynomials as basis functions and meshing the system at the locations of the connections, precise numerical results for the natural frequencies and mode shapes can be obtained using only a few elements. The accuracy of this method is validated by comparing its results with those of the finite element method and of the spectral element method in the literature. The validated method is implemented to investigate the effects of parameters including spring stiffness, number of connections, and the density and Young’s modulus of the cable. The results show that the mode shapes of the cabled beam system can be classified into two types, beam mode shapes and cable mode shapes, according to their main deformation. Their corresponding natural frequencies change in very different ways with the variation of system parameters. This work can be applied to optimize the dynamic characteristics of precise spacecraft structures with cable attachments.
What is Gab: A Bastion of Free Speech or an Alt-Right Echo Chamber
Over the past few years, a number of new “fringe” communities, like 4chan or certain subreddits, have gained traction on the Web at a rapid pace. However, more often than not, little is known about how they evolve or what kind of activities they attract, even though recent research has shown that they influence how false information reaches mainstream communities. This motivates the need to monitor these communities and analyze their impact on the Web’s information ecosystem. In August 2016, a new social network called Gab was created as an alternative to Twitter. It positions itself as putting “people and free speech first”, welcoming users banned or suspended from other social networks. In this paper, we provide, to the best of our knowledge, the first characterization of Gab. We collect and analyze 22M posts produced by 336K users between August 2016 and January 2018, finding that Gab is predominantly used for the dissemination and discussion of news and world events, and that it attracts alt-right users, conspiracy theorists, and other trolls. We also measure the prevalence of hate speech on the platform, finding it to be much higher than on Twitter, but lower than on 4chan’s Politically Incorrect board.
Imputation-boosted collaborative filtering using machine learning classifiers
As data sparsity remains a significant challenge for collaborative filtering (CF), we conjecture that predicted ratings based on imputed data may be more accurate than those based on the originally very sparse rating data. In this paper, we propose a framework of imputation-boosted collaborative filtering (IBCF), which first uses an imputation technique, or perhaps a machine-learned classifier, to fill in the sparse user-item rating matrix, then runs a traditional Pearson correlation-based CF algorithm on this matrix to predict a novel rating. Empirical results show that IBCF using machine learning classifiers can improve the predictive accuracy of CF tasks. In particular, IBCF using a classifier capable of dealing well with missing data, such as naïve Bayes, can outperform content-boosted CF (a representative hybrid CF algorithm) and IBCF using PMM (predictive mean matching, a state-of-the-art imputation technique), without using external content information.
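A minimal sketch of the IBCF pipeline follows: impute the sparse matrix first, then run Pearson-correlation CF on the dense result. Item-mean imputation stands in for the machine-learned imputers (such as naïve Bayes) that the paper evaluates, and the toy ratings and neighbourhood size are invented for illustration.

```python
import numpy as np

def ibcf_predict(R, user, item, k=2):
    """Impute missing ratings (NaN), then predict via the k most
    Pearson-similar users' ratings of the target item."""
    filled = np.where(np.isnan(R), np.nanmean(R, axis=0), R)  # item means
    sims = np.array([np.corrcoef(filled[user], filled[u])[0, 1]
                     for u in range(R.shape[0])])
    sims[user] = -np.inf                       # exclude the user themself
    neighbours = np.argsort(sims)[-k:]         # k most similar users
    return filled[neighbours, item].mean()

R = np.array([[5, 4, np.nan, 1],
              [4, np.nan, 5, 1],
              [1, 2, 1, np.nan],
              [5, 5, 4, np.nan]])
print(ibcf_predict(R, user=0, item=2))         # predicted rating for user 0
```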
Block-Row Sparse Multiview Multilabel Learning for Image Classification
In image analysis, images are often represented by multiple visual features (also known as multiview features) that aim to better interpret them for achieving remarkable learning performance. Since the feature extraction processes for each view are separate, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease effectiveness. To address this, this paper simultaneously conducts hierarchical feature selection and multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed new block-row regularizer into the MVML framework. The block-row regularizer, concatenating a Frobenius norm (F-norm) regularizer and an ℓ2,1-norm regularizer, is designed to conduct hierarchical feature selection, in which the F-norm regularizer conducts high-level feature selection to select the informative views (i.e., discarding the uninformative views) and the ℓ2,1-norm regularizer then conducts low-level feature selection on the informative views. The rationale of the block-row regularizer is to avoid over-fitting, to remove redundant views and preserve the natural group structure of the data (via the F-norm regularizer), and to remove noisy features (via the ℓ2,1-norm regularizer). We further devise a computationally efficient algorithm to optimize the derived objective function and theoretically prove the convergence of the proposed optimization method. Finally, results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.
Analysis and design of topological structure for DC solid-state circuit breaker
Existing mechanical circuit breakers cannot satisfy the requirements of fast operation in power systems due to noise, electric arcing and long switching response times. Moreover, the non-grid-connected wind power system is based on the Flexible Direct Current Transmission (FDCT) technique, so it is especially necessary to research Solid-State Circuit Breakers (SSCB) to realize rapid and automatic control of the circuit breakers in the system. Meanwhile, newly developed SSCBs that operate at the natural zero-crossing point of an AC system are not suitable for a DC system. Based on the characteristics of the DC system, a novel circuit scheme is proposed in this paper. The new scheme makes full use of soft switching and resonance-forced current commutation, and successfully realizes soft turn-on and fast turn-off. In this paper, the topology of the current limiter is presented and analytical mathematical models are derived through comprehensive analysis. Finally, normal turn-on and turn-off experiments and an overload delay protection test were conducted. The results show the reliability of the novel theory and the feasibility of the proposed topology. The proposed scheme can be applied in grid-connected and non-grid-connected DC transmission and distribution systems.
Potential antibacterial mechanism of silver nanoparticles and the optimization of orthopedic implants by advanced modification technologies
Infection, a common postoperative complication of orthopedic surgery, is the main reason for implant failure. Silver nanoparticles (AgNPs) are considered a promising antibacterial agent and are often used to modify orthopedic implants to prevent infection. To optimize implants in a reasonable manner, it is critical to know the specific antibacterial mechanism, which is still unclear. In this review, we analyze the potential antibacterial mechanisms of AgNPs, and the influences of AgNPs on osteogenic-related cells, including cellular adhesion, proliferation, and differentiation, are also discussed. In addition, methods to enhance the biocompatibility of AgNPs as well as advanced implant modification technologies are summarized.
Optimization Methods for Large-Scale Machine Learning
This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
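As a minimal illustration of the SG method discussed above (a sketch only; the paper's treatment covers step-size schedules, convergence theory, and noise-reduction variants in far more depth), consider stochastic gradient descent on a least-squares objective:

    import numpy as np

    def sgd_least_squares(X, y, lr=0.01, epochs=10, seed=0):
        """SGD on f(w) = (1/2n) * sum_i (x_i . w - y_i)^2, one sample per step."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):           # visit samples in random order
                grad_i = (X[i] @ w - y[i]) * X[i]  # gradient of the i-th term only
                w -= lr * grad_i                   # cheap stochastic step
        return w

Each update touches a single sample, which is exactly why SG scales to the large-n regimes the paper is concerned with, at the cost of noisy search directions.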
Quantifying and Testing Indirect Effects in Simple Mediation Models When the Constituent Paths Are Nonlinear.
Most treatments of indirect effects and mediation in the statistical methods literature, and the corresponding methods used by behavioral scientists, have assumed linear relationships between variables in the causal system. Here we describe and extend a method first introduced by Stolzenberg (1980) for estimating indirect effects in models of mediators and outcomes that are nonlinear functions but linear in their parameters. We introduce the concept of the instantaneous indirect effect of X on Y through M, illustrate its computation, and describe a bootstrapping procedure for inference. Mplus code as well as SPSS and SAS macros are provided to facilitate the adoption of this approach and ease the computational burden on the researcher.
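A hedged sketch of the chain-rule idea behind the instantaneous indirect effect (our notation, not necessarily the paper's): suppose $M = a_0 + a_1 h(X)$ and $Y = b_0 + b_1 k(M) + c\,X$, with $h$ and $k$ nonlinear but the models linear in their parameters. Then the instantaneous indirect effect of $X$ on $Y$ through $M$ at the point $x$ is the product of derivatives

\[
\theta(x) = \frac{dM}{dX}\bigg|_{x} \cdot \frac{\partial Y}{\partial M}\bigg|_{M(x)} = a_1 h'(x)\, b_1 k'\big(M(x)\big),
\]

so that, for example, with $h(X) = \ln X$ and $k(M) = M^2$, $\theta(x) = (a_1/x) \cdot 2 b_1 M(x)$; unlike in the linear case, the effect varies with the point at which it is evaluated.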
Evolutionary algorithms for multiobjective optimization: methods and applications
Many real-world problems involve two types of problem difficulty: i) multiple, conflicting objectives and ii) a highly complex search space. On the one hand, instead of a single optimal solution, competing goals give rise to a set of compromise solutions, generally denoted as Pareto-optimal. In the absence of preference information, none of the corresponding trade-offs can be said to be better than the others. On the other hand, the search space can be too large and too complex to be solved by exact methods. Thus, efficient optimization strategies are required that are able to deal with both difficulties. Evolutionary algorithms possess several characteristics that are desirable for this kind of problem and make them preferable to classical optimization methods. In fact, various evolutionary approaches to multiobjective optimization have been proposed since 1985, capable of searching for multiple Pareto-optimal solutions concurrently in a single simulation run. However, in spite of this variety, there is a lack of extensive comparative studies in the literature. Therefore, it has remained open up to now: • whether some techniques are in general superior to others, • which algorithms are suited to which kind of problem, and • what the specific advantages and drawbacks of certain methods are. The subject of this work is the comparison and the improvement of existing multiobjective evolutionary algorithms and their application to system design problems in computer engineering. In detail, the major contributions are: • An experimental methodology to compare multiobjective optimizers is developed. In particular, quantitative measures to assess the quality of trade-off fronts are introduced, and a set of general test problems is defined, which i) are easy to formulate, ii) represent essential aspects of real-world problems, and iii) test for different types of problem difficulty. • On the basis of this methodology, an extensive comparison of numerous evolutionary techniques is performed, in which further aspects such as the influence of elitism and the population size are also investigated. • A novel approach to multiobjective optimization, the strength Pareto evolutionary algorithm, is proposed. It combines both established and new techniques in a unique manner. • Two complex multicriteria applications are addressed using evolutionary algorithms: i) the automatic synthesis of heterogeneous hardware/software systems and ii) the multidimensional exploration of software implementations for digital signal processors.
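To make the notion of Pareto-optimality above concrete, here is a minimal sketch (ours, not code from the thesis) of a dominance test and extraction of the nondominated set for a minimization problem:

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization):
        a is no worse in every objective and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Return the nondominated subset of a list of objective vectors."""
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)]) -> [(1, 5), (2, 2), (3, 1)]

The compromise solutions the abstract refers to are exactly this nondominated set: no member can be improved in one objective without worsening another.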
Real-time automatic license plate recognition for CCTV forensic applications
We propose an efficient real-time automatic license plate recognition (ALPR) framework, particularly designed to work on CCTV video footage obtained from cameras that are not dedicated to use in ALPR. At present, license plate detection, tracking, and recognition are reasonably well-tackled problems, with many successful commercial solutions available. However, existing ALPR algorithms are based on the assumption that the input video will be obtained via a dedicated, high-resolution, high-speed camera and/or supported by a controlled capture environment, with appropriate camera height, focus, exposure/shutter speed, and lighting settings. Typical video forensic applications, in contrast, may require searching for a vehicle with a particular number plate in noisy CCTV footage obtained via non-dedicated, medium-to-low-resolution cameras working under poor illumination conditions. ALPR on such video content faces severe challenges in the license plate localization, tracking, and recognition stages. This paper proposes a novel approach for efficient localization of license plates in video sequences and uses a revised version of an existing technique for tracking and recognition. A special feature of the proposed approach is that it automatically adjusts for varying camera distances and diverse lighting conditions, a requirement for a video forensic tool that may operate on videos obtained by a diverse set of unspecified, distributed CCTV cameras.
A Direct constraint on dimension eight operators from Z ---> neutrino anti-neutrino gamma
We study the radiative decay Z --> neutrino antineutrino gamma within an effective Lagrangian approach. Using the search for energetic single-photon events in the data collected by the L3 collaboration, we obtain direct bounds on the dimension-six and dimension-eight operators associated with the tau-neutrino magnetic moment and the anomalous electromagnetic properties of the Z boson. As a by-product of our calculation, we reproduce the L3 result for the bound on the tau-neutrino magnetic moment.
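For orientation, the tau-neutrino magnetic-moment coupling referred to above is conventionally written as (a standard textbook form; the paper's effective-Lagrangian conventions, and the dimension-eight operators for the Z boson's electromagnetic properties, may differ):

\[
\mathcal{L}_{\mu_\nu} = \frac{\mu_{\nu_\tau}}{2}\, \bar{\nu}_\tau \sigma^{\mu\nu} \nu_\tau\, F_{\mu\nu},
\]

which above the electroweak scale descends from a dimension-six operator once the required Higgs-field insertion is included.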
Rehabilitation robots assisting in walking training for SCI patient
We have been developing a series of robots to be applied at each stage of spinal cord injury (SCI) recovery. We describe the preliminary walking-pattern-assisting robot and the practical walking-assisting robot, which will be applied to patients ranging from incomplete SCI to quadriplegia. Preliminary experiments were performed with able-bodied subjects to verify the basic assistive functions of the robots and to prepare for tests with SCI patients.
Information Technology Adoption in Small and Medium-sized Enterprises: An Appraisal of Two Decades of Literature
Small to medium-sized enterprises (SMEs) are a major source of employment, technological advancement, and competitive advantage in both developed and developing countries. Owing to intensified competitive pressure and the necessity of entering the global market, SMEs are increasingly employing Information Technology (IT) to take advantage of its substantial benefits. Most prior research has focused on IT adoption in large organizations; however, given the limited resources controlled by SMEs, the process of IT adoption in this business sector is considerably different. The purpose of this paper is to analyze and contrast the internal and external issues affecting IT adoption in SMEs, in order to provide a clearer understanding of this process, by reviewing the IT adoption literature, which spans more than 20 years of empirical research and case studies from a variety of databases, with a strong concentration on SME-related issues. The proposed integrated framework demonstrates the process of IT adoption in SMEs by synthesizing existing perspectives in the literature. This study will assist the different parties involved in the adoption process, including managers, vendors, consultants, and governments, in achieving a practical synopsis of the IT adoption process in SMEs, which is believed to support successful adoption.
Surfacing Tacit Knowledge in Requirements Negotiation: Experiences Using EasyWinWin
Defects in the requirements definition process often lead to costly project failures. One eminent problem is that it can be difficult to take deliberate advantage of the important tacit knowledge of success-critical stakeholders. People know more than they can ever tell. Implicit stakeholder goals, hidden assumptions, and unshared expectations often result in severe problems in the later stages of software development. In this paper, we present a set of collaborative techniques that support a team of success-critical stakeholders in surfacing tacit knowledge during systems development projects. We discuss these techniques in the context of the EasyWinWin requirements negotiation methodology and illustrate our approach with examples from real-world negotiations.
Prediction of 4-year college student performance using cognitive and noncognitive predictors and the impact on demographic status of admitted students.
This study was conducted to determine the validity of noncognitive and cognitive predictors of the performance of college students at the end of their 4th year in college. Results indicate that the primary predictors of cumulative college grade point average (GPA) were Scholastic Assessment Test/American College Testing Assessment (SAT/ACT) scores and high school GPA (HSGPA), though biographical data and situational judgment measures added incrementally to this prediction. SAT/ACT scores and HSGPA were collected and used in various ways by participating institutions in the admissions process, while situational judgment measures and biodata were collected for research purposes only during the first few weeks of the participating students' freshman year. Alternative outcomes, such as self-reported performance on a range of student performance dimensions, a measure of organizational citizenship behavior, and class absenteeism, were best predicted by the noncognitive measures. The racial composition of a student body selected with only cognitive measures, or with both cognitive and noncognitive measures, under various levels of selectivity, as well as the performance of students admitted under these scenarios, is also reported. The authors concluded that both the biodata and situational judgment measures could be useful supplements to cognitive indexes of student potential in college admissions.
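The incremental-validity finding above (biodata and situational judgment measures adding to SAT/ACT and HSGPA) is the classic hierarchical-regression comparison; a minimal sketch on synthetic data (all variable names and effect sizes below are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    sat, hsgpa = rng.normal(size=n), rng.normal(size=n)    # cognitive predictors
    biodata, sjt = rng.normal(size=n), rng.normal(size=n)  # noncognitive predictors
    gpa = 0.4*sat + 0.3*hsgpa + 0.15*biodata + 0.1*sjt + rng.normal(scale=0.8, size=n)

    def r_squared(cols, y):
        X = np.column_stack([np.ones(len(y)), *cols])      # design matrix with intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ beta).var() / y.var()

    r2_cog = r_squared([sat, hsgpa], gpa)                  # step 1: cognitive only
    r2_full = r_squared([sat, hsgpa, biodata, sjt], gpa)   # step 2: add noncognitive
    print(f"cognitive R^2 = {r2_cog:.3f}; incremental gain = {r2_full - r2_cog:.3f}")

The quantity of interest is the R^2 gain from step 1 to step 2, which mirrors the "added incrementally" claim in the abstract.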