title | abstract |
---|---|
Security and privacy issues in cloud computing | This paper presents a brief review of cloud computing technology along with its deployment and service models. We focus on the security issues of cloud computing and list most of the solutions available to address them. Moreover, we list the most prominent threats recognized by the Cloud Security Alliance (CSA), together with several other threats. Finally, the privacy issues are explained. |
Altering source or amount of dietary carbohydrate has acute and chronic effects on postprandial glucose and triglycerides in type 2 diabetes: Canadian trial of Carbohydrates in Diabetes (CCD). | BACKGROUND AND AIMS
Nutrition recommendations for type 2 diabetes (T2DM) are partly guided by the postprandial responses elicited by diets varying in carbohydrate (CHO). We aimed to explore whether long-term changes in postprandial responses on low-glycemic-index (GI) or low-CHO diets were due to acute or chronic effects in T2DM.
METHODS AND RESULTS
Subjects with diet-alone-treated T2DM were randomly assigned to high-CHO/high-GI (H), high-CHO/low-GI (L), or low-CHO/high-monounsaturated-fat (M) diets for 12 months. At week 0 (Baseline), postprandial responses after H-meals (55% CHO, GI = 61) were measured from 0800 h to 1600 h. After 12 months, subjects were randomly assigned to H-meals or study diet meals (L, 57% CHO, GI = 50; M, 44% CHO, GI = 61). This yielded 5 groups: H diet with H-meals (HH, n = 34); L diet with H- (LH, n = 17) or L-meals (LL, n = 16); and M diet with H- (MH, n = 18) or M-meals (MM, n = 19). Postprandial glucose fluctuations were lower in LL than in all other groups (p < 0.001). Changes in postprandial triglycerides differed among groups (p < 0.001). After 12 months, in HH and MM both fasting and postprandial triglycerides were similar to Baseline, while in MH postprandial triglycerides were significantly higher than at Baseline (p = 0.028). In LH, triglycerides were consistently (0.18-0.34 mmol/L) higher than Baseline throughout the day, while in LL the difference from Baseline varied across the day from 0.04 to 0.36 mmol/L (p < 0.001).
CONCLUSION
Low-GI and low-CHO diets have both acute and chronic effects on postprandial glucose and triglycerides in T2DM subjects. Thus, the composition of the acute test-meal and the habitual diet should be considered when interpreting the nutritional implications of different postprandial responses. |
Fuzzy Fingerprint Vault | Biometrics-based authentication has the potential to eliminate the illegal key-exchange problem associated with traditional cryptosystems. In this paper, we explore the utilization of a fingerprint minutiae line based representation scheme in a new cryptographic construct called fuzzy vault. Minutiae variability is quantified for a fingerprint database marked by a human expert. |
Defect detection on hardwood logs using high resolution three-dimensional laser scan data | The location, type, and severity of external defects on hardwood logs and stems are the primary indicators of overall log quality and value. External defects provide hints about the internal log characteristics. Defect data would improve the sawyer's ability to process logs such that a higher valued product (lumber) is generated. Using a high-resolution laser log scanner, we scanned and digitally photographed 162 red-oak and yellow-poplar logs. By means of a new robust estimator that performs circle fitting, a residual image is extracted from laser scan data that are corrupted by extreme outliers induced by the scanning equipment and loose bark. The residuals provide information to identify defects with height differentiation from the log surface. Combining simple shape definition rules with the height map allows most severe defects to be detected by determining the contour levels of a residual image. In addition, bark texture changes can be examined such that defects not associated with a height change might be detected. |
Pediatric outcomes data collection instrument scores in ambulatory children with cerebral palsy: an analysis by age groups and severity level. | BACKGROUND
The Pediatric Outcomes Data Collection Instrument (PODCI) was developed in 1994 as a patient-based tool for use across a broad age range and wide array of musculoskeletal disorders, including children with cerebral palsy (CP). The purpose of this study was to establish means and SDs of the Parent PODCI measures by age groups and Gross Motor Function Classification System (GMFCS) levels for ambulatory children with CP.
METHODS
This instrument was one of several studied in a prospective, multicenter project of ambulatory patients with CP aged between 4 and 18 years with GMFCS levels I through III. Participants included 338 boys and 221 girls at a mean age of 11.1 years, with 370 diplegic, 162 hemiplegic, and 27 quadriplegic patients. Both the baseline and follow-up data sets of completed Parent PODCI responses were statistically analyzed.
RESULTS
Age was identified as a significant predictor of the PODCI measures of Upper Extremity Function, Transfers and Basic Mobility, Global Function, and Happiness With Physical Condition. Gross Motor Function Classification System level was a significant predictor of Transfers and Basic Mobility, Sports and Physical Function, and Global Function. Pattern of involvement, sex, and prior orthopaedic surgery were not statistically significant predictors for any of the Parent PODCI measures. Mean and SD scores were calculated for age groups stratified by GMFCS levels. Analysis of the follow-up data set validated the findings derived from the baseline data. Linear regression equations were derived, with age as a continuous variable and GMFCS level as a categorical variable, to be used to predict Parent PODCI scores.
CONCLUSIONS
The results of this study provide clinicians and researchers with a set of Parent PODCI values for comparison to age- and severity-matched populations of ambulatory patients with CP. |
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge. |
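As a minimal sketch of the general pattern this abstract describes (not the paper's exact architecture; the dot-product scoring and all tensor shapes here are illustrative placeholders), top-down attention over bottom-up region proposals reduces to scoring each region feature against a task context and taking a weighted sum:

```python
import torch
import torch.nn.functional as F

def top_down_attention(region_feats, query):
    """Weight bottom-up region features by a top-down query.

    region_feats: (k, d) tensor, one feature per detected region
                  (in the paper these come from Faster R-CNN).
    query: (d,) tensor encoding the task context, e.g., the partial
           caption state or the question representation in VQA.
    """
    scores = region_feats @ query          # relevance score per region
    weights = F.softmax(scores, dim=0)     # normalized attention weights
    return weights @ region_feats          # attended image feature, (d,)
```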
Effects of terbinafine and itraconazole on the pharmacokinetics of orally administered tramadol | Tramadol is widely used for acute, chronic, and neuropathic pain. Its primary active metabolite is O-desmethyltramadol (M1), which is mainly responsible for the μ-opioid receptor-related analgesic effect. Tramadol is metabolized to M1 mainly by the cytochrome P450 (CYP)2D6 enzyme and to other metabolites by CYP3A4 and CYP2B6. We investigated the possible interaction of tramadol with the antifungal agents terbinafine (a CYP2D6 inhibitor) and itraconazole (a CYP3A4 inhibitor). We used a randomized placebo-controlled crossover study design with 12 healthy subjects, of whom 8 were extensive and 4 were ultrarapid CYP2D6 metabolizers. On day 4 of pretreatment with terbinafine (250 mg once daily), itraconazole (200 mg once daily), or placebo, subjects were given 50 mg of tramadol orally. Plasma concentrations of tramadol and M1 were determined over 48 h and some pharmacodynamic effects over 12 h. Pharmacokinetic variables were calculated using standard non-compartmental methods. Terbinafine increased the area under the plasma concentration–time curve (AUC0-∞) of tramadol by 115% and decreased the AUC0-∞ of M1 by 64% (P < 0.001). Terbinafine increased the peak concentration (Cmax) of tramadol by 53% (P < 0.001) and decreased the Cmax of M1 by 79% (P < 0.001). After terbinafine pretreatment, the elimination half-lives of tramadol and M1 were increased by 48% and 50%, respectively (P < 0.001). Terbinafine reduced the subjective drug effect of tramadol (P < 0.001). Itraconazole had minor effects on tramadol pharmacokinetics. Terbinafine may reduce the opioid effect of tramadol and increase the risk of its monoaminergic adverse effects. Itraconazole has no meaningful interaction with tramadol in subjects who have a functional CYP2D6 enzyme. |
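The abstract notes that pharmacokinetic variables were computed with standard non-compartmental methods. As a minimal illustrative sketch (not the authors' analysis code; the concentration-time values below are hypothetical), AUC(0-t) is typically obtained with the trapezoidal rule and extrapolated to infinity using the terminal elimination slope:

```python
import numpy as np

# Hypothetical plasma concentration-time data (hours post-dose, mg/L)
t = np.array([0.5, 1, 2, 4, 8, 12, 24, 48], dtype=float)
c = np.array([45, 80, 95, 70, 40, 22, 6, 0.8], dtype=float)

# AUC(0-t) by the linear trapezoidal rule
auc_0_t = np.trapz(c, t)

# Terminal elimination rate (lambda_z) from a log-linear fit of the tail
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
lambda_z = -slope
t_half = np.log(2) / lambda_z                 # elimination half-life

# Extrapolate: AUC(0-inf) = AUC(0-t) + C_last / lambda_z
auc_0_inf = auc_0_t + c[-1] / lambda_z
print(f"AUC(0-inf) = {auc_0_inf:.1f} mg*h/L, t1/2 = {t_half:.1f} h")
```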
Direct torque control of PWM inverter-fed AC motors - a survey | This paper presents a review of recently used direct torque and flux control (DTC) techniques for voltage inverter-fed induction and permanent-magnet synchronous motors. A variety of techniques, different in concept, are described as follows: switching-table-based hysteresis DTC, direct self control, constant-switching-frequency DTC with space-vector modulation (DTC-SVM). Also, trends in the DTC-SVM techniques based on neuro-fuzzy logic controllers are presented. Some oscillograms that illustrate properties of the presented techniques are shown. |
Simple, Robust Autonomous Grasping in Unstructured Environments | The inherent uncertainty associated with unstructured grasping tasks makes establishing a successful grasp difficult. Traditional approaches to this problem involve hands that are complex, fragile, require elaborate sensor suites, and are difficult to control. In this paper, we demonstrate a novel autonomous grasping system that is both simple and robust. The four-fingered hand is driven by a single actuator, yet can grasp objects spanning a wide range of size, shape, and mass. The hand is constructed using polymer-based shape deposition manufacturing, with joints formed by elastomeric flexures and actuator and sensor components embedded in tough rigid polymers. The hand has superior robustness properties, able to withstand large impacts without damage and capable of grasping objects in the presence of large positioning errors. We present experimental results showing that the hand mounted on a three degree of freedom manipulator arm can reliably grasp 5 cm-scale objects in the presence of positioning error of up to 100% of the object size and 10 cm-scale objects in the presence of positioning error of up to 33% of the object size, while keeping acquisition contact forces low. |
Pose Embeddings: A Deep Architecture for Learning to Match Human Poses | We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method. |
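The triplet-based distance criterion mentioned above has a standard form; a minimal sketch (the margin value and the batching are placeholders, not the paper's exact setup):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull embeddings of similar poses together, push dissimilar apart.

    anchor/positive: (N, d) embeddings of image pairs with similar poses.
    negative: (N, d) embeddings of images with different poses.
    All are produced by the same deep network.
    """
    d_pos = F.pairwise_distance(anchor, positive)  # similar-pose distance
    d_neg = F.pairwise_distance(anchor, negative)  # different-pose distance
    return F.relu(d_pos - d_neg + margin).mean()   # hinge on the gap
```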
ℓ1-penalized linear mixed-effects models for high dimensional data with application to BCI | Recently, a novel statistical model has been proposed to estimate population effects and individual variability between subgroups simultaneously, by extending Lasso methods. We apply, for the first time, this so-called ℓ1-penalized linear mixed-effects model to a large-scale real-world problem: we study a large set of brain computer interface data and, through the novel estimator, are able to obtain a subject-independent classifier that compares favorably with prior zero-training algorithms. This unifying model inherently compensates for shifts in the input space attributed to the individuality of a subject. In particular, we are now for the first time able to differentiate within-subject and between-subject variability. Thus a deeper understanding of both the underlying statistical and physiological structures of the data is gained. |
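A sketch of the model family in standard mixed-effects notation (the cited estimator's exact penalty placement and likelihood form may differ):

```latex
% Linear mixed-effects model with an l1 (Lasso) penalty on the fixed
% effects beta; the random effects b_i capture between-subject variability.
y_i = X_i \beta + Z_i b_i + \varepsilon_i, \qquad
b_i \sim \mathcal{N}(0, \Psi), \quad
\varepsilon_i \sim \mathcal{N}(0, \sigma^2 I_{n_i}),
\qquad
\hat{\beta} = \arg\min_{\beta}\,
\Big\{ -\ell(\beta, \Psi, \sigma^2 \mid y) + \lambda \lVert \beta \rVert_1 \Big\}
```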
Low Annexin A1 expression predicts benefit from induction chemotherapy in oral cancer patients with moderate or poor pathologic differentiation grade | The benefit of induction chemotherapy in locally advanced oral squamous cell carcinoma (OSCC) remains to be clearly defined. Induction chemotherapy is likely to be effective for biologically distinct subgroups of patients, and biomarker development might lead to identification of the patients whose tumors are likely to respond to a particular treatment. Annexin A1 may serve as a biomarker for responsiveness to induction chemotherapy. The aim of this study was to investigate Annexin A1 expression in pre-treatment biopsies from a cohort of OSCC patients treated with surgery and post-operative radiotherapy or with docetaxel, cisplatin and 5-fluorouracil (TPF) induction chemotherapy followed by surgery and post-operative radiotherapy. Furthermore, we sought to assess the utility of Annexin A1 as a prognostic or predictive biomarker. Immunohistochemical staining for Annexin A1 was performed in pre-treatment biopsies from 232 of 256 clinical stage III/IVA OSCC patients. The Annexin A1 index was estimated as the proportion of tumor cells showing Annexin A1 cellular membrane and cytoplasm staining (low, <50%; high, ≥50% of stained cells). There was a significant correlation between Annexin A1 expression and pathologic differentiation grade (P=0.015) in OSCC patients. The proportion of patients with low Annexin A1 expression was significantly higher amongst those with moderately/poorly differentiated tumors (78/167) compared to those with well differentiated tumors (18/65). Multivariate Cox model analysis showed clinical stage (P=0.001) and Annexin A1 expression (P=0.038) to be independent prognostic risk factors. Furthermore, a low Annexin A1 expression level was predictive of longer disease-free survival (P=0.036, HR=0.620) and locoregional recurrence-free survival (P=0.031, HR=0.607) compared to high Annexin A1 expression. Patients with moderately/poorly differentiated tumors and low Annexin A1 expression benefited from TPF induction chemotherapy as measured by distant metastasis-free survival (P=0.048, HR=0.373) as well as overall survival (P=0.078, HR=0.410). Annexin A1 can be used as a prognostic biomarker for OSCC. Patients with moderately/poorly differentiated OSCC and low Annexin A1 expression can benefit from the addition of TPF induction chemotherapy to surgery and post-operative radiotherapy. Annexin A1 expression can potentially be used as a predictive biomarker to select OSCC patients with moderately/poorly differentiated tumors who may benefit from TPF induction chemotherapy. |
Multilinear Discriminant Analysis for Higher-Order Tensor Data Classification | In the past decade, great efforts have been made to extend linear discriminant analysis to higher-order data classification, generally referred to as multilinear discriminant analysis (MDA). Existing examples include general tensor discriminant analysis (GTDA) and discriminant analysis with tensor representation (DATER). Both methods attempt to resolve the problem of tensor mode dependency by iterative approximation. GTDA is known to be the first MDA method that converges over iterations. However, its performance relies highly on the tuning of the parameter in the scatter difference criterion. Although DATER usually results in better classification performance, it does not converge; yet the number of iterations executed has a direct impact on DATER's performance. In this paper, we propose a closed-form solution to the scatter difference objective in GTDA, namely direct GTDA (DGTDA), which also eliminates parameter tuning. We demonstrate that DGTDA outperforms GTDA in terms of both efficiency and accuracy. In addition, we propose constrained multilinear discriminant analysis (CMDA), which learns the optimal tensor subspace by iteratively maximizing the scatter ratio criterion. We prove both theoretically and experimentally that the value of the scatter ratio criterion in CMDA approaches its extreme value, if it exists, with bounded error, leading to superior and more stable performance in comparison to DATER. |
PHYSICAL ACTIVITY LEVEL AMONG UNIVERSITY STUDENTS: A CROSS SECTIONAL SURVEY | Background and Objective: Physical inactivity is the fourth leading risk factor for global mortality. Physical inactivity levels are rising in developing countries, and Malaysia is no exception. The Malaysian Adult Nutrition Survey 2003 reported that the prevalence of physical inactivity was 39.7%, and the prevalence was higher for women (42.6%) than men (36.7%). In Malaysia, the National Health and Morbidity Survey 2006 reported that 43.7% (5.5 million) of Malaysian adults were physically inactive. These statistics show that physical inactivity is an important public health concern in Malaysia. College students have been found to have poor physical activity habits. The objective of this study was to identify the physical activity level among students of Asia Metropolitan University (AMU) in Malaysia. |
Mechanisms underlying skin disorders induced by EGFR inhibitors | The epidermal growth factor receptor (EGFR) is a receptor tyrosine kinase that is frequently mutated or overexpressed in a large number of tumors such as carcinomas or glioblastoma. Inhibitors of EGFR activation have been successfully established for the therapy of some cancers and are more and more frequently being used as first or later line therapies. Although the side effects induced by inhibitors of EGFR are less severe than those observed with classic cytotoxic chemotherapy and can usually be handled by out-patient care, they may still be a cause for dose reduction or discontinuation of treatment that can reduce the effectiveness of antitumor therapy. The mechanisms underlying these cutaneous side effects are only partly understood. Important questions, such as the reasons for the correlation between the intensity of the side effects and the efficiency of treatment with EGFR inhibitors, remain to be answered. Optimized adjuvant strategies to accompany anti-EGFR therapy need to be found for optimal therapeutic application and improved quality of life of patients. Here, we summarize current literature on the molecular and cellular mechanisms underlying the cutaneous side effects induced by EGFR inhibitors and provide evidence that keratinocytes are probably the optimal targets for adjuvant therapy aimed at alleviating skin toxicities. |
The Use of Alternative Social Networking Sites in Higher Educational Settings: A Case Study of the E-Learning Benefits of Ning in Education | Distance education as a primary means of instruction is expanding significantly at the college and university level. Simultaneously, the growth of social networking sites (SNS) including Facebook, LinkedIn, and MySpace is also rising among today's college students. An increasing number of higher education instructors are beginning to combine distance education delivery with SNSs. However, there is currently little research detailing the educational benefits associated with the use of SNSs. Non-commercial, education-based SNSs, such as Ning in Education, have recently been shown to build communities of practice and facilitate social presence for students enrolled in distance education courses. In order to evaluate the largely unexplored educational benefits of SNSs, we surveyed graduate students enrolled in distance education courses using Ning in Education, an education-based SNS, on their attitudes toward SNSs as productive online tools for teaching and learning. The results of our study suggest that education-based SNSs can be used most effectively in distance education courses as a technological tool for improved online communication among students. Introduction The use of distance education courses as a primary instructional delivery option, especially in the higher education community, is expanding at an unprecedented rate. The 9.7% growth rate in the number of college and university students enrolled in at least one online class reported by Allen and Seaman (2007) significantly exceeded the 1.5% growth rate in the overall higher education student population during the same period. Simultaneously, the emergence and growth of commercial social networking sites (SNSs) such as Facebook, Friendster, LinkedIn, LiveJournal, and MySpace has been extensive and widespread (Boyd & Ellison, 2007). Facebook, for example, is currently the fastest growing commercial SNS in the world, with more than 300 million active user profiles (Facebook, 2009). Given the rising popularity of both distance education and SNSs, it seems logical to merge these two popular technologies with the goal of improving online teaching and learning (National School Boards Association [NSBA], 2007; University of Minnesota, 2008). Research has shown that distance education courses are often more successful when they develop communities of practice (Barab & Duffy, 2000; DeSchryver, Mishra, Koehler, & Francis, 2009) as well as encourage high levels of online social presence among students (Anderson, 2005). Fostering a sense of community is critically important, especially in an online environment where students often do not get the opportunity to meet face-to-face with other students or the instructor in the course. Since they facilitate the sharing of information—personal and otherwise—the technologies used in SNSs aid discussion and create intimacy among online students, as they have the ability to connect and build community in a socially and educationally constructed network (Educause Learning Initiative [ELI], 2007). In contrast to SNSs, course management systems (CMS), such as Blackboard and Moodle, tend to be very focused and lack the personal touch and networking capacity that SNSs offer. 
For example, instructors using a CMS may pose a question in an online discussion board and each student posts a response. However, these student posts are really not interactions at all, but merely question-and-answer sessions. Using an SNS that is user centered, rather than class centered like a CMS, has the potential to increase student engagement. SNSs can actively encourage online community building, extending learning beyond the boundaries of the classroom (Smith, 2009). A comparison of a typical SNS and a traditional CMS appears in Table 1. |
Theoretical derivation of wind power probability distribution function and applications | The instantaneous wind power contained in an air current is directly proportional to the cube of the wind speed. In practice, wind speeds are recorded in the form of a time series. It is, therefore, necessary to develop a formulation that takes into consideration the statistical parameters of such a time series. The purpose of this paper is to derive the general wind power formulation in terms of the statistical parameters by using perturbation theory, which leads to a general formulation of the wind power expectation and other statistical parameter expressions such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied to any wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived from wind speed data. It is possible to determine wind power at any desired risk level; however, in practical studies 5% or 10% risk levels are most often preferred, and the necessary simple procedure is presented for this purpose in this paper. |
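For the Weibull application mentioned above, the key closed form is standard (this is the textbook result, not the paper's full perturbation-based derivation): instantaneous power is proportional to the cube of wind speed, and the moments of a two-parameter Weibull variable have a Gamma-function form. Here ρ is air density, A the swept rotor area, and c and k the Weibull scale and shape parameters:

```latex
P = \tfrac{1}{2}\,\rho A V^{3}, \qquad
E[V^{n}] = c^{n}\,\Gamma\!\left(1 + \tfrac{n}{k}\right)
\;\;\Longrightarrow\;\;
E[P] = \tfrac{1}{2}\,\rho A\, c^{3}\,\Gamma\!\left(1 + \tfrac{3}{k}\right)
```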
A multinomial logistic regression modeling approach for anomaly intrusion detection | Although researchers have long studied using statistical modeling techniques to detect anomaly intrusion and profile user behavior, the feasibility of applying multinomial logistic regression modeling to predict multi-attack types has not been addressed, and the risk factors associated with individual major attacks remain unclear. To address these gaps, this study used the KDD-cup 1999 data and a bootstrap simulation method to fit 3000 multinomial logistic regression models with the most frequent attack types (probe, DoS, U2R, and R2L) as an unordered dependent variable, and identified 13 risk factors that are statistically significantly associated with these attacks. These risk factors were then used to construct a final multinomial model that had an ROC area of 0.99 for detecting abnormal events. Compared with the top KDD-cup 1999 winning results that were based on a rule-based decision tree algorithm, the multinomial logistic model-based classification results had similar sensitivity values in detecting normal events and a significantly lower overall misclassification rate (18.9% vs. 35.7%). The study emphasizes that the multinomial logistic regression modeling technique with the 13 risk factors provides a robust approach to detect anomaly intrusion. |
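A minimal sketch of the modeling step described above using scikit-learn (the file names are hypothetical placeholders, and the study's 13 selected risk factors and bootstrap procedure are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X: connection features (e.g., the selected risk factors);
# y: class in {normal, probe, DoS, U2R, R2L} encoded as integers.
X, y = np.load("features.npy"), np.load("labels.npy")  # hypothetical files
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# multi_class="multinomial" fits one softmax model over all attack types
clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```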
Pathologies in information bottleneck for deterministic supervised learning | Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y . To do so, IB identifies an intermediate “bottleneck” variable T that has low mutual information I(X;T ) and high mutual information I(Y ;T ). The IB curve characterizes the set of bottleneck variables that achieve maximal I(Y ;T ) for a given I(X;T ), and is typically explored by optimizing the IB Lagrangian, I(Y ;T )− βI(X;T ). Recently, there has been interest in applying IB to supervised learning, particularly for classification problems that use neural networks. In most classification problems, the output class Y is a deterministic function of the input X , which we refer to as “deterministic supervised learning”. We demonstrate three pathologies that arise when IB is used in any scenario where Y is a deterministic function of X: (1) the IB curve cannot be recovered by optimizing the IB Lagrangian for different values of β; (2) there are “uninteresting” solutions at all points of the IB curve; and (3) for classifiers that achieve low error rates, the activity of different hidden layers will not exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We finish by demonstrating these issues on the MNIST dataset. |
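For reference, the objects discussed above can be written compactly. When Y = f(X), the Markov chain Y–X–T gives both bounds below, so the information-plane frontier is piecewise linear; this geometry is (as the abstract argues) why a linear Lagrangian cannot trace out the whole curve:

```latex
\mathcal{L}_{\mathrm{IB}}(T) \;=\; I(Y;T) \;-\; \beta\, I(X;T),
\qquad
I(Y;T) \;\le\; \min\{\, I(X;T),\; H(Y) \,\}
```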
Comparative evaluation of machines for electric and hybrid vehicles based on dynamic operation and loss minimization | This paper proposes a method to evaluate induction machines for electric vehicles (EVs) and hybrid electric vehicles (HEVs). Some performance aspects of induction machines are also compared to permanent magnet synchronous machines (PMSMs). An overview of static efficiency maps is presented, but efficiency maps miss dynamic effects and under-predict induction machine efficiencies. The proposed evaluation method is based on dynamic efficiency under loss minimization and overall energy consumption over standard driving cycles that are provided by the U.S. Environmental Protection Agency. Over each of these cycles, the dynamic efficiency and drive-cycle energy are determined based on experimental motor data in combination with a dynamic HEV simulator. Results show that efficiency in the fast-changing dynamic environment of a vehicle can be higher than inferred from static efficiency maps. Overall machine efficiency is compared for rated flux, and for dynamic loss-minimizing flux control. The energy efficiency given optimum flux is typically five points higher than for rated flux. This result is comparable to published PMSM results. A PMSM is also used for comparisons, and results show that both machines can perform well in HEV and EV applications. |
Using Mappings to Prove Timing Properties | Nancy A. Lynch received the B.S. degree in mathematics from Brooklyn College, Brooklyn, NY, in 1968, and the Ph.D. degree in mathematics from the Massachusetts Institute of Technology, Cambridge, MA, in 1972. She is presently a professor of computer science and electrical engineering at Massachusetts Institute of Technology. She has also been on the computer science faculty at Georgia Institute of Technology and on the mathematics faculty at Tufts University and the University of Southern California. Her research interests are in distributed and real-time computing and theoretical computer science. In particular, she has worked on formal models and verification methods, on algorithm design and analysis, and on impossibility results. She also likes to hike and ski. |
Discriminative coupled dictionary hashing for fast cross-media retrieval | Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. Existing cross-media hashing approaches only aim at learning hash functions that preserve the intra-modality and inter-modality correlations, but do not directly capture the underlying semantic information of the multi-modal data. We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., categories). As a result, the coupled dictionaries not only preserve the intra-similarity and inter-correlation among multi-modal data, but also contain dictionary atoms that are semantically discriminative (i.e., data from the same category are reconstructed by similar dictionary atoms). To perform fast cross-media retrieval, we learn hash functions which map data from the dictionary space to a low-dimensional Hamming space. In addition, we conjecture that a balanced representation is crucial in cross-media retrieval. We introduce multi-view features on the relatively "weak" modalities into DCDH and extend it to multi-view DCDH (MV-DCDH) in order to enhance their representation capability. Experiments on two real-world data sets show that our DCDH and MV-DCDH outperform the state-of-the-art methods significantly in cross-media retrieval. |
High Performance Emulation of Quantum Circuits | As quantum computers of non-trivial size become available in the near future, it is imperative to develop tools to emulate small quantum computers. This allows for validation and debugging of algorithms as well as exploring hardware-software co-design to guide the development of quantum hardware and architectures. The simulation of quantum computers entails multiplications of sparse matrices with very large dense vectors of dimension 2^n, where n denotes the number of qubits, making this a memory-bound and network bandwidth-limited application. We introduce the concept of a quantum computer emulator as a component of a software framework for quantum computing, enabling a significant performance advantage over simulators by emulating quantum algorithms at a high level rather than simulating individual gate operations. We describe various optimization approaches and present benchmarking results, establishing the superiority of quantum computer emulators in terms of performance. |
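To make the memory-bound nature concrete, the core kernel of a gate-level state-vector simulator (the baseline the emulator improves upon) streams all 2^n amplitudes for every gate. A minimal unoptimized sketch, not the framework described in the paper:

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector.

    The state holds 2**n amplitudes; each gate application touches the
    whole vector, which is why simulation is memory-bandwidth bound.
    """
    stride = 1 << target
    for base in range(0, 1 << n, stride << 1):
        for i in range(base, base + stride):
            a0, a1 = state[i], state[i + stride]
            state[i] = gate[0, 0] * a0 + gate[0, 1] * a1
            state[i + stride] = gate[1, 0] * a0 + gate[1, 1] * a1

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                           # |000>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
apply_single_qubit_gate(state, H, target=0, n=n)         # Hadamard on qubit 0
```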
DeepSeek: Content Based Image Search & Retrieval | Most of the internet today is composed of digital media, including videos and images. With pixels becoming the currency in which most transactions happen on the internet, it is becoming increasingly important to have a way of browsing through this ocean of information with relative ease. YouTube has 400 hours of video uploaded every minute, and many millions of images are browsed on Instagram, Facebook, etc. Inspired by recent advances in the field of deep learning and the success it has achieved on various problems such as image captioning (Karpathy and Fei-Fei, 2015; Xu et al., 2015), machine translation (Bahdanau et al., 2014), word2vec, and skip-thoughts (Kiros et al., 2015), we present DeepSeek, a natural language processing based deep learning model that allows users to enter a description of the kind of images that they want to search, and in response the system retrieves all the images that semantically and contextually relate to the query. Two approaches are described in the following sections. |
A Method for Refocusing Photos using Depth from Defocus | This paper will describe a method for refocusing photos taken with a regular camera. By capturing a series of images from the same viewpoint with varying focal depth, a depth metric for the scene can be calculated and used to artificially blur the photo in a realistic way to emulate a new focal depth. |
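A minimal sketch of the depth-metric step under a common simplification (a depth-from-focus variant: pick, per pixel, the frame in the focal stack where local sharpness peaks; the paper's actual defocus model may differ, and the window sizes here are illustrative):

```python
import numpy as np
import cv2

def depth_from_focal_stack(stack):
    """stack: list of grayscale float images taken at increasing focal depths.

    Returns, per pixel, the index of the best-focused frame, usable as a
    coarse depth map for realistic synthetic refocusing/blurring.
    """
    sharpness = []
    for img in stack:
        lap = cv2.Laplacian(img, cv2.CV_64F)                   # edge response
        sharpness.append(cv2.GaussianBlur(lap ** 2, (9, 9), 0))  # local focus measure
    return np.argmax(np.stack(sharpness), axis=0)              # sharpest frame index
```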
The Role of the State in Economic Development | This paper discusses the recent literature on the role of the state in economic development. It concludes that government incentives to enact sound policies are key to economic success. It also discusses the evidence on what happens after episodes of economic and political liberalizations, asking whether political liberalizations strengthen government incentives to enact sound economic policies. The answer is mixed. Most episodes of economic liberalizations are indeed preceded by political liberalizations. But the countries that have done better are those that have managed to open up the economy first, and only later have liberalized their political system. |
Learning descriptors for object recognition and 3D pose estimation | Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data. |
UbiCrawler: a scalable fully distributed Web crawler | We report our experience in implementing UbiCrawler, a scalable distributed Web crawler, using the Java programming language. The main features of UbiCrawler are platform independence, linear scalability, graceful degradation in the presence of faults, a very effective assignment function (based on consistent hashing) for partitioning the domain to crawl, and, more generally, the complete decentralization of every task. The necessity of handling very large sets of data has highlighted some limitations of the Java APIs, which prompted the authors to partially reimplement them. |
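The assignment function mentioned above is based on consistent hashing. A minimal sketch of the idea (UbiCrawler itself is a Java system; this toy Python version, with placeholder agent names and an arbitrary replica count, only illustrates how hosts stay stably assigned when agents join or fail):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map hosts to crawler agents so that few hosts move on agent churn."""

    def __init__(self, agents, replicas=100):
        # Each agent gets many points on the ring for load balance.
        self.ring = sorted(
            (self._h(f"{a}#{r}"), a) for a in agents for r in range(replicas)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def assign(self, host):
        # First agent point clockwise from the host's hash owns the host.
        i = bisect.bisect(self.keys, self._h(host)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["agent-a", "agent-b", "agent-c"])
print(ring.assign("example.org"))  # a host always maps to the same agent
```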
Why M Heads are Better than One: Training a Diverse Ensemble of Deep Networks | Convolutional Neural Networks have achieved state-of-the-art performance on a wide range of tasks. Most benchmarks are led by ensembles of these powerful learners, but ensembling is typically treated as a post-hoc procedure implemented by averaging independently trained models with model variation induced by bagging or random initialization. In this paper, we rigorously treat ensembling as a first-class problem to explicitly address the question: what are the best strategies to create an ensemble? We first compare a large number of ensembling strategies, and then propose and evaluate novel strategies, such as parameter sharing (through a new family of models we call TreeNets) as well as training under ensemble-aware and diversity-encouraging losses. We demonstrate that TreeNets can improve ensemble performance and that diverse ensembles can be trained end-to-end under a unified loss, achieving significantly higher “oracle” accuracies than classical ensembles. |
Combination 532-nm and 1064-nm lasers for noninvasive skin rejuvenation and toning. | BACKGROUND
Noninvasive techniques for skin rejuvenation are quickly becoming standard in the treatment of mild rhytids and overall skin toning. Multiple laser wavelengths and modalities have been used with varying degrees of success, including 532-nm, 585-nm, 1064-nm, 1320-nm, 1450-nm, and 1540-nm wavelengths.
OBJECTIVES
To evaluate a combination technique using a long-pulsed, 532-nm potassium titanyl phosphate (KTP) laser and a long-pulsed 1064-nm Nd:YAG laser, separately and combined, for noninvasive photorejuvenation and skin toning and collagen enhancement and to establish efficacy and degree of success.
DESIGN
Prospective nonrandomized study with longitudinal follow-up.
SETTING
Private dermatologic surgery and laser practice.
METHODS
A total of 150 patients, with skin types I through V, were treated with long-pulsed KTP 532-nm and long-pulsed Nd:YAG 1064-nm lasers, separately and combined. For the KTP 532-nm laser, the fluences varied between 7 and 15 J/cm2 at 7- to 20-millisecond pulse durations with a 2-mm handpiece and between 6 and 15 J/cm2 at 30- to 50-millisecond pulses with a 4-mm handpiece. The 1064-nm Nd:YAG laser fluences were set at 24 to 30 J/cm2 for a 10-mm handpiece. These energies were delivered at 30- to 65-millisecond pulse durations. All subjects were treated at least 3 times and at most 6 times, depending on patient satisfaction level, at monthly intervals and were observed for up to 18 months after the last treatment.
MAIN OUTCOME MEASURES
All patients were asked to fill out a "severity scale" on which redness, pigmentation, rhytids, skin tone/tightness, texture, and patient satisfaction were noted before and after each treatment. Redness, pigmentation, rhytids, skin tone/tightness, and texture were also evaluated by the physician and another observer.
RESULTS
After 3 to 6 treatments, 50 patients treated with the 532-nm KTP laser alone showed improvement of 70% to 80% in redness and pigmentation, 30% to 50% in skin tone/tightening, 30% to 40% in skin texture, and 20% to 30% in rhytids. Another 50 patients treated with the 1064-nm Nd:YAG laser alone showed improvement of 10% to 20% in redness, 0% to 10% in pigmentation, 10% to 30% in skin tone/tightening, 20% to 30% in skin texture, and 10% to 30% in rhytids. The third group of 50 patients treated with both KTP and Nd:YAG lasers showed improvement of 70% to 80% in redness and pigmentation, 40% to 60% in skin tone/tightening, 40% to 60% in skin texture, and 30% to 40% in rhytids. Skin biopsy specimens taken at 1-, 2-, 3-, and 6-month intervals demonstrated new collagen formation.
CONCLUSIONS
All 150 patients exhibited mild to moderate improvement in the appearance of rhytids, moderate improvement in skin toning and texture, and great improvement in the reduction of redness and pigmentation. The KTP laser used alone produced results superior to those of the Nd:YAG laser. Results from combination treatment with both KTP and Nd:YAG lasers were slightly superior to those achieved with either laser alone. |
A MapReduce solution for associative classification of big data | Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of surviving CARs is used to classify unlabeled patterns. The memory usage and time complexity for each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. The proposed solution turns out to be suitable for practically addressing the associative classification of big data. |
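A minimal sketch of how candidate-rule support counting can be shaped in MapReduce terms (plain-Python stand-ins for the map and reduce phases; the paper's actual scheme is built on a distributed FP-Growth and a rule-pruning stage, which are not reproduced here, and the tiny dataset is illustrative):

```python
from collections import defaultdict
from itertools import combinations

def map_phase(transaction, label, max_len=2):
    # Emit (itemset, class) pairs: candidate rule bodies with their class.
    for k in range(1, max_len + 1):
        for itemset in combinations(sorted(transaction), k):
            yield itemset, label

def reduce_phase(pairs):
    # Aggregate support counts per candidate rule (itemset -> class).
    counts = defaultdict(int)
    for itemset, label in pairs:
        counts[(itemset, label)] += 1
    return counts

data = [({"a", "b"}, "pos"), ({"a", "c"}, "pos"), ({"b", "c"}, "neg")]
pairs = (p for tx, y in data for p in map_phase(tx, y))
print(reduce_phase(pairs))  # support of each candidate CAR
```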
EMPHASIS: An Emotional Phoneme-based Acoustic Model for Speech Synthesis System | We present EMPHASIS, an emotional phoneme-based acoustic model for speech synthesis system. EMPHASIS includes a phoneme duration prediction model and an acoustic parameter prediction model. It uses a CBHG-based regression network to model the dependencies between linguistic features and acoustic features. We modify the input and output layer structures of the network to improve the performance. For the linguistic features, we apply a feature grouping strategy to enhance emotional and prosodic features. The acoustic parameters are designed to be suitable for the regression task and waveform reconstruction. EMPHASIS can synthesize speech in real-time and generate expressive interrogative and exclamatory speech with high audio quality. EMPHASIS is designed to be a multi-lingual model and can synthesize Mandarin-English speech for now. In the experiment of emotional speech synthesis, it achieves better subjective results than other real-time speech synthesis systems. |
Vishleshan: Performance Comparison and Programming Process Mining Algorithms in Graph-Oriented and Relational Database Query Languages | Process-Aware Information Systems (PAISs) are IT systems that manage and support business processes and generate large event logs from the execution of business processes. Process Mining consists of analyzing the event logs generated by PAISs to discover business process models and to check for conformance between the discovered and actual models. The large volume of event logs generated is stored in databases. Relational databases perform well for certain classes of applications. However, there are certain classes of applications for which relational databases are not able to scale. Several NoSQL databases have emerged to address the challenges of scalability in traditional databases. Discovering social networks from event logs is one of the most challenging and important Process Mining tasks. The Similar-Task algorithm is one of the most widely used Organizational Mining techniques. Our objective is to investigate which of the databases (relational or graph) performs better for Organizational Mining under Process Mining. We implement the Similar-Task algorithm on relational and NoSQL (graph-oriented) databases using only query language constructs. We conduct an empirical analysis on a large real-world data set to compare the performance of a row-oriented database and a NoSQL graph-oriented database. |
Factors and Processes Shaping Land Cover and Land Cover Changes Along the Wisconsin River | Land use can exert a powerful influence on ecological systems, yet our understanding of the natural and social factors that influence land use and land-cover change is incomplete. We studied land-cover change in an area of about 8800 km2 along the lower part of the Wisconsin River, a landscape largely dominated by agriculture. Our goals were (a) to quantify changes in land cover between 1938 and 1992, (b) to evaluate the influence of abiotic and socioeconomic variables on land cover in 1938 and 1992, and (c) to characterize the major processes of land-cover change between these two points in time. The results showed a general shift from agricultural land to forest. Cropland declined from covering 44% to 32% of the study area, while forests and grassland both increased (from 32% to 38% and from 10% to 14% respectively). Multiple linear regressions using three abiotic and two socioeconomic variables captured 6% to 36% of the variation in land-cover categories in 1938 and 9% to 46% of the variation in 1992. Including socioeconomic variables always increased model performance. Agricultural abandonment and a general decline in farming intensity were the most important processes of land-cover change among the processes considered. Areas characterized by the different processes of land-cover change differed in the abiotic and socioeconomic variables that had explanatory power and can be distinguished spatially. Understanding the dynamics of landscapes dominated by human impacts requires methods to incorporate socioeconomic variables and anthropogenic processes in the analyses. Our method of hypothesizing and testing major anthropogenic processes may be a useful tool for studying the dynamics of cultural landscapes. |
Anticipating SMIL 2.0: The Developing Cooperative Infrastructure for Multimedia on the Web | SMIL is the W3C recommendation for bringing synchronized multimedia to the Web. Version 1.0 of SMIL was accepted as a recommendation in June 1998. Work is expected to be soon underway for preparing the next version of SMIL, version 2.0. Issues that will need to be addressed in developing version 2.0 include not just adding new features but also establishing SMIL’s relationship with various related existing and developing W3C efforts. In this paper we offer some suggestions for how to address these issues. Potential new constructs with additional features for SMIL 2.0 are presented. Other W3C efforts and their potential relationship with SMIL 2.0 are discussed. To provide a context for discussing these issues, this paper explores various approaches for integrating multimedia information with the World Wide Web. It focuses on the modeling issues on the document level and the consequences of the basic differences between text-oriented Web-pages and networked multimedia presentations. |
Action Video Games Make Dyslexic Children Read Better | Learning to read is extremely difficult for about 10% of children; they are affected by a neurodevelopmental disorder called dyslexia [1, 2]. The neurocognitive causes of dyslexia are still hotly debated [3-12]. Dyslexia remediation is far from being fully achieved [13], and the current treatments demand high levels of resources [1]. Here, we demonstrate that only 12 hr of playing action video games-not involving any direct phonological or orthographic training-drastically improve the reading abilities of children with dyslexia. We tested reading, phonological, and attentional skills in two matched groups of children with dyslexia before and after they played action or nonaction video games for nine sessions of 80 min per day. We found that only playing action video games improved children's reading speed, without any cost in accuracy, more so than 1 year of spontaneous reading development and more than or equal to highly demanding traditional reading treatments. Attentional skills also improved during action video game training. It has been demonstrated that action video games efficiently improve attention abilities [14, 15]; our results showed that this attention improvement can directly translate into better reading abilities, providing a new, fast, fun remediation of dyslexia that has theoretical relevance in unveiling the causal role of attention in reading acquisition. |
Effects of atorvastatin versus fenofibrate on apoB-100 and apoA-I kinetics in mixed hyperlipidemia. | Kinetics of apo B and apo AI were assessed in 8 patients with mixed hyperlipidemia at baseline and after 8 weeks of atorvastatin 80 mg q.d. and micronised fenofibrate 200 mg q.d. in a cross-over study. Both increased hepatic production and decreased catabolism of VLDL accounted for elevated cholesterol and triglyceride concentrations at baseline. Atorvastatin significantly decreased triglyceride, total, VLDL and LDL cholesterol and apo B concentrations (-65%, -36%, -57%, -40% and -33%, respectively, P<0.05). Kinetic analysis revealed that atorvastatin stimulated the catabolism of apo B containing lipoproteins, enhanced the delipidation of VLDL1 and decreased VLDL1 production. Fenofibrate lowered triglycerides and VLDL cholesterol (-57% and -64%, respectively, P<0.05) due to enhanced delipidation of VLDL1 and VLDL2 and increased VLDL1 catabolism. Changes of HDL particle composition accounted for the increase of HDL cholesterol during atorvastatin and fenofibrate (18% and 23%, P<0.01). Only fenofibrate increased apo AI concentrations through enhanced apo AI synthesis (45%, P<0.05). We conclude that atorvastatin exerts additional beneficial effects on the metabolism of apo B containing lipoproteins unrelated to an increase in LDL receptor activity. Fenofibrate but not atorvastatin increases apo AI production and plasma turnover. |
Unsupervised Visual Attribute Transfer with Reconfigurable Generative Adversarial Networks | Learning to transfer visual attributes typically requires a supervised dataset: corresponding images of the same identity with varying attribute values are needed to learn the transfer function. This largely limits applications of such methods, because capturing corresponding images is often difficult. To address the issue, we propose an unsupervised method for learning to transfer visual attributes. The proposed method can learn the transfer function without any corresponding images. Inspecting visualization results from various unsupervised attribute transfer tasks, we verify the effectiveness of the proposed method. |
3D Convolutional Neural Networks for Human Action Recognition | We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods. |
Controlling alpha-globin: a review of alpha-globin expression and its impact on beta-thalassemia. | Synthesis of the alpha-globin and beta-globin subunits of hemoglobin occurs at high levels during erythrocyte differentiation in a tightly controlled and coordinated fashion. Expression of alpha-globin is a fascinatingly complex process which has been meticulously defined in several recent studies, from chromatin modifications to Pol II recruitment. Following this, alpha-globin transcripts are processed and stabilized by a protein complex which binds the 3' untranslated region. Transcription and stabilization contribute to high-level expression of alpha-globin. However, translation of alpha-globin at levels exceeding beta-globin expression damages cellular membranes, as occurs in beta-thalassemia. It is, therefore, crucial that alpha-globin proteins are properly folded and stabilized, processes which are dependent on the presence of haem and AHSP. The exceedingly well-characterized process of alpha-globin expression elegantly illustrates the complex interaction of factors which are required to balance necessary high expression against the negative impacts of overexpression. |
Face-to-Face Interactions of Postpartum Depressed and Nondepressed Mother-Infant Pairs at 2 Months | Depression's influence on mother-infant interactions at 2 months postpartum was studied in 24 depressed and 22 nondepressed mother-infant dyads. Depression was diagnosed using the SADS-L and RDC. In subjects' homes, structured interactions of 3 min duration were videotaped and later coded using behavioral descriptors and a 1-s time base. Unstructured interactions were described using rating scales. During structured interactions, depressed mothers were more negative and their babies were less positive than were nondepressed dyads. The reduced positivity of depressed dyads was achieved through contingent responsiveness. Ratings from unstructured interactions were consistent with these findings. Results support the hypothesis that depression negatively influences mother-infant behavior, but indicate that influence may vary with development, chronicity, and presence of other risk factors. |
GLOBAL-SCALE OBJECT DETECTION USING SATELLITE IMAGERY | In recent years, there has been a substantial increase in the availability of high-resolution commercial satellite imagery, enabling a variety of new remote-sensing applications. One of the main challenges for these applications is the accurate and efficient extraction of semantic information from satellite imagery. In this work, we investigate an important instance of this class of challenges which involves automatic detection of multiple objects in satellite images. We present a system for large-scale object training and detection, leveraging recent advances in feature representation and aggregation within the bag-of-words paradigm. Given the scale of the problem, one of the key challenges in learning object detectors is the acquisition and curation of labeled training data. We present a crowd-sourcing based framework that allows efficient acquisition of labeled training data, along with an iterative mechanism to overcome the label noise introduced by the crowd during the labeling process. To show the competence of the presented scheme, we show detection results over several object-classes using training data captured from close to 200 cities and tested over multiple geographic locations. |
An integrated modeling method for assessment of quality systems applied to aerospace manufacturing supply chains | In highly regulated industries such as aerospace, the introduction of a new quality standard can provide the framework for developing and formulating innovative business models, which become the foundation for building a competitive, customer-centric enterprise. A number of enterprise modeling methods have been developed in recent years, mainly to offer support for enterprise design and to help specify systems requirements and solutions. However, those methods do not provide sufficient support for linking and assessing quality systems. The implementation parts of the processes linked to the standards remain unclear and ambiguous for practitioners when new standards are introduced. This paper proposes to integrate the quality elements of the new revision of AS/EN9100 through a systematic integration approach, which can help enterprises in the business re-engineering process. An assessment capability model is also presented to identify impacts on the existing system resulting from the introduction of new standards. |
Reliability of editors' subjective quality ratings of peer reviews of manuscripts. | CONTEXT
Quality of reviewers is crucial to journal quality, but there are usually too many for editors to know them all personally. A reliable method of rating them (for education and monitoring) is needed.
OBJECTIVE
Whether editors' quality ratings of peer reviewers are reliable and how they compare with other performance measures.
DESIGN
A 3.5-year prospective observational study.
SETTING
Peer-reviewed journal.
PARTICIPANTS
All editors and peer reviewers who reviewed at least 3 manuscripts.
MAIN OUTCOME MEASURES
Reviewer quality ratings, individual reviewer rate of recommendation for acceptance, congruence between reviewer recommendation and editorial decision (decision congruence), and accuracy in reporting flaws in a masked test manuscript.
INTERVENTIONS
Editors rated the quality of each review on a subjective 1 to 5 scale.
RESULTS
A total of 4161 reviews of 973 manuscripts by 395 reviewers were studied. The within-reviewer intraclass correlation was 0.44 (P<.001), indicating that 20% of the variance seen in the review ratings was attributable to the reviewer. Intraclass correlations for editor and manuscript were only 0.24 and 0.12, respectively. Reviewer average quality ratings correlated poorly with the rate of recommendation for acceptance (R=-0.34) and congruence with editorial decision (R=0.26). Among 124 reviewers of the fictitious manuscript, the mean quality rating for each reviewer was modestly correlated with the number of flaws they reported (R=0.53). Highly rated reviewers reported twice as many flaws as poorly rated reviewers.
CONCLUSIONS
Subjective editor ratings of individual reviewers were moderately reliable and correlated with reviewer ability to report manuscript flaws. Individual reviewer rate of recommendation for acceptance and decision congruence might be thought to be markers of a discriminating (ie, high-quality) reviewer, but these variables were poorly correlated with editors' ratings of review quality or the reviewer's ability to detect flaws in a fictitious manuscript. Therefore, they cannot be substituted for actual quality ratings by editors. |
UW-CSE at SemEval-2016 Task 10: Detecting Multiword Expressions and Supersenses using Double-Chained Conditional Random Fields | We describe our entry to SemEval 2016 Task 10: Detecting Minimal Semantic Units and their Meanings. Our approach uses a discriminative first-order sequence model similar to Schneider and Smith (2015). The chief novelty of our approach is a factorization of the labels into multiword expression and supersense labels, with first-order dependencies restricted within these two parts. Our submitted models achieved first place in the closed competition (CRF) and second place in the open competition (2-CRF).
Software and Hardware Implementations of Stereo Matching | Stereo matching is one of the key technologies in stereo vision systems, and it is challenging because of its ultra-high data bandwidth requirements, heavy memory access, and algorithmic complexity. To speed up stereo matching, various algorithms have been implemented using different software and hardware processing methods. This paper presents a survey of the state of research on stereo matching software and hardware implementations, based on an analysis of local and global algorithms. For different processing platforms, including CPU, DSP, GPU, FPGA, and ASIC, the software or hardware realizations are analyzed in terms of performance (represented by frame rate), efficiency (represented by MDES), and processing quality (represented by error rate). Among them, GPU, FPGA, and ASIC implementations are suitable for real-time embedded stereo matching applications because they offer low power consumption, low cost, and high performance. Finally, further stereo matching optimization technologies are pointed out, including both algorithm and parallelism optimization for data bandwidth reduction and memory storage strategy.
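For orientation, a minimal local matcher of the kind the survey classifies (winner-take-all sum of absolute differences) is sketched below; the window size and disparity range are illustrative. The triple loop makes the bandwidth and memory burden discussed in the abstract concrete: every pixel touches window-by-disparity candidate costs.

    # Minimal SAD block-matching sketch for rectified grayscale image pairs.
    import numpy as np

    def sad_disparity(left, right, max_disp=32, window=5):
        """Winner-take-all disparity by sum of absolute differences."""
        h, w = left.shape
        half = window // 2
        disparity = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
                costs = [np.abs(patch - right[y - half:y + half + 1,
                                              x - d - half:x - d + half + 1].astype(np.int32)).sum()
                         for d in range(max_disp)]
                disparity[y, x] = int(np.argmin(costs))  # lowest cost wins
        return disparity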
Forming interactivity: a tool for rapid prototyping of physical interactive products | The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios. We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity. We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.
Autonomous semantic mapping for robots performing everyday manipulation tasks in kitchen environments | In this work we report on our efforts to equip service robots with the capability to acquire 3D semantic maps. The robot autonomously explores indoor environments by calculating next-best-view poses, from which it assembles point clouds containing spatial and registered visual information. We apply various segmentation methods in order to generate initial hypotheses for furniture drawers and doors. The acquisition of the final semantic map makes use of the robot's proprioceptive capabilities and is carried out through the robot's interaction with the environment. We evaluated the proposed integrated approach in the real kitchen in our laboratory by measuring the quality of the generated map in terms of its applicability for the task at hand (e.g. resolving counter candidates by our knowledge processing system).
Metaphor Annotation: A Systematic Study (Technical Report CSRP-03-04) | This document describes a study of metaphor annotation that was carried out as part of the ATT-Meta project. The study led to a number of results about metaphor and how it is signalled that are reported elsewhere (e.g. Wallington et al 2003a, Wallington et al 2003b). It also resulted in a public database of metaphorical views (http://www.cs.bham.ac.uk/research/attmeta/DatabankDCA/index.html). The annotated files are also viewable online at: http://www.cs.bham.ac.uk/~amw/dcaProject. The primary aim of this document is to describe how the project was set up and run, and to discuss the measures we took to identify and quantify inter-annotator (dis)agreement.
Design of Greenhouse Control System Based on Wireless Sensor Networks and AVR Microcontroller | In order to accurately determine the growth of greenhouse crops, a system based on an AVR single-chip microcontroller and wireless sensor networks was developed. It transfers data through wireless transceiver devices without the need for electrical wiring, so the system structure is simple. The monitoring and management center can control the temperature and humidity of the greenhouse, measure the carbon dioxide content, and collect information such as the intensity of illumination. In addition, the system adopts multilevel energy storage: it combines energy management with energy transfer so that the energy collected by solar batteries is used reasonably, establishing a self-managing energy supply system. The system has the advantages of low power consumption, low cost, good robustness, and flexible extensibility. It provides an effective tool for monitoring and decision-making analysis of the greenhouse environment.
Video Action Detection with Relational Dynamic-Poselets
Transcatheter aortic valve implantation versus surgical aortic valve replacement for severe aortic stenosis: results from an intermediate risk propensity-matched population of the Italian OBSERVANT study. | BACKGROUND
Few studies have yielded information on comparative effectiveness of transcatheter aortic valve implantation (TAVI) versus surgical aortic valve replacement (SAVR) procedures in a real-world setting. The aim of this analysis is to describe procedural and post-procedural outcomes in a TAVI/SAVR intermediate risk propensity-matched population.
METHODS
OBSERVANT is an observational prospective multicenter cohort study, enrolling AS patients undergoing SAVR or TAVI. Propensity score method was applied to analyze procedural and post-procedural outcomes. Pairs of patients with the same probability score were matched (caliper matching).
RESULTS
The unadjusted enrolled population (N=2108) comprised 1383 SAVR patients, 602 transarterial-TAVI patients and 123 transapical-TAVI patients. The matched population comprised a total of 266 patients (133 patients in each group). A relatively low-risk population was selected (mean logistic EuroSCORE 9.4 ± 10.4% vs 8.9 ± 9.5%, SAVR vs TAVI; p=0.650). Thirty-day mortality was 3.8% for both SAVR and TAVI (p=1.000). The incidence of stroke (1.5% SAVR and 0.0% TAVI; p=0.156) and myocardial infarction (0.8% SAVR and 0.8% TAVI; p=1.000) was not statistically different between groups, whereas a higher requirement for blood transfusion was reported in the surgical cohort (49.6% vs 36.1%; p=0.026). Higher incidences of major vascular damage (5.3% vs 0.0%; p=0.007) and pacemaker implantation (0.8% vs 12.0%; p=0.001) were reported in the TAVI group.
CONCLUSIONS
Patients undergoing transcatheter and surgical treatment of severe aortic stenosis are still extremely distinct populations. In the relatively low-risk propensity-matched population analyzed, despite similar procedural and 30-day mortality, SAVR was associated with a higher risk for blood transfusion, whereas TAVI showed a significantly increased rate of vascular damage, permanent AV block and residual aortic valve regurgitation. |
Design of an endoluminal NOTES robotic system | Natural orifice transluminal endoscopic surgery, or NOTES, allows for exceedingly minimally invasive surgery but has high requirements for the dexterity and force capabilities of the tools. An overview of the ViaCath System is presented. This system is a first generation teleoperated robot for endoluminal surgery and consists of a master console with haptic interfaces, slave drive mechanisms, and 6 degree-of-freedom, long-shafted flexible instruments that run alongside a standard gastroscope or colonoscope. The system was validated through animal studies. It was discovered that the devices were difficult to introduce into the GI tract and manipulation forces were insufficient. The design of a second generation system is outlined with improvements to the instrument articulation section and a steerable overtube. Results of basic evaluation tests performed on the tools are also presented. |
A Dynamic Theory of Organizational Knowledge Creation | This paper proposes a paradigm for managing the dynamic aspects of organizational knowledge creating processes. Its central theme is that organizational knowledge is created through a continuous dialogue between tacit and explicit knowledge. The nature of this dialogue is examined and four patterns of interaction involving tacit and explicit knowledge are identified. It is argued that while new knowledge is developed by individuals, organizations play a critical role in articulating and amplifying that knowledge. A theoretical framework is developed which provides an analytical perspective on the constituent dimensions of knowledge creation. This framework is then applied in two operational models for facilitating the dynamic creation of appropriate organizational knowledge. (Self-Designing Organization; Teams; Knowledge Conversion; Organizational Innovation; Management Models) |
Assessment of website security by penetration testing using Wireshark | Evolving technology has created an inevitable threat of exposure of data that is shared online. The Wireshark tool enables the ethical hacker to reveal flaws in system security at the user authentication level. This approach to identifying vulnerabilities is deemed fit because the strategy involved in this testing is rapid and provides good success in identifying vulnerabilities. The usage of Wireshark also ensures that the procedure followed is up to the required standards. This paper discusses the need to utilize penetration testing and the benefits of using Wireshark for it, and goes on to illustrate one method of using the tool to perform penetration testing. Most areas of a network are highly susceptible to security attacks by adversaries. This paper addresses that issue by surveying various tools available for penetration testing and by providing a sample of basic penetration testing using Wireshark.
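As an illustration of the kind of inspection the paper walks through, the sketch below uses pyshark (a Python wrapper around Wireshark's tshark) to isolate HTTP POST requests in a saved capture, where credentials submitted over plain HTTP would be visible to the tester. The capture file name is hypothetical, and this is a plausible usage sketch rather than the paper's exact procedure.

    # Filter a saved capture down to HTTP POSTs with a Wireshark display filter.
    import pyshark

    capture = pyshark.FileCapture('login_session.pcap',
                                  display_filter='http.request.method == "POST"')
    for packet in capture:
        if hasattr(packet, 'http'):
            # http.file_data carries the POST body when tshark reassembles it
            print(packet.http.get_field('file_data'))
    capture.close()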
Long-term memories and experiences of childbirth in a Nordic context—a secondary analysis | The experience of childbirth is an important life experience for women. However, in-depth knowledge about long-term experiences is limited. The aim of the study was to describe women's experiences two to 20 years after birth. This study is part of a meta-synthesis project about childbearing in the Nordic countries. Methodologically, the study was a secondary analysis performed on original data from three selected qualitative studies by the authors, in three Nordic countries (Finland, Iceland and Sweden) and in two different forms of care: birth centre care and standard maternity care. There were 29 participants, both primiparous and multiparous women. The results show that women, in a long-term perspective, describe childbirth as an encounter with different participants, of which the most important is with the midwife. The midwife is also important in connection with the atmosphere experienced during birth. The childbirth experience has the potential to strengthen self-confidence and trust in others or, on the contrary, it can mean failure or distrust. Impersonal encounters leave lingering feelings of being abandoned and alone. This dimension is particularly evident in the descriptions of women who had given birth in standard maternity care. The conclusion of this study is that the childbirth experience has the potential to strengthen self-confidence and trust in others or, on the contrary, to produce feelings of failure or distrust. Maternity care should be organized in a way that emphasizes these aspects of care.
MCMC using Hamiltonian dynamics | Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time. |
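To make the mechanics concrete, here is a minimal single HMC update with a leapfrog integrator and a Metropolis accept/reject step, assuming the caller supplies the target log-density and its gradient; the step size and trajectory length are illustrative. For a standard Gaussian target, for example, log_prob = lambda q: -0.5 * q @ q and grad_log_prob = lambda q: -q.

    # One Hamiltonian Monte Carlo step (leapfrog integration + Metropolis test).
    import numpy as np

    def hmc_step(q, log_prob, grad_log_prob, step_size=0.1, n_steps=20):
        p = np.random.randn(*q.shape)            # sample fictitious momentum
        current_H = -log_prob(q) + 0.5 * p @ p   # potential + kinetic energy
        q_new, p_new = q.copy(), p.copy()
        p_new += 0.5 * step_size * grad_log_prob(q_new)    # half momentum step
        for _ in range(n_steps - 1):
            q_new += step_size * p_new                     # full position step
            p_new += step_size * grad_log_prob(q_new)      # full momentum step
        q_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_prob(q_new)    # final half step
        proposed_H = -log_prob(q_new) + 0.5 * p_new @ p_new
        if np.random.rand() < np.exp(current_H - proposed_H):  # Metropolis test
            return q_new                                       # accept proposal
        return q                                               # reject proposal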
Deep Learning and Convolutional Neural Networks in the Aid of the Classification of Melanoma | Pattern recognition in digital images is a major challenge in the machine learning area, but in recent years deep learning has spread rapidly, providing large advances in visual computing by addressing the main problems that machine learning faces. Building on these advances, this study aims to improve results on a well-known visual computing problem: the classification of melanoma, a malignant, highly invasive tumor that is easily confused with other skin diseases. To achieve this, we use deep learning techniques to obtain better results in the task of classifying whether a melanotic lesion is malignant (melanoma) or not (nevus). In this work we present a training approach using a custom dataset of skin diseases, transfer learning, convolutional neural networks, and data augmentation with the deep network ResNet (Deep Residual Network). Keywords: deep learning; convolutional neural networks; melanoma classification
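A minimal sketch of the kind of transfer-learning setup the abstract describes, assuming PyTorch/torchvision and using ResNet-18 as a stand-in for the paper's residual network; the dataset, frozen-backbone choice, and hyperparameters are assumptions for illustration.

    # Transfer learning: ImageNet-pretrained ResNet with a new 2-class head.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
    for param in model.parameters():
        param.requires_grad = False                   # freeze pretrained features
    model.fc = nn.Linear(model.fc.in_features, 2)     # melanoma vs. nevus head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    # A training loop over an (images, labels) DataLoader would go here;
    # torchvision transforms (e.g. RandomHorizontalFlip, RandomRotation)
    # provide the data augmentation the abstract mentions.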
Improving Sponsor's Experience in Reward-Based Crowdfunding: A Psychological Ownership Perspective | Psychological ownership brings a new perspective to understanding individual behaviors in the organization, team, and information system areas. In this research we use this theory to investigate how to improve the sponsor experience in the context of reward-based crowdfunding. We propose that psychological ownership is one important mental state for sponsors in crowdfunding. The research model in this study shows that psychological ownership could predict a sponsor's commitment to a project and continuous intention to contribute to it. Sponsors can develop the feeling of psychological ownership through three routes, i.e., perceived control, intimate knowing, and self-investment. In addition, social capital (sponsor expertise, social trust in the entrepreneur, and social ties) and social interaction (sponsor participation and entrepreneur activeness) are hypothesized to be important antecedents of the routes to psychological ownership in crowdfunding. The research model and hypotheses will be tested with survey data from www.zhongchou.cn, a well-known crowdfunding platform in China. The expected findings would provide interesting implications for research and practice.
Wireless Networks Design in the Era of Deep Learning: Model-Based, AI-Based, or Both? | This work deals with the use of emerging deep learning techniques in future wireless communication networks. It will be shown that data-driven approaches should not replace, but rather complement traditional design techniques based on mathematical models. Extensive motivation is given for why deep learning based on artificial neural networks will be an indispensable tool for the design and operation of future wireless communications networks, and our vision of how artificial neural networks should be integrated into the architecture of future wireless communication networks is presented. A thorough description of deep learning methodologies is provided, starting with the general machine learning paradigm, followed by a more in-depth discussion about deep learning and artificial neural networks, covering the most widely-used artificial neural network architectures and their training methods. Deep learning will also be connected to other major learning frameworks such as reinforcement learning and transfer learning. A thorough survey of the literature on deep learning for wireless communication networks is provided, followed by a detailed description of several novel case-studies wherein the use of deep learning proves extremely useful for network design. For each case-study, it will be shown how the use of (even approximate) mathematical models can significantly reduce the amount of live data that needs to be acquired/measured to implement data-driven approaches. Finally, concluding remarks describe those that in our opinion are the major directions for future research in this field. |
Single Electron 2-Bit Multiplier | A single-electron 2-bit multiplier is presented in this paper. Modern lithography techniques make it possible to confine electrons to sufficiently small dimensions that the quantization of both their charge and their energy is easily observable. When such confined electrons are allowed to tunnel to metallic leads, a single electron transistor (SET) is created. This transistor turns on and off again every time one electron is added to the isolated region. The 2-bit multiplier performs multiplication through a series of additions. For example, suppose we want to multiply 2 * 1. Instead of building a dedicated multiplier circuit, we can use an adder and perform 2 * 1 by adding 1 + 1: the first number indicates how many times the second number is added to itself. The adder used to build the 2-bit multiplier circuit is designed using single electron transistors. The logic operation of the 2-bit multiplier is verified using the simulation software "SIMON 2.0".
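As a behavioral illustration of the repeated-addition scheme described above (modeling the logic only, not the single-electron implementation), the sketch below builds a small ripple-carry adder from full-adder logic and uses it to multiply 2-bit values.

    # 2-bit multiplication through a series of additions, as the abstract describes.
    def ripple_adder(a, b):
        """Add two small non-negative integers bit by bit (full-adder logic)."""
        result, carry, shift = 0, 0, 0
        while a or b or carry:
            bit_a, bit_b = a & 1, b & 1
            s = bit_a ^ bit_b ^ carry                            # sum bit
            carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))  # carry-out
            result |= s << shift
            a, b, shift = a >> 1, b >> 1, shift + 1
        return result

    def two_bit_multiplier(a, b):
        """Multiply two 2-bit values (0..3): add b to itself a times."""
        product = 0
        for _ in range(a):
            product = ripple_adder(product, b)
        return product

    assert all(two_bit_multiplier(a, b) == a * b
               for a in range(4) for b in range(4))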
Amino Acid-based Formula in Cow's Milk Allergy: Long-term Effects on Body Growth and Protein Metabolism. | OBJECTIVES
The long-term effects of amino acid-based formula (AAF) in the treatment of cow's milk allergy (CMA) are largely unexplored. The present study comparatively evaluates body growth and protein metabolism in CMA children treated with AAF or with extensively hydrolyzed whey formula (eHWF), and healthy controls.
METHODS
A 12-month multicenter randomized controlled trial was conducted in outpatients with CMA (age 5-12 months) randomized into 2 groups, treated with AAF (group 1) or eHWF (group 2), and compared with healthy controls (group 3) fed with follow-on (if age <12 months) or growing-up formula (if age >12 months). At enrolment (T0) and after 3 (T3), 6 (T6), and 12 months (T12), a clinical evaluation was performed. At T0 and T3, serum levels of albumin, urea, total protein, retinol-binding protein, and insulin-like growth factor 1 were measured in subjects with CMA.
RESULTS
Twenty-one subjects in group 1 (61.9% boys, age 6.5 ± 1.5 months), 19 in group 2 (57.9% boys, age 7 ± 1.7 months) and 25 subjects in group 3 (48% boys, age 5.5 ± 0.5 months) completed the study. At T0, the weight z score was similar in group 1 (-0.74) and 2 (-0.76), with differences compared to group 3 (-0.17, P < 0.05). At T12, the weight z score value was similar between the 3 groups without significant differences. There were no significant changes in protein metabolism in children in groups 1 and 2.
CONCLUSION
Long-term treatment with AAF is safe and allows adequate body growth in children with CMA. |
Shear-Induced Hemolysis: Species Differences. | The nonphysiological mechanical shear stress in blood-contacting medical devices is a major factor in device-induced blood damage. Animal blood is often used to test the blood damage potential of these devices due to its easy accessibility and low cost. However, the differences in shear-induced blood damage between animals and humans have not been well characterized. The purpose of this study was to investigate shear-induced hemolysis of human blood and of three animal species commonly used in preclinical evaluation (ovine, porcine, and bovine) under shear conditions encountered in blood-contacting medical devices. Shear-induced hemolysis experiments were conducted using two single-pass blood-shearing devices. Driven by an externally pressurized reservoir, blood passes once through a small annular gap in the shearing devices, where it is exposed to a uniform high shear stress. Shear-induced hemolysis at different conditions of exposure time (0.04 to 1.5 s) and shear stress (25 to 320 Pa) was quantified for ovine, porcine, bovine, and human blood, respectively. Within these ranges of shear stress and exposure time, shear-induced hemolysis was less than 2% for the four species. The results showed that ovine blood was more susceptible to shear-induced injury than bovine, porcine, and human blood. The response of porcine and bovine blood to shear was similar to that of human blood. The dependence of hemolysis on shear stress level and exposure time was found to fit well the power-law functional form for the four species. The coefficients of the power-law models for ovine, porcine, bovine, and human blood were derived.
Montelukast, a leukotriene receptor antagonist, for the treatment of persistent asthma in children aged 2 to 5 years. | BACKGROUND
The greatest prevalence of asthma is in preschool children; however, the clinical utility of asthma therapy for this age group is limited by a narrow therapeutic index, long-term tolerability, and frequency and/or difficulty of administration. Inhaled corticosteroids and inhaled cromolyn are the most commonly prescribed controller therapies for young children with persistent asthma, although very young patients may have difficulty using inhalers, and dose delivery can be variable. Moreover, reduced compliance with inhaled therapy relative to orally administered therapy has been reported. One potential advantage of montelukast is the ease of administering a once-daily chewable tablet; additionally, no tachyphylaxis or change in the safety profile has been evidenced after up to 140 and 80 weeks of montelukast therapy in adults and pediatric patients aged 6 to 14 years, respectively. To our knowledge, this represents the first large, multicenter study to address the effects of a leukotriene receptor antagonist in children younger than 5 years of age with persistent asthma, as well as one of the few asthma studies that incorporated end points validated for use in preschool children.
OBJECTIVE
Our primary objective was to determine the safety profile of montelukast, an oral leukotriene receptor antagonist, in preschool children with persistent asthma. Secondarily, the effect of montelukast on exploratory measures of asthma control was also studied.
DESIGN AND STATISTICAL ANALYSIS
We conducted a double-blind, multicenter, multinational study at 93 centers worldwide, including 56 in the United States and 21 in countries in Africa, Australia, Europe, North America, and South America. In this study, we randomly assigned 689 patients (aged 2-5 years) to 12 weeks of treatment with placebo (228 patients) or 4 mg of montelukast as a chewable tablet (461 patients) after a 2-week placebo baseline period. Patients had a history of physician-diagnosed asthma requiring use of beta-agonist and a predefined level of daytime asthma symptoms. Caregivers answered questions twice daily on a validated, asthma-specific diary card and, at specified times during the study, completed a validated asthma-specific quality-of-life questionnaire. Physicians and caregivers completed a global evaluation of asthma control at the end of the study. Efficacy end points included: daytime and overnight asthma symptoms, daily use of beta-agonist, days without asthma, frequency of asthma attacks, number of patients discontinued because of asthma, need for rescue medication, physician and caregiver global evaluations of change, asthma-specific caregiver quality of life, and peripheral blood eosinophil counts. Although exploratory, the efficacy end points were predefined and their analyses were written in a data analysis plan before study unblinding. At screening and at study completion, a complete physical examination was performed. Routine laboratory tests were drawn at screening and weeks 6 and 12, and submitted to a central laboratory for analysis. Adverse effects were collected from caregivers at each clinic visit. An intention-to-treat approach, including all patients with a baseline measurement and at least 1 postrandomization measurement, was performed for all efficacy end points. An analysis-of-variance model with terms for treatment, study center and stratum (inhaled/nebulized corticosteroid use, cromolyn use, or none) was used to estimate treatment group means and between-group differences and to construct 95% confidence intervals. Treatment-by-age, -sex, -race, -radioallergosorbent test, -stratum, and -study center interactions were evaluated by including each term separately. Fisher's exact test was used for between-group comparisons of the frequency of asthma attacks, discontinuations from the study because of worsening asthma, need for rescue medication, and the frequencies of adverse effects. Because of an imbalance in baseline values for eosinophil counts for the 2 treatment groups, an analysis of covariance was performed on the eosinophil change from baseline with the patient's baseline as covariate.
STUDY PARTICIPANTS
Of the 689 patients enrolled, approximately 60% were boys and 60% were white. Patients were relatively evenly divided by age: 21%, 24%, 30%, and 23% were aged 2, 3, 4, and 5 years, respectively. For 77% of the patients, asthma symptoms first developed during the first 3 years of life. During the placebo baseline period, patients had asthma symptoms on 6.1 days/week and used beta-agonist on 6.0 days/week.
RESULTS
In over 12 weeks of treatment of patients aged 2 to 5 years, montelukast administered as a 4-mg chewable tablet produced significant improvements compared with placebo in multiple parameters of asthma control including: daytime asthma symptoms (cough, wheeze, trouble breathing, and activity limitation); overnight asthma symptoms (cough); the percentage of days with asthma symptoms; the percentage of days without asthma; the need for beta-agonist or oral corticosteroids; physician global evaluations; and peripheral blood eosinophils. The clinical benefit of montelukast was evident within 1 day of starting therapy. Improvements in asthma control were consistent across age, sex, race, and study center, and whether or not patients had a positive radioallergosorbent test. Montelukast demonstrated a consistent effect regardless of concomitant use of inhaled/nebulized corticosteroid or cromolyn therapy. Caregiver global evaluations, the percentage of patients experiencing asthma attacks, and improvements in quality-of-life scores favored montelukast, but were not significantly different from placebo. There were no clinically meaningful differences between treatment groups in overall frequency of adverse effects or of individual adverse effects, with the exception of asthma, which occurred significantly more frequently in the placebo group. There were no significant differences between treatment groups in the frequency of laboratory adverse effects or in the frequency of elevated serum transaminase levels. Approximately 90% of the patients completed the study.
CONCLUSIONS
Oral montelukast (4-mg chewable tablet) administered once daily is effective therapy for asthma in children aged 2 to 5 years and is generally well tolerated without clinically important adverse effects. Similarly, in adults and children aged 6 to 14 years, montelukast improves multiple parameters of asthma control. Thus, this study confirms and extends the benefit of montelukast to younger children with persistent asthma. |
[The psychometric properties of the Turkish version of Myocardial Infarction Dimensional Assessment Scale (MIDAS)]. | OBJECTIVE
The purpose of this study was to describe the psychometric properties of the Myocardial Infarction Dimensional Assessment Scale (MIDAS).
METHODS
This is a methodological cultural adaptation study. The MIDAS consists of 35 items covering seven domains: physical activity, insecurity, emotional reaction, dependency, diet, concerns over medication, and side effects, which are rated on a five-point Likert scale from 1 (never) to 5 (always). The highest possible MIDAS score is 100; quality of life (QOL) decreases as the scale score increases. Overall, 185 myocardial infarction (MI) patients were enrolled in this study. Cronbach's alpha was used for the reliability analysis. Criterion validity, structural validity, and sensitivity analysis approaches were used for the validity analysis. The New York Heart Association (NYHA) and Canadian Cardiovascular Society Functional Classification (CCSFC) were used for testing criterion validity, and the SF-36 was used for construct validity testing of the Turkish version of the MIDAS.
RESULTS
Cronbach's alpha values ranged from 0.79 to 0.90 for the seven domains of the scale. No problematic items were observed for the entire scale. The medication-related domains of the MIDAS showed considerable floor effects (35.7%-22.7%). Confirmatory factor analysis indicators [Comparative Fit Index (CFI) = 0.95 and Root Mean Square Error of Approximation (RMSEA) = 0.075] supported the construct validity of the MIDAS. Convergent validity of the MIDAS was confirmed through correlation with the SF-36 scale where appropriate. Criterion validity results were also satisfactory when comparing different stages of the NYHA and the CCSFC (p<0.05).
CONCLUSION
Overall results revealed that Turkish version of the MIDAS is a reliable and valid instrument. |
Cloud Computing and Extreme Learning Machine for a Distributed Energy Consumption Forecasting in Equipment-Manufacturing Enterprises | Energy consumption forecasting is a fundamental part of energy management in equipment-manufacturing enterprises and an important way to reduce energy consumption. This paper therefore proposes an intellectualized, short-term, distributed energy consumption forecasting model for equipment-manufacturing enterprises based on cloud computing and the extreme learning machine, considering the practical enterprise situation of massive, high-dimensional data. An analysis of real energy consumption data provided by LB Enterprise was undertaken, and the corresponding computational experiments were completed using a 32-node cloud computing cluster. The experimental results show that the forecasting accuracy of the proposed model is higher than that of traditional support vector regression and the generalized neural network algorithm. Furthermore, the proposed forecasting algorithm possesses excellent parallel performance and overcomes a single computer's insufficient computing power when facing massive, high-dimensional data, without increasing cost.
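For concreteness, a minimal extreme learning machine regressor of the kind the model builds on can be written in a few lines: the hidden-layer weights are random and fixed, and only the output weights are solved, by least squares. Shapes and names below are illustrative. Because the hidden layer is never trained, the activation matrix can be computed independently on each cloud node, which is the property a distributed design can exploit.

    # Minimal extreme learning machine (ELM) regressor sketch.
    import numpy as np

    class ELMRegressor:
        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed random weights
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)      # hidden activations
            self.beta = np.linalg.pinv(H) @ y     # output weights by least squares
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta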
Determination of pazopanib (GW-786034) in mouse plasma and brain tissue by liquid chromatography-tandem mass spectrometry (LC/MS-MS). | A simple, rapid and sensitive liquid chromatography-tandem mass spectrometric (LC/MS-MS) method has been developed and validated for the quantitative determination of pazopanib in mouse plasma and brain tissue homogenate. A single liquid-liquid extraction step with ethyl acetate was employed for analysis of pazopanib and the internal standard (IS), vandetanib. HPLC separation was performed on an XTerra(®) MS C18 column (50 mm × 4.6 mm, 5.0 μm). The mobile phase consisted of 70% acetonitrile and 30% water with 0.1% formic acid, pumped at a flow rate of 0.25 ml/min. Analysis time was 3.5 min per run, and both the analyte and IS eluted within 1.8-2.0 min. Multiple reaction monitoring (MRM) mode was utilized to detect the compounds of interest. The mass spectrometer was operated in the positive ion mode for detection. The precursor-to-product ion transitions (Q1→Q3) selected for pazopanib and the internal standard during quantitative optimization were (m/z) 438.1→357.2 and 475.0→112.2, respectively. The calibration curves were linear over the range of 3.9-1000 ng/ml in both biological matrices. The lower limit of quantification (LLOQ) for mouse plasma and brain tissue was 3.9 ng/ml. The values for inter- and intra-day precision and accuracy were well within the ranges acceptable for analytical assessment (<15%). This method was applied to determine the brain-to-plasma concentration ratio and relevant pharmacokinetic parameters of pazopanib after a single intravenous dose of 5 mg/kg in FVB wild-type mice.
Induction Motor Fault Diagnosis Based on Neuropredictors and Wavelet Signal Processing | Early detection and diagnosis of incipient faults is desirable for online condition assessment, product quality assurance and improved operational efficiency of induction motors running off power supply mains. In this paper, a model-based fault diagnosis system is developed for induction motors, using recurrent dynamic neural networks for transient response prediction and multi-resolution signal processing for nonstationary signal feature extraction. In addition to nameplate information required for the initial setup, the proposed diagnosis system uses measured motor terminal currents and voltages, and motor speed. The effectiveness of the diagnosis system is demonstrated through staged motor faults of electrical and mechanical origin. The developed system is scalable to different power ratings and it has been successfully demonstrated with data from 2.2-, 373-, and 597-kW induction motors. Incremental tuning is used to adapt the diagnosis system during commissioning on a new motor, significantly reducing the system development time. |
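A plausible sketch of the multi-resolution feature-extraction step such a system can use (an assumption for illustration, not the paper's exact pipeline): decompose the residual between the measured signal and the neural predictor's output with PyWavelets and summarize each sub-band by its energy, yielding features sensitive to nonstationary fault signatures.

    # Wavelet energy features from a prediction residual (PyWavelets).
    import numpy as np
    import pywt

    def residual_features(measured, predicted, wavelet="db4", level=3):
        """Per-sub-band energy of the wavelet decomposition of the residual."""
        residual = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
        coeffs = pywt.wavedec(residual, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])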
Effect of rikkunshito, a Japanese herbal medicine, on gastrointestinal symptoms and ghrelin levels in gastric cancer patients after gastrectomy | Gastric cancer patients who undergo gastrectomy suffer from a post-gastrectomy syndrome that includes weight loss, dumping syndrome, reflux esophagitis, alkaline gastritis, and finally malnutrition. It is important to ameliorate the post-gastrectomy symptoms to restore postoperative quality of life (QoL). The aim of this study was to investigate the effect of rikkunshito, a Japanese herbal medicine, on postoperative symptoms and ghrelin levels in gastric cancer patients after gastrectomy. Twenty-five patients who had undergone gastrectomy received 2.5 g of rikkunshito before every meal for 4 weeks, and a drug withdrawal period was established for the next 4 weeks. Changes in gastrointestinal hormones, including ghrelin, and appetite visual analog scale scores were measured, and QoL was estimated by using the European Organization for Research and Treatment of Cancer core questionnaire QLQ-C30. The Dysfunction After Upper Gastrointestinal Surgery for Cancer (DAUGS) scoring system was used to evaluate gastrointestinal symptoms after gastrectomy. Sixteen men and nine women (mean age 61.9 years) were enrolled in the study. All patients had either stage I (n = 24) or II (n = 1) disease and had undergone either distal gastrectomy (n = 17) or total gastrectomy (n = 8) by a laparoscopy-assisted approach. The mean ratio of the acyl-/total ghrelin concentration increased significantly after rikkunshito administration (Pre: 7.8 ± 2.1, 4 weeks: 10.5 ± 1.7 %, p = 0.0026). The total DAUGS score, as well as the scores reflecting limited activity due to decreased food consumption, reflux symptoms, dumping symptoms, and nausea and vomiting significantly improved after rikkunshito administration. The present study demonstrated a significant attenuation of gastrointestinal symptoms after gastrectomy by treatment with rikkunshito. Rikkunshito is potentially useful to minimize gastrointestinal symptoms after gastrectomy. |
Analysis and Design of a New AC–DC Single-Stage Full-Bridge PWM Converter With Two Controllers | Single-phase power factor correction (PFC) ac-dc converters are widely used in industry for ac-dc power conversion from the single-phase ac mains to an isolated output dc voltage. Typically, for high-power applications, such converters use an ac-dc boost input converter followed by a dc-dc full-bridge converter. A new ac-dc single-stage, high-power, universal-input PFC full-bridge pulse-width modulated converter is proposed in this paper. The converter can operate with an excellent input power factor, continuous input and output currents, and a non-excessive intermediate dc bus voltage, and it has a reduced number of semiconductor devices, thus presenting a cost-effective novel solution for such applications. In this paper, the operation of the proposed converter is explained, a steady-state analysis of its operation is performed, and the results of the analysis are used to develop a procedure for its design. The operation of the proposed converter is confirmed with results obtained from an experimental prototype.
Cefuroxime and cefuroxime axetil versus amoxicillin plus clavulanic acid in the treatment of lower respiratory tract infections | In a large multinational study, the clinical and bacteriological efficacy of intravenous cefuroxime 750 mg t.i.d. followed by oral cefuroxime axetil 500 mg b.i.d. was compared to that of amoxicillin plus clavulanic acid (CA) administered as 1.2 g intravenously t.i.d. followed by 625 mg orally t.i.d. in the treatment of lower respiratory tract infections in hospitalised patients. A total of 512 patients were entered (256 in each treatment group). All were suffering from pneumonia or acute exacerbations of chronic bronchitis or bronchiectasis and required initial parenteral antibiotic therapy. Parenteral therapy lasted 48 to 72 h and was followed by five days of oral therapy. The clinical responses in the two treatment groups were very similar: 223 of 256 (87.1 %) patients were cured or improved with cefuroxime/cefuroxime axetil compared to 220 of 256 (85.9 %) with amoxicillin/CA. Positive pre-treatment sputum samples were obtained from 44 % of the patients. Clearance rates obtained were again similar: 72.8 % with cefuroxime/cefuroxime axetil and 70 % with amoxicillin/CA. Ten percent of the isolates were beta-lactamase producers, similar numbers of which were cleared in both groups. Both regimens were generally well tolerated, with only 5 % of patients treated with the cefuroxime regimen and 4.3 % of patients treated with amoxicillin/CA experiencing drug-related adverse events. Cefuroxime/cefuroxime axetil “follow-on” therapy produces clinical and bacteriological efficacy equivalent to that of amoxicillin/CA, with the advantage of twice daily oral administration. |
An Efficient Off-line Electronic Cash System Based on the Representation Problem | We present a new off-line electronic cash system based on a problem, called the representation problem, of which little use has been made in literature thus far. Our system is the first to be based entirely on discrete logarithms. Using the representation problem as a basic concept, some techniques are introduced that enable us to construct protocols for withdrawal and payment that do not use the cut and choose methodology of earlier systems. As a consequence, our cash system is much more efficient in both computation and communication complexity than previously proposed systems. Another important aspect of our system concerns its provability. Contrary to previously proposed systems, its correctness can be mathematically proven to a very great extent. Specifically, if we make one plausible assumption concerning a single hash-function, the ability to break the system seems to imply that one can break the Diffie-Hellman problem. Our system offers a number of extensions that are hard to achieve in previously known systems. In our opinion the most interesting of these is that the entire cash system (including all the extensions) can be incorporated straightforwardly in a setting based on wallets with observers, which has the important advantage that double-spending can be prevented in the first place, rather than detecting the identity of a double-spender after the fact. In particular, it can be incorporated even under the most stringent requirements conceivable about the privacy of the user, which seems to be impossible to do with previously proposed systems. Another benefit of our system is that framing attempts by a bank have negligible probability of success (independent of computing power) by a simple mechanism from within the system, which is something that previous solutions lack entirely. Furthermore, the basic cash system can be extended to checks, multi-show cash and divisibility, while retaining its computational efficiency. Although in this paper we only make use of the representation problem in groups of prime order, similar intractable problems hold in RSA-groups (with computational equivalence to factoring and computing RSA-roots). We discuss how one can use these problems to construct an efficient cash system with security related to factoring or computation of RSA-roots, in an analogous way to the discrete log based system. Finally, we discuss a decision problem (the decision variant of the Diffie-Hellman problem) that is strongly related to undeniable signatures, which to our knowledge has never been stated in literature and of which we do not know whether it is in BPP. A proof of its status would be of interest to discrete log based cryptography in general. Using the representation problem, we show in the appendix how to batch the confirmation protocol of undeniable signatures such that polynomially many undeniable signatures can be verified in four moves. AMS Subject Classification (1991): 94A60 CR Subject Classification (1991): D.4.6
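For readers unfamiliar with the underlying problem, the toy sketch below (with deliberately small, insecure parameters) shows what a representation is and how cheaply one is verified: given generators g_1..g_k of a prime-order subgroup and an element h, a representation is a tuple of exponents a_1..a_k with prod(g_i^a_i) = h mod p. Verifying costs a few modular exponentiations; finding a representation is as hard as computing discrete logarithms.

    # Toy illustration of the representation problem in a prime-order subgroup.
    p, q = 2579, 1289                  # toy primes with q dividing p - 1; not secure
    generators = [pow(x, 2, p) for x in (2, 3, 7)]  # squares lie in the order-q subgroup

    def verify_representation(exponents, h):
        """Check that prod(g_i ** a_i) mod p equals h."""
        acc = 1
        for g, a in zip(generators, exponents):
            acc = (acc * pow(g, a, p)) % p
        return acc == h

    h = 1
    for g, a in zip(generators, [5, 11, 2]):
        h = (h * pow(g, a, p)) % p     # build h from a known representation
    assert verify_representation([5, 11, 2], h)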
User Acceptance of the E-Government Services in Malaysia: Structural Equation Modelling Approach | This paper identifies the factors that determine users' acceptance of e-Government services and their causal relationships using a theoretical model based on the Technology Acceptance Model. Data relating to the constructs were collected from 200 respondents in Malaysia and subjected to Structural Equation Modeling analysis. The proposed model fits the data well. Results indicate that the important determinants of user acceptance of e-Government services are perceived usefulness, ease of use, compatibility, interpersonal influence, external influence, self-efficacy, facilitating conditions, attitude, subjective norms, perceived behavioral control, and intention to use e-Government services/system. Finally, implications and recommendations of these findings are discussed.
Data-Driven Techniques in Disaster Information Management | Improving disaster management and recovery techniques is a national priority, given the huge toll caused by man-made and natural calamities. Data-driven disaster management aims at applying advanced data collection and analysis technologies to achieve more effective and responsive disaster management, and has undergone considerable progress in the last decade. However, to the best of our knowledge, there is currently no work that both summarizes recent progress and suggests future directions for this emerging research area. To remedy this situation, we provide a systematic treatment of the recent developments in data-driven disaster management. Specifically, we first present a general overview of the requirements and system architectures of disaster management systems and then summarize state-of-the-art data-driven techniques that have been applied to improving situation awareness as well as to addressing users' information needs in disaster management. We also discuss and categorize general data-mining and machine-learning techniques in disaster management. Finally, we recommend several research directions for further investigations.
Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations | CANDECOMP/PARAFAC (CP) has found numerous applications in a wide variety of areas such as chemometrics, telecommunications, data mining, neuroscience, and separated representations. For an order-N tensor, most CP algorithms can be computationally demanding due to the computation of gradients, which involve products between tensor unfoldings and Khatri-Rao products of all factor matrices except one. These products constitute the largest workload in most CP algorithms. In this paper, we propose a fast method to deal with this issue. The method also reduces the extra memory requirements of CP algorithms. As a result, we can accelerate the standard alternating CP algorithms 20-30 times for order-5 and order-6 tensors, and even higher ratios can be obtained for higher order tensors (e.g., N ≥ 10). The proposed method is more efficient than the state-of-the-art ALS algorithm which operates on two modes at a time (ALSo2) in the Eigenvector PLS toolbox, especially for tensors with order N ≥ 5 and high rank.
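The products the abstract describes, a tensor unfolding multiplied by the Khatri-Rao product of all factor matrices except one (often called MTTKRP), can be made concrete for a third-order tensor as in the sketch below; the dimensions, rank, and unfolding convention are one common illustrative choice, not the paper's implementation.

    # MTTKRP: the gradient bottleneck in alternating least squares CP.
    import numpy as np

    def khatri_rao(A, B):
        """Column-wise Kronecker product: (m x R) and (n x R) -> (mn x R)."""
        m, R = A.shape
        n, _ = B.shape
        return (A[:, None, :] * B[None, :, :]).reshape(m * n, R)

    I, J, K, R = 6, 5, 4, 3
    T = np.random.rand(I, J, K)
    A, B, C = (np.random.rand(d, R) for d in (I, J, K))

    # Mode-1 unfolding (I x KJ), paired with khatri_rao(C, B) so the row and
    # column orderings agree; G is the I x R factor in the ALS gradient.
    T1 = T.transpose(0, 2, 1).reshape(I, K * J)
    G = T1 @ khatri_rao(C, B)
    # The ALS update for A would then be G @ np.linalg.pinv((C.T @ C) * (B.T @ B)).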
The sil Locus in Streptococcus Anginosus Group: Interspecies Competition and a Hotspot of Genetic Diversity | The Streptococcus Invasion Locus (Sil) was first described in Streptococcus pyogenes and Streptococcus pneumoniae, where it has been implicated in virulence. The two-component peptide signaling system consists of the SilA response regulator and SilB histidine kinase along with the SilCR signaling peptide and SilD/E export/processing proteins. The presence of an associated bacteriocin region suggests this system may play a role in competitive interactions with other microbes. Comparative analysis of 42 Streptococcus Anginosus/Milleri Group (SAG) genomes reveals this to be a hot spot for genomic variability. A cluster of bacteriocin/immunity genes is found adjacent to the sil system in most SAG isolates (typically 6-10 per strain). In addition, there were two distinct SilCR peptides identified in this group, denoted here as SilCRSAG-A and SilCRSAG-B, with corresponding alleles in silB. Our analysis of the 42 sil loci showed that SilCRSAG-A is only found in Streptococcus intermedius while all three species can carry SilCRSAG-B. In S. intermedius B196, a putative SilA operator is located upstream of bacteriocin gene clusters, implicating the sil system in regulation of microbe-microbe interactions at mucosal surfaces where the group resides. We demonstrate that S. intermedius B196 responds to its cognate SilCRSAG-A, and, less effectively, to SilCRSAG-B released by other Anginosus group members, to produce putative bacteriocins and inhibit the growth of a sensitive strain of S. constellatus. |
Variation in sweet corn kernel characteristics associated with stand establishment and eating quality | A better understanding of the relationships between kernel characteristics associated with eating quality and stand establishment could be helpful in selection of superior genotypes by the sweet corn industry. A set of sweet corn (Zea mays L.) inbred lines with different endosperm mutations (su1, su1 se1 and sh2) were evaluated for field emergence and seedling growth rate at two locations over two years. Kernel characteristics associated with eating quality (kernel moisture concentration, kernel tenderness, sugars, phytoglycogen and dimethyl sulfide (DMS) concentrations were determined for the same inbreds by laboratory analysis from ears harvested at 18 and 22 days after pollination (DAP). Amounts of sugars, phytoglycogen and starch were also measured in mature dry kernel samples of the same inbreds. Extensive genetic variability was found among endosperm mutations and among genetic backgrounds within the different endosperm groups for most of the characteristics under study. Most of the kernel attributes associated with eating quality were uncorrelated indicating that selection to improve specific eating quality characteristics can be conducted simultaneously. A negative correlation between field emergence and sugar concentrations in immature kernels suggests that in breeding programs designed to develop germplasm with improved germination and stand establishment, concurrent attention must be given to the fresh quality of the harvested product. This information is of value to breeders and commercial growers for selection of sh2 and su1 se1 lines with superior field emergence and eating quality. |
A Proposal of Metrics for Botnet Detection Based on Its Cooperative Behavior | In this paper, we propose three metrics for detecting botnets by analyzing their behavior. Our social infrastructure (i.e., the Internet) is currently exposed to the danger of bots' malicious activities as the scale of botnets increases. Although it is imperative to detect botnets to help protect computers from attacks, effective metrics for botnet detection have not been adequately researched. In this work we measure enormous amounts of traffic passing through the Asian Internet Interconnection Initiatives (AIII) infrastructure. To validate the effectiveness of our proposed metrics, we analyze the measured traffic in three experiments. The experimental results reveal that our metrics are applicable for detecting botnets, but further research is needed to refine their performance.
The financial market impact of UK quantitative easing 1 | We measure the impact of the UK's initial 2009–10 Quantitative Easing (QE) Programme on bonds and other assets. First, we use a macro-finance yield curve both to create a counterfactual path for bond yields and to estimate the impact of QE directly. Second, we analyse the impact of individual QE operations on a range of asset prices. We find that QE significantly lowered government bond yields through the portfolio balance channel – by around 50 to 100 basis points. We also uncover significant effects of individual operations but limited pass through to other assets. |
Transparent, conductive graphene electrodes for dye-sensitized solar cells. | Transparent, conductive, and ultrathin graphene films, as an alternative to the ubiquitously employed metal oxides window electrodes for solid-state dye-sensitized solar cells, are demonstrated. These graphene films are fabricated from exfoliated graphite oxide, followed by thermal reduction. The obtained films exhibit a high conductivity of 550 S/cm and a transparency of more than 70% over 1000-3000 nm. Furthermore, they show high chemical and thermal stabilities as well as an ultrasmooth surface with tunable wettability. |
Determinants of maize seed income and adoption of foundation seed production: evidence from Palpa District of Nepal | Maize is the second most important staple crop in terms of area and production in Nepal. The production and yield of maize are low in Nepal as compared to other similar agro-climatic regions. Seed is considered a vital input in production, and the yield of maize can be increased by using improved seeds and technologies. Farmers involved in foundation seed production were generating good income as compared to certified seed production. The maize seed sector in Nepal is handicapped by low domestic research and production capacity, which results in a poor supply of breeder and foundation seed for multiplication. Hence, this study aims to investigate the determinants of income from maize seed and of the adoption of foundation seed production in Palpa District of Nepal. Palpa District was selected for the study because of its high contribution to maize seed production. The sample size was determined using the software Raosoft. A total of 182 samples were selected using a simple random sampling technique. Descriptive statistics, a probit model, an income regression model and an instrumental variable model were used to analyze the data. The per-hectare income from foundation seed production was higher than that from certified seed by NRs. 51,541. The study revealed that the schooling year of the household head, family type, active members, farm category, total income from maize seed production and training received had a statistically significant effect on the adoption of foundation seed production. It was found that income increased by about 44% for households producing foundation seed as compared to certified seed. This higher income is mainly driven by the higher yield as well as the higher price of the foundation seed. The study also revealed that an increase in area under maize seed by one hectare would increase income by 242%. Results of the instrumental variable model showed that foundation seed production and extension services received do not significantly affect maize seed income. This study identified foundation seed production as a profitable farm business in Palpa District of Nepal. However, very few farmers adopted this technology due to lack of proper training and extension services. Farmers should focus on increasing the area under foundation seed production to achieve higher returns.
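As a hedged sketch of how the adoption part of such an analysis can be run in Python with statsmodels (the covariates and data below are synthetic stand-ins, not the study's dataset), a probit model regresses the binary adoption decision on household characteristics:

    # Probit model of a binary adoption decision (synthetic illustration).
    import numpy as np
    import statsmodels.api as sm

    n = 182                                  # sample size reported in the study
    rng = np.random.default_rng(1)
    X = rng.normal(size=(n, 3))              # stand-ins for schooling, income, training
    adopt = (X @ np.array([0.8, -0.3, 0.5]) + rng.normal(size=n) > 0).astype(int)

    result = sm.Probit(adopt, sm.add_constant(X)).fit(disp=0)
    print(result.summary())                  # coefficient signs give adoption drivers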
Prevalence of erythromycin and clindamycin resistance among clinical isolates of the Streptococcus anginosus group in Germany. | Members of the Streptococcus anginosus group (SAG) are frequently involved in pyogenic infections in humans. In the present study, the antimicrobial susceptibility of 141 clinical SAG isolates to six antimicrobial agents was analysed by agar dilution. All isolates were susceptible to penicillin, cefotaxime and vancomycin. However, 12.8 % displayed increased MIC values (0.12 mg l(-1)) for penicillin. Resistance to erythromycin was detected in eight (5.7 %) isolates. Characterization of the erythromycin-resistant isolates with the double-disc diffusion test revealed Macrolide-Lincosamide-Streptogramin(B) and M-type resistance in six and two isolates, respectively. The erythromycin-resistant isolates were further characterized by PCR for the resistance genes ermA, ermB and mefA. Resistance and intermediate resistance to ciprofloxacin were detected in two and six isolates, respectively. Molecular typing by PFGE revealed a high genetic heterogeneity among the SAG isolates and no evidence for a clonal relationship between the erythromycin-resistant isolates. Our data show that resistance to erythromycin, clindamycin and ciprofloxacin has emerged among SAG isolates in Germany. The implications of these findings for susceptibility testing and antimicrobial therapy of SAG infections are discussed. |
Complexity of Adaptation in Real-World Case-Based Reasoning Systems | The essence of Case-Based Reasoning (CBR) as a problem solving paradigm is that solutions are generated by adapting the solutions of similar problems rather than solving the problem from first principles. In this paper we present a categorisation of problem solving tasks, arranged according to complexity. In addition we categorise CBR systems according to the complexity of the adaptation process involved. We describe three CBR systems: a system for property valuation, a system for software design and a system for modelling in engineering analysis. We discuss the manner in which the advantage of a CBR solution to these problems shifts as the task becomes more complex and the complexity of the adaptation process changes.
Effect of selected yogic practices on the management of hypertension. | On the basis of the medical officer's diagnosis, thirty-three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables, viz. systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital, and the control group did not participate in any of the treatment stimuli. Yoga was imparted in the morning and in the evening at 1 h per session per day for a total period of 11 weeks. Medical treatment comprised drug intake every day for the whole experimental period. The results of the pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.
Cataract surgical coverage and self-reported barriers to cataract surgery in a rural Myanmar population. | PURPOSE
The aim of this study is to determine the cataract surgical coverage and investigate the barriers to cataract surgery as reported by those with cataract-induced visual impairment in rural Myanmar.
METHODS
A cross-sectional, population-based survey of inhabitants 40 years of age and over from villages in the Meiktila District (central Myanmar); 2481 eligible participants were identified and 2076 participated. Data recording included corrected visual acuity, dilated slit lamp examination and stereoscopic fundus examination. Lens opacity was graded using the Lens Opacities Classification System III. Participants with cataract-induced visual impairment (acuity < 6/18 in better eye) were also invited to respond to a verbal questionnaire about barriers to cataract surgery.
RESULTS
Cataract surgical coverage for visual acuity cut-offs of <6/18, <6/60 and <3/60 was 9.74%, 20.11% and 22.3%, respectively, for people, and 4.18%, 9.39% and 13.47%, respectively, for eyes. Cataract surgical coverage was higher for men than for women, but gender was not associated with refusal of services. Of the 239 who responded to the extra questionnaire, 216 were blind or had low vision owing to cataract. Three-quarters refused referral for surgery: cost and fear of surgery were the most frequently reported barriers.
CONCLUSION
Cost plays a large role in the burden of cataract in this region. Implementation of educational programmes, reforms to local health service and subsidization of ophthalmic care may improve the uptake of cataract surgery. |
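As a worked illustration of the coverage figures above, cataract surgical coverage (CSC) for persons is conventionally the number of people operated divided by the number operated plus those still needing surgery at a given acuity cut-off. The sketch below encodes that ratio; the counts are hypothetical placeholders, not the Meiktila study data.

```python
# Minimal sketch of the cataract surgical coverage calculation; the example
# counts below are illustrative, not taken from the survey.
def csc(operated: int, needing_surgery: int) -> float:
    """CSC (%) = operated / (operated + unoperated needing surgery) * 100."""
    return 100.0 * operated / (operated + needing_surgery)

# Person-level CSC at the <6/18 cut-off (hypothetical counts)
print(f"CSC (persons, <6/18): {csc(operated=38, needing_surgery=352):.2f}%")
```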
Dietary intake, lung function and airway inflammation in Mexico City school children exposed to air pollutants | INTRODUCTION
Air pollutant exposure has been associated with an increase in inflammatory markers and a decline in lung function in asthmatic children. Several studies suggest that dietary intake of fruits and vegetables might modify the adverse effect of air pollutants.
METHODS
A total of 158 asthmatic children recruited at the Children's Hospital of Mexico and 50 non-asthmatic children were followed for 22 weeks. Pulmonary function was measured and nasal lavage collected and analyzed every 2 weeks. Dietary intake was evaluated using a 108-item food frequency questionnaire and a fruit and vegetable index (FVI) and a Mediterranean diet index (MDI) were constructed. The impact of these indices on lung function and interleukin-8 (IL-8) and their interaction with air pollutants were determined using mixed regression models with random intercept and random slope.
RESULTS
FVI was inversely related to IL-8 levels in nasal lavage (p < 0.02), with a significant inverse trend (test for trend p < 0.001); MDI was positively related to lung function (p < 0.05), and children in the highest category of MDI had a higher FEV1 (test for trend p < 0.12) and FVC (test for trend p < 0.06) than children in the lowest category. A significant interaction was observed between FVI and ozone for FEV1 and FVC, as well as between MDI and ozone for FVC. No effect of diet was observed among healthy children.
CONCLUSION
Our results suggest that fruit and vegetable intake and close adherence to the Mediterranean diet have a beneficial effect on inflammatory response and lung function in asthmatic children living in Mexico City. |
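The analysis above relies on mixed regression models with a random intercept and a random slope per child. A minimal sketch of such a model with statsmodels follows; the variable names and the synthetic repeated-measures data are assumptions for illustration, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: 50 children, 11 biweekly visits (hypothetical)
rng = np.random.default_rng(0)
n_children, n_visits = 50, 11
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_children), n_visits),
    "ozone": rng.normal(60, 15, n_children * n_visits),
    "fvi": np.repeat(rng.integers(0, 4, n_children), n_visits).astype(float),
})
df["fev1"] = (1.8 + 0.02 * df["fvi"] - 0.004 * df["ozone"]
              + rng.normal(0, 0.1, len(df)))

# Random intercept per child plus a random slope on ozone (re_formula)
model = smf.mixedlm("fev1 ~ fvi * ozone", df,
                    groups=df["child_id"], re_formula="~ozone")
print(model.fit().summary())
```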
The effect of n−3 fatty acids on coronary atherosclerosis: Results from SCIMO, an angiographic study, background and implications | According to the model of “response to injury”, the arterial endothelium is occasionally injured in hyperlipidemia, hypertension, diabetes mellitus and in other states known as risk factors. The ensuing inflammatory response is modulated by cytokines and growth factors, among them platelet-derived growth factor (PDGF) and monocyte chemoattractant protein-1 (MCP-1). In two independent studies, we demonstrated that mRNA levels for PDGF-A and -B and for MCP-1 are reduced after ingestion of n−3 fatty acids by human volunteers. This reduction persists after monocyte stimulation/differentiation by adherence. Moreover, the reduction is brought about only by dietary n−3 fatty acids and not by other classes of unsaturated fatty acids (n−6 or n−9). This appears to be one major mechanism of action behind the reduced progression/increased regression of established coronary artery disease upon ingestion of 1.5 g/d n−3 fatty acids, as assessed by coronary angiography in a randomized placebo-controlled double-blind intervention study in 223 patients. The study was conducted according to “Good Clinical Practice”, the comprehensive rules regulating investigations with pharmaceutical compounds. Together, our investigations lend support to the importance of PDGF-A, PDGF-B, and MCP-1 in the pathogenesis of atherosclerosis, and the beneficial role of n−3 fatty acids therein.
SEE: Towards Semi-Supervised End-to-End Scene Text Recognition | Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results. |
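To illustrate the detection stage described above, the following toy PyTorch sketch shows how a localization network can predict affine transforms and crop text-region patches with a spatial transformer (affine_grid plus grid_sample). The architecture, region count and patch size are simplified assumptions, not SEE's actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextRegionSampler(nn.Module):
    """Toy spatial-transformer step: predict N affine transforms from an
    image and crop N text-region patches, loosely after SEE's detection
    stage (hypothetical architecture, not the paper's)."""
    def __init__(self, num_regions=2, patch=(32, 100)):
        super().__init__()
        self.num_regions = num_regions
        self.patch = patch
        self.loc = nn.Sequential(               # localization network
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 16, 6 * num_regions),
        )

    def forward(self, x):
        theta = self.loc(x).view(-1, self.num_regions, 2, 3)
        patches = []
        for r in range(self.num_regions):
            grid = F.affine_grid(theta[:, r],
                                 (x.size(0), 1, *self.patch),
                                 align_corners=False)
            patches.append(F.grid_sample(x, grid, align_corners=False))
        return patches  # each patch would feed the recognition network

x = torch.randn(4, 1, 64, 200)  # batch of grayscale scene images
print([p.shape for p in TextRegionSampler()(x)])
```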
A Comparison of Transient Annealing Methods for Silicide Formation | Nickel and cobalt silicides have been formed by raster-scanned electron beam and flash-lamp irradiation of thin metal films on single crystal (100) and (111) silicon wafers. RBS and channelling measurements indicate that the NiSi 2 is epitaxial and of good crystalline quality (X min 4% on (111)); epitaxial CoSi 2 was more difficult to form and of somewhat poorer quality. The elastic recoil technique has been used to determine bulk and interfacial light element contamination. These measurements have been correlated with resistivity and SEM studies of the surface textures. |
Estimating the Impact of the Death Penalty on Murder | This paper reviews the econometric issues in efforts to estimate the impact of the death penalty on murder, focusing on six recent studies published since 2003. We highlight the large number of choices that must be made when specifying the various panel data models that have been used to address this question. There is little clarity about the knowledge potential murderers have concerning the risk of execution: are they influenced by the passage of a death penalty statute, the number of executions in a state, the proportion of murders in a state that leads to an execution, and details about the limited types of murders that are potentially susceptible to a sentence of death? If an execution rate is a viable proxy, should it be calculated using the ratio of last year’s executions to last year’s murders, last year’s executions to the murders a number of years earlier, or some other values? We illustrate how sensitive various estimates are to these choices. Importantly, the most up-to-date OLS panel data studies generate no evidence of a deterrent effect, while three 2SLS studies purport to find such evidence. The 2SLS studies, none of which shows results that are robust to clustering their standard errors, are unconvincing because they all use a problematic structure based on poorly measured and theoretically inappropriate pseudo-probabilities that are |
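Since the critique above centres on 2SLS estimates built from noisy execution-rate proxies, a minimal numpy sketch of the two-stage estimator may help fix ideas: the endogenous regressor is first projected onto the instruments, then the outcome is regressed on the fitted values. The data-generating process below is synthetic and chosen so the true effect is zero; it is not drawn from any of the six reviewed studies.

```python
import numpy as np

def two_sls(y, X_endog, X_exog, Z):
    """2SLS: regress endogenous regressors on instruments + exogenous vars,
    then regress y on the fitted values + exogenous vars."""
    W1 = np.column_stack([Z, X_exog])                     # first-stage design
    X_hat = W1 @ np.linalg.lstsq(W1, X_endog, rcond=None)[0]
    W2 = np.column_stack([X_hat, X_exog])                 # second stage
    return np.linalg.lstsq(W2, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 1))                         # instrument
u = rng.normal(size=n)                              # unobserved confounder
x = (z[:, 0] + u + rng.normal(size=n)).reshape(-1, 1)  # endogenous exec. rate
y = 0.0 * x[:, 0] - 1.0 * u + rng.normal(size=n)       # true effect is zero
const = np.ones((n, 1))
print(two_sls(y, x, const, z))  # first coefficient ≈ 0, unlike naive OLS
```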
Guillain-Barré syndrome. | Guillain-Barré syndrome consists of a group of neuropathic conditions characterized by progressive weakness and diminished or absent myotatic reflexes. The estimated annual incidence in the United States is 1.65 to 1.79 per 100,000 persons. Guillain-Barré syndrome is believed to result from an aberrant immune response that attacks nerve tissue. This response may be triggered by surgery, immunizations, or infections. The most common form of the disease, acute inflammatory demyelinating polyradiculoneuropathy, presents as progressive motor weakness, usually beginning in the legs and advancing proximally. Symptoms typically peak within four weeks, then plateau before resolving. More than one-half of patients experience severe pain, and about two-thirds have autonomic symptoms, such as cardiac arrhythmias, blood pressure instability, or urinary retention. Advancing symptoms may compromise respiration and vital functions. Diagnosis is based on clinical features, cerebrospinal fluid testing, and nerve conduction studies. Cerebrospinal fluid testing shows increased protein levels but a normal white blood cell count. Nerve conduction studies show a slowing, or possible blockage, of conduction. Patients should be hospitalized for multidisciplinary supportive care and disease-modifying therapy. Supportive therapy includes controlling pain with nonsteroidal anti-inflammatory drugs, carbamazepine, or gabapentin; monitoring for respiratory and autonomic complications; and preventing venous thrombosis, skin breakdown, and deconditioning. Plasma exchange therapy has been shown to improve short-term and long-term outcomes, and intravenous immune globulin has been shown to hasten recovery in adults and children. Other therapies, including corticosteroids, have not demonstrated benefit. About 3 percent of patients with Guillain-Barré syndrome die. Neurologic problems persist in up to 20 percent of patients with the disease, and one-half of these patients are severely disabled. |
Action recognition with multiscale spatio-temporal contexts | The popular bag-of-words approach for action recognition is based on classifying the density of quantized local features. This approach focuses excessively on the local features themselves and discards all information about the interactions among them. Local features alone may not be discriminative enough, but combined with their contexts they can be very useful for recognizing some actions. In this paper, we present a novel representation that captures contextual interactions between interest points, based on the density of all features observed in each interest point's multiscale spatio-temporal contextual domain. We demonstrate that augmenting local features with our contextual feature significantly improves recognition performance.
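A minimal sketch of the contextual representation idea, under simplifying assumptions: interest points are given as (x, y, t) coordinates, and the context feature for each point is simply the count of other features inside spatio-temporal balls of increasing radius. The paper's actual descriptor is richer; this only illustrates the density-over-multiscale-domains construction.

```python
import numpy as np

def multiscale_context(points, scales=(8.0, 16.0, 32.0)):
    """points: (N, 3) array of (x, y, t); returns (N, len(scales)) densities."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # pairwise spatio-temporal distance
    feats = np.stack([(dist < r).sum(axis=1) - 1  # exclude the point itself
                      for r in scales], axis=1)
    return feats / feats.max()                    # crude normalisation

pts = np.random.default_rng(0).uniform(0, 100, size=(200, 3))
print(multiscale_context(pts).shape)  # (200, 3): one density per point per scale
```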
Prefuse: a Toolkit for Interactive Information Visualization | Although information visualization (infovis) technologies have proven indispensable tools for making sense of complex data, wide-spread deployment has yet to take hold, as successful infovis applications are often difficult to author and require domain-specific customization. To address these issues, we have created prefuse, a software framework for creating dynamic visualizations of both structured and unstructured data. prefuse provides theoretically-motivated abstractions for the design of a wide range of visualization applications, enabling programmers to string together desired components quickly to create and customize working visualizations. To evaluate prefuse we have built both existing and novel visualizations testing the toolkit's flexibility and performance, and have run usability studies and usage surveys finding that programmers find the toolkit usable and effective. |
On the need of certification in computational electromagnetics based engineering services | During the last five decades computing power has become more widely available. Consequently, the solving of practical engineering and scientific problems has shifted towards virtual, computer-based assessments. A new field based on mathematical models and modern computers arose from this development and is called scientific computing. It consists of numerical mathematics, computer science and the related engineering discipline, and it differs from laboratory experiments and theoretical work, which are the common forms of engineering work. Nowadays computational electromagnetics is a self-contained engineering service. This paper approaches the problem of providing proof of reliability and creating confidence in this special industrial sector.
Cancer and the threat of death: the cognitive dynamics of death-thought suppression and its impact on behavioral health intentions. | Five studies examined the cognitive association between thoughts of cancer and thoughts of death and their implication for screening intentions. Study 1 found that explicit contemplation of cancer did not increase death-thought accessibility. In support of the hypothesis that this reflects suppression of death-related thoughts, Study 2 found that individuals who thought about cancer exhibited elevated death-thought accessibility under high cognitive load, and Study 3 demonstrated that subliminal primes of the word cancer led to increased death-thought accessibility. Study 4 revealed lower levels of death-thought accessibility when perceived vulnerability to cancer was high, once again suggesting suppression of death-related thoughts in response to conscious threats associated with cancer. Study 5 extended the analysis by finding that after cancer salience, high cognitive load, which presumably disrupts suppression of the association between cancer and death, decreased cancer-related self-exam intentions. Theoretical and practical implications for understanding terror management, priming and suppression, and responses to cancer are discussed. |
CTSUM: extracting more certain summaries for news articles | People often read summaries of news articles in order to get reliable information about an event or a topic. However, the information expressed in news articles is not always certain, and some sentences contain uncertain information about the event. Existing summarization systems do not consider whether a sentence in a news article is certain or not. In this paper, we propose a novel system called CTSUM to incorporate the new factor of information certainty into the summarization task. We first analyze the sentences in news articles and automatically predict the certainty levels of sentences by using the support vector regression method with a few useful features. The predicted certainty scores are then incorporated into a summarization system with a graph-based ranking algorithm. Experimental results on a manually labeled dataset verify the effectiveness of the sentence certainty prediction technique, and experimental results on the DUC2007 dataset show that our new summarization system can not only produce summaries with better content quality, but also produce summaries with higher certainty.
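A minimal sketch of CTSUM's two stages under strong simplifications: TF-IDF features stand in for the paper's hand-crafted certainty features, a support vector regressor predicts sentence certainty, and a cosine-similarity PageRank supplies the graph-based salience that is then weighted by certainty. The toy sentences and labels are invented for illustration.

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVR
from sklearn.metrics.pairwise import cosine_similarity

sentences = ["The agency confirmed the deal on Monday.",
             "Officials may possibly revisit the plan.",
             "The merger was completed last year.",
             "Some experts suggest results could change."]
labels = [0.9, 0.3, 0.95, 0.35]   # toy certainty annotations

X = TfidfVectorizer().fit_transform(sentences)
certainty = SVR().fit(X, labels).predict(X)        # stage 1: predict certainty

sim = cosine_similarity(X)
np.fill_diagonal(sim, 0.0)
g = nx.from_numpy_array(sim)
scores = nx.pagerank(g)                            # stage 2: graph-based ranking
final = {i: scores[i] * certainty[i] for i in range(len(sentences))}
print(max(final, key=final.get))                   # most salient + certain sentence
```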
Effect of code coverage on software reliability measurement | Summary & Conclusions—Existing software reliability-growth models often over-estimate the reliability of a given program. Empirical studies suggest that the over-estimations exist because the models do not account for the nature of the testing. Every testing technique has a limit to its ability to reveal faults in a given system. Thus, as testing continues in its region of saturation, no more faults are discovered and inaccurate reliability-growth phenomena are predicted by the models. This paper presents a technique intended to solve this problem, using both time & code coverage measures for the prediction of software failures in operation. Coverage information collected during testing is used to consider only the effective portion of the test data. Execution time between test cases that neither increases code coverage nor causes a failure is reduced by a parameterized factor. Experiments were conducted to evaluate this technique, on a program created in a simulated environment with simulated faults, and on two industrial systems that contained tens of ordinary faults. Two well-known reliability models, Goel-Okumoto and Musa-Okumoto, were applied both to the raw data and to the data adjusted using this technique. Results show that overestimation of reliability is properly corrected in the cases studied. This new approach has the potential not only to achieve more accurate applications of software reliability models, but to reveal effective ways of conducting software testing.
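For reference, the Goel-Okumoto model mentioned above has mean-value function m(t) = a(1 − e^(−bt)), where a is the expected total number of faults and b the per-fault detection rate. The sketch below fits it to synthetic cumulative failure counts; the paper's adjustment would additionally compress the time increments that add no coverage before fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Goel-Okumoto mean-value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 21, dtype=float)          # test time (e.g. hours), synthetic
failures = goel_okumoto(t, 50, 0.15) + np.random.default_rng(2).normal(0, 1, 20)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, failures, p0=(40, 0.1))
print(f"expected total faults a={a_hat:.1f}, detection rate b={b_hat:.3f}")
```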
Distributed Constraint Optimization Problems and Applications: A Survey | The field of multi-agent system (MAS) is an active area of research within artificial intelligence, with an increasingly important impact in industrial and other real-world applications. In a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as a prominent agent model to govern the agents’ autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have been proposed to enable support of MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, offering a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions, and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas. |
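As a concrete anchor for the DCOP model surveyed above: variables with finite domains, binary cost tables, and the goal of a minimum-total-cost assignment. The sketch below solves a three-variable toy instance by exhaustive centralized search, purely for illustration; actual DCOP algorithms (e.g., DPOP, Max-Sum, ADOPT) are distributed among the agents.

```python
from itertools import product

domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}
costs = {  # binary constraints: costs[(vi, vj)][(di, dj)]
    ("x1", "x2"): {(0, 0): 3, (0, 1): 0, (1, 0): 1, (1, 1): 2},
    ("x2", "x3"): {(0, 0): 2, (0, 1): 1, (1, 0): 0, (1, 1): 4},
}

def total_cost(assign):
    """Sum constraint costs under a full assignment of values to variables."""
    return sum(table[(assign[i], assign[j])] for (i, j), table in costs.items())

best = min((dict(zip(domains, vals)) for vals in product(*domains.values())),
           key=total_cost)
print(best, total_cost(best))   # minimum-cost assignment and its cost
```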
Mitigating Pilot Contamination by Pilot Reuse and Power Control Schemes for Massive MIMO Systems | The performance of massive multiple input multiple output systems may be limited by inter-cell pilot contamination (PC) unless appropriate PC mitigation or avoidance schemes are employed. In this paper we develop techniques based on existing long term evolution (LTE) measurements, namely open loop power control (OLPC) and pilot sequence reuse schemes, that avoid PC within a group of cells. We compare the performance of the simple least-squares channel estimator with the higher-complexity minimum mean square error estimator, and evaluate the performance of the recently proposed coordinated pilot allocation (CPA) technique (which is appropriate in cooperative systems). The performance measures of interest include the normalized mean square error of channel estimation and the downlink signal-to-interference-plus-noise ratio and spectral efficiency when employing maximum ratio transmission or zero-forcing precoding at the base station. We find that for terminals moving at vehicular speeds, PC can be effectively mitigated in an operation and maintenance node using both the OLPC and the pilot reuse schemes. Additionally, greedy CPA provides performance gains only for a fraction of terminals, at the cost of degradation for the rest of the terminals and higher complexity. These results indicate that in practice, PC may be effectively mitigated without the need for second-order channel statistics or inter-cell cooperation.
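To make the estimator comparison above tangible, here is a minimal numpy sketch contrasting the least-squares and LMMSE pilot-based channel estimates for a single user with known channel covariance, in an idealised single-cell setup without pilot contamination; the parameters are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(3)
M, snr = 64, 10.0                         # BS antennas, pilot SNR (linear)
R = np.eye(M)                             # channel covariance (assumed known)
h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
noise = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2 * snr)
y = h + noise                             # received pilot (unit pilot symbol)

h_ls = y                                               # least-squares estimate
h_mmse = R @ np.linalg.inv(R + np.eye(M) / snr) @ y    # LMMSE estimate

for name, est in (("LS", h_ls), ("MMSE", h_mmse)):
    print(name, np.mean(np.abs(est - h) ** 2))  # MMSE error should be lower
```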
Methods for evaluating games: how to measure usability and user experience in games? | This workshop addresses current needs in the games developer community and games industry to evaluate the overall user experience of games. New forms of interaction techniques, like gestures, eye-tracking or even bio-physiological input and feedback, expose the limits of current evaluation methods for user experience, and even of standard usability evaluation used during game development. This workshop intends to bring together practitioners and researchers sharing their experiences using methods from HCI to explore and measure usability and user experience in games. We also invite contributions from other disciplines (especially from the games industry) showing new concepts for user experience evaluation.