SmartPhoto: A Resource-Aware Crowdsourcing Approach for Image Sensing with Smartphones
Photos obtained via crowdsourcing can be used in many critical applications. Due to the limitations of communication bandwidth, storage and processing capability, it is a challenge to transfer the huge amount of crowdsourced photos. To address this problem, we propose a framework, called SmartPhoto, to quantify the quality (utility) of crowdsourced photos based on the accessible geographical and geometrical information (called metadata) including the smartphone's orientation, position and all related parameters of the built-in camera. From the metadata, we can infer where and how the photo is taken, and then only transmit the most useful photos. Three optimization problems regarding the tradeoffs between photo utility and resource constraints, namely the Max-Utility problem, the online Max-Utility problem and the Min-Selection problem, are studied. Efficient algorithms are proposed and their performance bounds are theoretically proved. We have implemented SmartPhoto in a testbed using Android based smartphones, and proposed techniques to improve the accuracy of the collected metadata by reducing sensor reading errors and solving object occlusion issues. Results based on real implementations and extensive simulations demonstrate the effectiveness of the proposed algorithms.
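The abstract does not spell out how photo utility is computed, but a minimal 2D sketch of how the metadata (position, orientation, field of view) could decide whether a photo covers a point of interest might look as follows; the function name, parameters and the planar geometry are illustrative assumptions, not the authors' actual formulation.

```python
import math

def covers_target(cam_xy, heading_deg, fov_deg, max_range, target_xy):
    """Return True if a target point lies inside the camera's viewing sector.

    cam_xy / target_xy : (x, y) positions in a local planar frame
    heading_deg        : orientation of the optical axis (0 = +x direction)
    fov_deg            : horizontal field of view of the built-in camera
    max_range          : assumed useful photographing distance
    """
    dx, dy = target_xy[0] - cam_xy[0], target_xy[1] - cam_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest angular difference between optical axis and target bearing
    diff = (bearing - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# Example: a photo taken at the origin, facing along +x with a 60-degree FOV
print(covers_target((0, 0), 0, 60, 100, (50, 10)))  # True
```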
Accurate wild animal recognition using PCA, LDA and LBPH
In this paper, the performances of image recognition methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Local Binary Patterns Histograms (LBPH) are tested and compared for the recognition of input animal images. The main idea of this paper is to present an independent, comparative study of the benefits and drawbacks of these popular image recognition methods. Two sets of experiments are conducted for relative performance evaluation. In the first part of our experiments, the recognition accuracy of PCA, LDA and LBPH is demonstrated. The overall execution time of the animal recognition process is evaluated in the second set of our experiments. We conduct tests on an animal database created for this purpose. All algorithms have been tested on 300 different subjects (60 images for each class). The experimental results show that the PCA features provide better results than LDA and LBPH for a large training set. On the other hand, LBPH is better than PCA and LDA for a small training data set.
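As a rough illustration of this kind of comparison, the following sketch contrasts a PCA-based pipeline with LDA on synthetic data using scikit-learn; LBPH is omitted because it is typically applied through OpenCV's face module, and the dataset, split and classifier choices here are assumptions rather than the paper's protocol.

```python
# Illustrative comparison of PCA+kNN and LDA classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1500, n_features=64, n_informative=20,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca_knn = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
lda = LinearDiscriminantAnalysis()

for name, model in [("PCA+kNN", pca_knn), ("LDA", lda)]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```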
Integrated control strategy for islanded operation in smart grids: Virtual inertia and ancillary services
Distributed generation has become a consolidated phenomenon in distribution grids in the last few years. Even though the matter is complex and multifaceted, islanded operation of distribution grids is being considered as a possible measure to improve service continuity. In this paper a novel static converter control strategy to obtain frequency and voltage regulation in islanded distribution grids is proposed. Two situations are investigated: in the former, one electronic converter and one synchronous generator are present, while in the latter only static generation is available. In both cases, converters are assumed to be powered by DC micro-grids comprising generation and storage devices. In the first case the converter control realizes virtual inertia and efficient frequency regulation by means of a PID regulator; this approach makes it possible to emulate a very high equivalent inertia and to obtain fast frequency regulation, which would not be possible with traditional regulators. In the second situation a Master-Slave approach is adopted to maximize frequency and voltage stability. Simulation results confirm that the proposed control allows islanded operation with high frequency and voltage stability under heavy load variations.
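The control design itself is not given in the abstract; the toy simulation below only illustrates the idea of a regulator acting on converter power to emulate inertia and restore frequency after a load step. The first-order swing-type model, gains and parameter values are all assumptions.

```python
import numpy as np

H_virtual = 5.0      # emulated inertia constant [s] (assumed)
Kp, Ki = 20.0, 40.0  # illustrative regulator gains
dt, T = 0.001, 5.0
load_step = 0.2      # per-unit load increase applied at t = 0
f_dev, integ = 0.0, 0.0

history = []
for t in np.arange(0, T, dt):
    err = -f_dev                       # drive frequency deviation to zero
    integ += err * dt
    p_conv = Kp * err + Ki * integ     # converter power command
    # swing-type dynamics: 2H * d(df)/dt = p_conv - load_step
    f_dev += dt * (p_conv - load_step) / (2 * H_virtual)
    history.append(f_dev)

print("final frequency deviation:", history[-1])
```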
Nanolubricant Oil Additives for Performance Improvement of the Intermediate Gearbox in the AH-64D Helicopter
This paper presents a new nanolubricant for the intermediate gearbox of the Apache aircraft. Historically, the intermediate gearbox has been prone to grease leakage, and this naturally occurring fault has negatively impacted the airworthiness of the aircraft. In this study, the incorporation of graphite nanoparticles in mobile aviation gear oil is presented as a nanofluid with excellent thermo-physical properties. Condition-based maintenance practices are demonstrated in which four nanoparticle-additive oil samples with different concentrations are tested in a full-scale tail rotor drive-train test stand, in addition to a baseline sample for comparison purposes. Different condition monitoring results suggest that the nanofluids can provide significant gearbox performance benefits when compared to the base oil.
DDI-CPI, a server that predicts drug–drug interactions through implementing the chemical–protein interactome
Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock the user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of predicted DDIs, users can explore the PK/PD proteins that might be involved in a particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/.
SUN database: Large-scale scene recognition from abbey to zoo
Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes.
Search with Meanings: An Overview of Semantic Search Systems
Research on semantic search aims to improve conventional information search and retrieval methods, and to facilitate information acquisition, processing, storage and retrieval on the semantic web. The past ten years have seen a number of implemented semantic search systems and various proposed frameworks. A comprehensive survey is needed to gain an overall view of current research trends in this field. We have investigated a number of pilot projects and corresponding practical systems, focusing on their objectives, methodologies and most distinctive characteristics. In this paper, we report our study and findings, based on which a generalised semantic search framework is formalised. Further, we describe issues with regard to future research in this area.
Left atrial appendage morphology, echocardiographic characterization, procedural data and in-hospital outcome of patients receiving left atrial appendage occlusion device implantation: a prospective observational study
BACKGROUND Implantation of left atrial appendage (LAA) occlusion devices has been shown to be a feasible and effective alternative to oral anticoagulation in patients with non-valvular atrial fibrillation. However, only few data about in-hospital and peri-procedural outcomes are currently available. This study aims to report echocardiographic, procedural and in-hospital data of patients receiving LAA occlusion devices. METHODS This single-center, prospective, observational study consecutively included patients eligible for percutaneous implantation of LAA occlusion devices (either Watchman™ or Amplatzer™ Cardiac Plug 2). Data on pre- and peri-procedural transesophageal echocardiography (TEE), implantation and procedure-related in-hospital complications were collected. The primary efficacy outcome measure was successful device implantation without relevant peri-device leaks (i.e., < 5 mm). RESULTS In total, 37 patients were included, 22 receiving the Watchman™ and 15 the ACP 2 device. Baseline characteristics did not differ significantly between the two patient groups. The primary efficacy outcome measure was reached in 91.9% of patients (90.9% for the Watchman™ and 93.3% for the ACP 2 group). One device embolization (Watchman™ group) with successful retrieval occurred (2.7% of patients). No thromboembolism or device thrombosis was present. The majority of bleeding events were access-site bleedings (88.3% of all bleedings), consisting mostly of mild hematomas corresponding to a BARC type 1 bleeding (80.0% of all access-site complications). One patient died due to septic shock (non-procedure related). CONCLUSIONS In daily real-life practice, percutaneous treatment with LAA occlusion devices appears to be effective and safe.
Cloud Computing Adoption by Higher Education Institutions in Saudi Arabia: Analysis Based on TOE
(1) Background, Motivation and Objective: Academic study of cloud computing within Saudi Arabia is an emerging research field. Saudi Arabia represents the largest economy in the Arabian Gulf region, which positions it as a potential market for cloud computing technologies. Adoption of new innovations should be preceded by analysis of the added value, challenges and adequacy from technological, organizational and environmental perspectives. (2) Statement of Contribution/Method: This cross-sectional exploratory empirical research is based on the Technology, Organization and Environment (TOE) model and targets higher education institutions. In this study, the factors that influence adoption by higher education institutions were analyzed and tested using Partial Least Squares. (3) Results, Discussion and Conclusions: Three factors were found significant in this context: Relative Advantage, Data Privacy and Complexity. The model explained 43% of the total variation in the adoption measure. Significant differences between large and small institutions were revealed in the areas of cloud computing compatibility, complexity, vendor lock-in and peer pressure. Items for future cloud computing research were explored through open-ended questions. Adoption of cloud services by higher education institutions has begun, and the adoption rate among large universities was found to be higher than among small higher education institutions. Improving the network and Internet infrastructure in Saudi Arabia at an affordable cost is a prerequisite for cloud computing adoption. Cloud service providers should address the privacy and complexity concerns raised by non-adopters. Information systems that are candidates for future cloud hosting were prioritized.
What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary
BACKGROUND Overviews of reviews (overviews) compile data from multiple systematic reviews to provide a single synthesis of relevant evidence for decision-making. Despite their increasing popularity, there is limited methodological guidance available for researchers wishing to conduct overviews. The objective of this scoping review is to identify and collate all published and unpublished documents containing guidance for conducting overviews examining the efficacy, effectiveness, and/or safety of healthcare interventions. Our aims were to provide a map of existing guidance documents; identify similarities, differences, and gaps in the guidance contained within these documents; and identify common challenges involved in conducting overviews. METHODS We conducted an iterative and extensive search to ensure breadth and comprehensiveness of coverage. The search involved reference tracking, database and web searches (MEDLINE, EMBASE, DARE, Scopus, Cochrane Methods Studies Database, Google Scholar), handsearching of websites and conference proceedings, and contacting overview producers. Relevant guidance statements and challenges encountered were extracted, edited, grouped, abstracted, and presented using a qualitative metasummary approach. RESULTS We identified 52 guidance documents produced by 19 research groups. Relatively consistent guidance was available for the first stages of the overview process (deciding when and why to conduct an overview, specifying the scope, and searching for and including systematic reviews). In contrast, there was limited or conflicting guidance for the latter stages of the overview process (quality assessment of systematic reviews and their primary studies, collecting and analyzing data, and assessing quality of evidence), and many of the challenges identified were also related to these stages. An additional, overarching challenge identified was that overviews are limited by the methods, reporting, and coverage of their included systematic reviews. CONCLUSIONS This compilation of methodological guidance for conducting overviews of healthcare interventions will facilitate the production of future overviews and can help authors address key challenges they are likely to encounter. The results of this project have been used to identify areas where future methodological research is required to generate empirical evidence for overview methods. Additionally, these results have been used to update the chapter on overviews in the next edition of the Cochrane Handbook for Systematic Reviews of Interventions.
Geocoding for texts with fine-grain toponyms: an experiment on a geoparsed hiking descriptions corpus
Geoparsing and geocoding are two essential middleware services that facilitate end-user applications such as location-aware searching or different types of location-based services. The objective of this work is to propose a method for establishing a processing chain to support the geoparsing and geocoding of text documents describing events strongly linked with space and making frequent use of fine-grain toponyms. The geoparsing part is a Natural Language Processing approach which combines part-of-speech tagging with syntactico-semantic patterns (a cascade of transducers). However, the real novelty of this work lies in the geocoding method. The geocoding algorithm is unsupervised and takes advantage of clustering techniques to disambiguate the toponyms found in gazetteers, while at the same time estimating the spatial footprint of those fine-grain toponyms not found in gazetteers. The feasibility of the proposal has been tested on a corpus of hiking descriptions in French, Spanish and Italian.
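The clustering-based disambiguation is described only at a high level; one hypothetical way to realize it is to cluster all candidate coordinates returned by the gazetteer and keep, for each toponym, the candidate falling in the densest cluster. The sketch below does this with DBSCAN; the eps value, the use of raw lat/lon degrees and the tie-breaking rules are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pick_candidates(candidates_per_toponym):
    """candidates_per_toponym: list of lists of (lat, lon) gazetteer hits."""
    all_pts = np.array([pt for cands in candidates_per_toponym for pt in cands])
    labels = DBSCAN(eps=0.2, min_samples=2).fit_predict(all_pts)  # eps in degrees
    valid = labels[labels >= 0]
    if valid.size == 0:                      # no dense cluster: fall back to first hit
        return [cands[0] for cands in candidates_per_toponym]
    best = np.bincount(valid).argmax()       # most populated cluster
    chosen, i = [], 0
    for cands in candidates_per_toponym:
        lbls = labels[i:i + len(cands)]
        in_best = [c for c, l in zip(cands, lbls) if l == best]
        chosen.append(in_best[0] if in_best else cands[0])
        i += len(cands)
    return chosen

# two toponyms, the first with an ambiguous far-away candidate
print(pick_candidates([[(42.8, -0.5), (48.9, 2.3)], [(42.9, -0.45)]]))
```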
Driver behavior profiling using smartphones
The proliferation of smartphones and mobile devices embedding different types of sensors sets up a prodigious and distributed sensing platform. In particular, in the last years there has been an increasing necessity to monitor drivers to identify bad driving habits in order to optimize fuel consumption, to reduce CO2 emissions or, indeed, to design new reliable and fair pricing schemes for the insurance market. In this paper, we analyze the driver sensing capacity of smartphones. We propose a mobile tool that makes use of the most common sensors embedded in current smartphones and implement a Fuzzy Inference System that scores the overall driving behavior by combining different fuzzy sensing data.
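The paper's actual Fuzzy Inference System is not specified in the abstract; the hand-rolled sketch below only illustrates how fuzzy membership degrees over sensed quantities could be aggregated into a driving score. Membership breakpoints, rules and variable names are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def driving_score(accel_ms2, speed_kmh):
    harsh_accel = tri(accel_ms2, 2.0, 4.0, 6.0)       # degree of "harsh acceleration"
    speeding    = tri(speed_kmh, 90.0, 120.0, 150.0)  # degree of "speeding"
    # simple rule aggregation: the worst behaviour lowers the 0-100 score
    penalty = max(harsh_accel, speeding)
    return 100.0 * (1.0 - penalty)

print(driving_score(2.5, 95))   # mildly penalised
print(driving_score(5.0, 130))  # heavily penalised
```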
A temperature-compensated low-noise digitally-controlled crystal oscillator for multi-standard applications
This paper presents an integrated 26-MHz digitally-controlled crystal oscillator (DCXO) with a temperature compensation function for multi-standard cellular applications, which achieves a phase noise of -154 and -159 dBc/Hz at 10 kHz and 100 kHz offset, respectively. The frequency instability over temperature is compensated by a built-in temperature sensor and a compensating capacitor. The frequency instability from -10 to 55 °C is about ±1 ppm. AFC frequency tuning is done by a digitally-controlled metal-oxide-metal capacitor array that is 13-bit thermometer decoded. The DCXO is implemented in a 0.13-µm CMOS technology.
Projection-free Distributed Online Learning in Networks
The conditional gradient algorithm has regained a surge of research interest in recent years due to its high efficiency in handling large-scale machine learning problems. However, none of the existing studies has explored it in the distributed online learning setting, where locally light computation is assumed. In this paper, we fill this gap by proposing the distributed online conditional gradient algorithm, which eschews the expensive projection operation needed in its counterpart algorithms by exploiting much simpler linear optimization steps. We give a regret bound for the proposed algorithm as a function of the network size and topology, which is smaller on smaller or "well-connected" graphs. Experiments on two large-scale real-world datasets for a multiclass classification task confirm the computational benefit of the proposed algorithm and also verify the theoretical regret bound.
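To make the projection-free idea concrete, the sketch below shows a single conditional-gradient (Frank-Wolfe) style update in which a linear minimization over an L1 ball replaces the projection, run in a toy online loop; the constraint set, step-size schedule and loss are illustrative assumptions, not the distributed algorithm analyzed in the paper.

```python
import numpy as np

def frank_wolfe_step(x, grad, radius, step):
    # linear minimization oracle over the L1 ball: pick the best vertex
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(x)
    s[i] = -radius * np.sign(grad[i])
    return (1 - step) * x + step * s   # convex combination stays inside the ball

# toy online run on a squared loss against a fixed target
rng = np.random.default_rng(0)
target = rng.normal(size=10)
x = np.zeros(10)
for t in range(1, 201):
    grad = x - target                  # gradient of 0.5 * ||x - target||^2
    x = frank_wolfe_step(x, grad, radius=5.0, step=2.0 / (t + 2))
print("distance to target:", np.linalg.norm(x - target))
```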
Patients’ perceptions of changes in their blood pressure
Objectives: (1) To investigate patients' experience of changes in their blood pressure (BP) in an everyday setting and the accuracy of patients' predictions; and (2) to examine what influences patients' belief that they can tell when their BP is up. Subjects: A total of 102 hypertensive patients were recruited sequentially as they presented for routine BP checks. The setting was an inner-city general practice. Design: Patients attended for BP checks on a weekly basis. Before each check they were asked whether they thought their BP was higher, lower or the same as usual. Subjects were classified as predictors if they thought they could tell when their BP was up. On completing their series of BP checks, each subject completed symptom and Hospital Anxiety and Depression questionnaires. Main outcome measures: Accuracy of BP predictions, BP levels and variability, number of symptoms reported and anxiety level. Results: One hundred and two hypertensive patients entered the study, of whom 51 were predictors. The majority (86%) of predictors could not accurately predict their BP. There were no significant differences in either BP or variability between predictors and non-predictors. Predictors were significantly more anxious and reported more symptoms than non-predictors. Conclusions: For the majority of predictors there is no significant relationship between predictions of BP and clinical measurements. Predictor status is associated with the reporting of more symptoms and higher levels of anxiety. Doctors should counsel patients against using subjective BP assessments to guide their use of antihypertensive medication.
Diffraction-limited high-finesse optical cavities
High-quality optical cavities with wavelength-sized end mirrors are important to the growing field of micro-optomechanical systems. We present a versatile method for calculating the modes of diffraction limited optical cavities and show that it can be used to determine the effect of a wide variety of cavity geometries and imperfections. Additionally, we show these calculations agree remarkably well with FDTD simulations for wavelength-sized optical modes, even though our method is based on the paraxial approximation.
Hilbert space embeddings of conditional distributions with applications to dynamical systems
In this paper, we extend the Hilbert space embedding approach to handle conditional distributions. We derive a kernel estimate for the conditional embedding, and show its connection to ordinary embeddings. Conditional embeddings largely extend our ability to manipulate distributions in Hilbert spaces, and as an example, we derive a nonparametric method for modeling dynamical systems where the belief state of the system is maintained as a conditional embedding. Our method is very general in terms of both the domains and the types of distributions that it can handle, and we demonstrate the effectiveness of our method in various dynamical systems. We expect that conditional embeddings will have wider applications beyond modeling dynamical systems.
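A minimal numpy sketch of one common kernel conditional mean embedding estimator is given below; it is not necessarily the exact estimator derived in the paper. The weights beta(x) = (K + n*lambda*I)^{-1} k_X(x) are applied to the samples of Y to read off a conditional expectation; the kernel, regularization and toy data are assumptions.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy conditional relation

lam = 1e-2
K = rbf(X, X)
# weights for the query point x = 1.0
W = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), rbf(X, np.array([[1.0]])))
print("estimated E[Y | x=1]:", float(W[:, 0] @ Y), "vs sin(1) =", np.sin(1.0))
```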
The ICB-2015 Competition on Finger Vein Recognition
Finger vein recognition is a newly developed and promising biometrics technology. To facilitate evaluation in this area and study the state-of-the-art performance of finger vein recognition algorithms, we organized the ICB-2015 Competition on Finger Vein Recognition (ICFVR2015). The competition was held on a general recognition algorithm evaluation platform called RATE, with 3 data sets collected from volunteers and actual usage. Seven algorithms were finally submitted, with the best achieving an EER of 0.375%. This paper first introduces the organization of the competition and RATE, then describes the data sets and test protocols, and finally presents the results of the competition.
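For reference, the equal error rate (EER) reported above is the operating point where the false acceptance and false rejection rates coincide; a small sketch of how it can be estimated from genuine and impostor score distributions (synthetic here) follows.

```python
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)    # match scores for same-finger pairs
impostor = rng.normal(0.4, 0.1, 5000)   # match scores for different-finger pairs

thresholds = np.linspace(0, 1, 1001)
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate
far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
i = np.argmin(np.abs(far - frr))
print(f"EER ~ {(far[i] + frr[i]) / 2:.4%} at threshold {thresholds[i]:.3f}")
```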
A Neural Local Coherence Model for Text Quality Assessment
We propose a local coherence model that captures the flow of what semantically connects adjacent sentences in a text. We represent the semantics of a sentence by a vector and capture its state at each word of the sentence. We model what relates two adjacent sentences based on the two most similar semantic states, each of which is in one of the sentences. We encode the perceived coherence of a text by a vector, which represents patterns of changes in salient information that relates adjacent sentences. Our experiments demonstrate that our approach is beneficial for two downstream tasks: readability assessment, in which our model achieves new state-of-the-art results; and essay scoring, in which the combination of our coherence vectors and other task-dependent features significantly improves the performance of a strong essay scorer.
A Consensus Statement on the Use of Ketamine in the Treatment of Mood Disorders.
Importance Several studies now provide evidence of ketamine hydrochloride's ability to produce rapid and robust antidepressant effects in patients with mood and anxiety disorders that were previously resistant to treatment. Despite the relatively small sample sizes, lack of longer-term data on efficacy, and limited data on safety provided by these studies, they have led to increased use of ketamine as an off-label treatment for mood and other psychiatric disorders. Observations This review and consensus statement provides a general overview of the data on the use of ketamine for the treatment of mood disorders and highlights the limitations of the existing knowledge. While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option. Conclusions and Relevance The suggestions provided are intended to facilitate clinical decision making and encourage an evidence-based approach to using ketamine in the treatment of psychiatric disorders considering the limited information that is currently available. This article provides information on potentially important issues related to the off-label treatment approach that should be considered to help ensure patient safety.
Perioperative morbidity of laparoscopic cryoablation of small renal masses with ultrathin probes: a European multicentre experience.
BACKGROUND Low morbidity has been advocated for cryoablation of small renal masses. OBJECTIVES To assess negative perioperative outcomes of laparoscopic renal cryoablation (LRC) with ultrathin cryoprobes and the patient, tumour, and operative risk factors for their development. DESIGN, SETTING, AND PARTICIPANTS Prospective collection of data on LRC in five centres. INTERVENTION LRC. MEASUREMENTS Preoperative morbidity was assessed clinically and the American Society of Anesthesiologists (ASA) score was assigned prospectively. Charlson Comorbidity Index (CCI) and Charlson-Age Comorbidity Index (CACI) scores were retrospectively assigned. Negative outcomes were prospectively recorded and defined as any undesired event during the perioperative period, including complications, with the latter classed according to the Clavien system. Patient, tumour, and operative variables were tested in univariate analysis as risk factors for the occurrence of negative outcomes. Significant variables (p<0.05) were entered in a step-forward multivariate logistic regression model to identify independent risk factors for one or more perioperative negative outcomes. The confidence interval was set at 95%. RESULTS AND LIMITATIONS There were 148 procedures in 144 patients. Median age and tumour size were 70.5 yr (range: 32-87) and 2.6 cm (range: 1.0-5.6), respectively. A laparoscopic approach was used in 145 cases (98%). Median ASA, CCI, and CACI scores were 2 (range: 1-3), 2 (range: 0-7), and 4 (range: 0-11), respectively. Comorbidities were present in 79% of patients. Thirty negative outcomes and 28 complications occurred in 25 (17%) and 23 (15.5%) cases, respectively. Only 20% of all complications were Clavien grade ≥ 3. Multivariate analysis showed that tumour size in centimetres, the presence of cardiac conditions, and female gender were independent predictors of the occurrence of negative perioperative outcomes. A receiver operating characteristic curve confirmed a tumour size cut-off of 3.4 cm as an adequate predictor of negative outcomes. CONCLUSIONS Perioperative negative outcomes and complications occur in 17% and 15.5%, respectively, of cases treated by LRC with multiple ultrathin needles. Most of the complications are Clavien grade 1 or 2. The presence of cardiac conditions, female gender, and tumour size are independent prognostic factors for the occurrence of a perioperative negative outcome.
Dental ceramics: current thinking and trends.
Dental ceramics are presented within a simplifying framework allowing for understanding of their composition and development. The meaning of strength and details of the fracture process are explored, and recommendations are given regarding making structural comparisons among ceramics. Assessment of clinical survival data is dealt with, and literature is reviewed on the clinical behavior of metal-ceramic and all-ceramic systems. Practical aspects are presented regarding the choice and use of dental ceramics.
Pothole Detection System using Machine Learning on Android
This paper investigates an application of mobile sensing: the detection of potholes on roads. We describe a system and an associated algorithm to monitor pothole conditions on the road. This system, which we call the Pothole Detection System, uses the accelerometer sensor of an Android smartphone to detect potholes and GPS to plot the locations of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify potholes from accelerometer data. The pothole detection algorithm detects potholes in real time. A runtime graph is shown with the help of the charting software library 'AChartEngine'. Accelerometer data and pothole data can be mailed to any email address in the form of a '.csv' file. While designing the pothole detection algorithm we assumed threshold values on the x-axis and z-axis. These threshold values are justified using a neural network technique, which confirms an accuracy of 90%-95%. The neural network has been implemented using 'Encog', a machine learning framework available for Android. We evaluate our system on the outputs obtained using two-, three- and four-wheelers. Keywords: Machine Learning, Context, Android, Neural Networks, Pothole, Sensor
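The exact thresholds and the neural-network validation are not reproduced here; the sketch below only shows the core idea of flagging samples whose vertical acceleration deviates strongly from gravity, with an assumed sampling rate and threshold.

```python
import numpy as np

def detect_potholes(z_accel, fs_hz=50, z_threshold=4.0):
    """Return timestamps (s) where the de-meaned vertical acceleration
    exceeds the threshold (candidate pothole hits). Values are assumed."""
    z = np.asarray(z_accel, dtype=float)
    z_hp = z - z.mean()                      # crude gravity removal
    hits = np.flatnonzero(np.abs(z_hp) > z_threshold)
    return hits / fs_hz

signal = np.concatenate([np.random.normal(9.8, 0.3, 200),
                         [15.0, 3.0],                      # jolt from a pothole
                         np.random.normal(9.8, 0.3, 200)])
print("candidate pothole times (s):", detect_potholes(signal))
```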
Protecting endpoint devices in IoT supply chain
The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. Existing hardware, software, and network protection methods, however, are designed against only a fraction of the real security issues and lack the capability to trace the provenance and history information of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in the IoT supply chain. We take advantage of the connection between the RFID tag and the control chip in an IoT device to enable data transfer from tag memory to a centralized database for authentication once deployed. Finally, we evaluate the security of our proposed scheme against various attacks.
In-flight performance and calibration of SPICAV SOIR onboard Venus Express.
Solar Occultation in the Infrared (SOIR), part of the Spectroscopy for Investigation of Characteristics of the Atmosphere of Venus (SPICAV) instrument onboard Venus Express, combines an echelle grating spectrometer with an acousto-optic tunable filter (AOTF). It performs solar occultation measurements in the IR region at high spectral resolution. The wavelength range probed allows a detailed chemical inventory of Venus's atmosphere above the cloud layer, highlighting the vertical distribution of gases. A general description of the instrument and its in-flight performance is given. Different calibrations and data corrections are investigated, in particular the dark current and thermal background, the nonlinearity and pixel-to-pixel variability of the detector, the sensitivity of the instrument, the AOTF properties, and the spectral calibration and resolution.
Genotype at the sIL-6R A358C polymorphism does not influence response to anti-TNF therapy in patients with rheumatoid arthritis
Objectives. To investigate the association between genotype at the soluble interleukin-6 receptor (sIL-6R) A358C single nucleotide polymorphism (SNP, rs8192284), previously reported to correlate with soluble receptor levels, and response to anti-TNF therapy in subjects with RA. Methods. In a large cohort of Caucasian RA patients treated with anti-TNF medications (total, n = 1050; etanercept, n = 455; infliximab, n = 450; and adalimumab, n = 142), the sIL-6R A358C polymorphism was genotyped using a TaqMan 5'-allelic discrimination assay. Linear regression analysis adjusted for baseline 28-joint disease activity score (DAS28), baseline HAQ score, gender and use of concurrent DMARDs was used to assess the association of genotype at this polymorphism with response to anti-TNF therapy, defined by change in DAS28 after 6 months of treatment. Analyses were performed in the entire cohort, and also stratified by anti-TNF agent. Additional analysis according to the EULAR response criteria was also performed, with the chi-squared test used to compare genotype groups. Results. No association between genotype at sIL-6R A358C and response to anti-TNF treatment was detected either in the cohort as a whole or after stratification by anti-TNF agent, in either the linear regression analysis or with response segregated according to EULAR criteria. Conclusions. This study shows that genotype at the functional sIL-6R A358C SNP is not associated with response to anti-TNF treatment in patients with RA.
A cataloging framework for software development methods
A framework providing a basis for comparing and evaluating software development methods (SDMs), which are systems of technical procedures and notational conventions for the organized construction of software-based systems, is presented. Using the framework, practitioners and methodologists can describe and rate an SDM's support for 21 properties. The application of the framework to two examples, OMT and R.J.A. Buhr's (1990) architectural design, is discussed. Several suggested uses of the framework include: comparing a group of SDMs with one another; defining a standard in terms of the framework and then using this as a basis for discussion; examining an SDM to discover its coverage and capabilities; and combining the properties of different SDMs to create a new SDM.
Supplier evaluation using hybrid multiple criteria decision making approach
In the face of acute global competition, supplier management is rapidly emerging as a crucial issue for any company striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate 'suppliers' as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model that incorporates the analytic network process (ANP), in which criteria weights are determined using fuzzy extent analysis, while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) under a fuzzy environment is adopted to rank competing suppliers in terms of their overall performance. An example is solved to illustrate the effectiveness and feasibility of the suggested model.
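The fuzzy ANP weighting step is beyond a short sketch, but the TOPSIS ranking itself can be illustrated with crisp scores and assumed weights, as below; the data, weights and benefit/cost designations are purely illustrative.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """scores: suppliers x criteria matrix; benefit[j] is True if higher is better."""
    norm = scores / np.linalg.norm(scores, axis=0)       # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                       # closeness coefficient

scores = np.array([[7, 9, 6], [8, 7, 8], [9, 6, 7.]])    # 3 suppliers, 3 criteria
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])                  # third criterion is a cost
print("closeness coefficients:", topsis(scores, weights, benefit))
```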
Injuries to posterolateral corner of the knee: a comprehensive review from anatomy to surgical treatment
Although injuries to the posterolateral corner of the knee were previously considered to be a rare condition, they have been shown to be present in almost 16% of all knee injuries and are responsible for sustained instability and failure of concomitant reconstructions if not properly recognized. Once considered the "dark side of the knee", the posterolateral corner is now better understood: increased knowledge of its anatomy and biomechanics has led to improved diagnostic ability with better understanding of physical and imaging examinations. The management of posterolateral corner injuries has also evolved, and good outcomes have been reported after operative treatment following anatomical reconstruction principles.
A comparative analysis of synthetic genetic oscillators.
Synthetic biology is a rapidly expanding discipline at the interface between engineering and biology. Much research in this area has focused on gene regulatory networks that function as biological switches and oscillators. Here we review the state of the art in the design and construction of oscillators, comparing the features of each of the main networks published to date, the models used for in silico design and validation and, where available, relevant experimental data. Trends are apparent in the ways that network topology constrains oscillator characteristics and dynamics. Also, noise and time delay within the network can both have constructive and destructive roles in generating oscillations, and stochastic coherence is commonplace. This review can be used to inform future work to design and implement new types of synthetic oscillators or to incorporate existing oscillators into new designs.
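As a concrete example of the kind of in silico model reviewed here, the following sketch integrates a three-gene ring oscillator (repressilator-style) ODE system with SciPy; the dimensionless equations and parameter values are standard textbook choices, not taken from any specific design in the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, n, beta = 216.0, 2.0, 0.2   # max transcription, Hill coefficient, decay ratio

def repressilator(t, y):
    m = y[:3]   # mRNA levels
    p = y[3:]   # protein levels; protein (i-1) represses gene i
    dm = [alpha / (1 + p[(i - 1) % 3] ** n) - m[i] for i in range(3)]
    dp = [beta * (m[i] - p[i]) for i in range(3)]
    return dm + dp

sol = solve_ivp(repressilator, (0, 200), [1, 2, 3, 0, 0, 0], dense_output=True)
p1 = sol.sol(np.linspace(0, 200, 2000))[3]
print("protein 1 oscillates between", p1[1000:].min(), "and", p1[1000:].max())
```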
Visual feature integration and the temporal correlation hypothesis.
The mammalian visual system is endowed with a nearly infinite capacity for the recognition of patterns and objects. To have acquired this capability the visual system must have solved what is a fundamentally combinatorial problem. Any given image consists of a collection of features, consisting of local contrast borders of luminance and wavelength, distributed across the visual field. For one to detect and recognize an object within a scene, the features comprising the object must be identified and segregated from those comprising other objects. This problem is inherently difficult to solve because of the combinatorial nature of visual images. To appreciate this point, consider a simple local feature such as a small vertically oriented line segment placed within a fixed location of the visual field. When combined with other line segments, this feature can form a nearly infinite number of geometrical objects. Any one of these objects may coexist with an equally large number of other
Hcm1 integrates signals from Cdk1 and calcineurin to control cell proliferation
Cyclin-dependent kinase (Cdk1) orchestrates progression through the cell cycle by coordinating the activities of cell-cycle regulators. Although phosphatases that oppose Cdk1 are likely to be necessary to establish dynamic phosphorylation, specific phosphatases that target most Cdk1 substrates have not been identified. In budding yeast, the transcription factor Hcm1 activates expression of genes that regulate chromosome segregation and is critical for maintaining genome stability. Previously we found that Hcm1 activity and degradation are stimulated by Cdk1 phosphorylation of distinct clusters of sites. Here we show that, upon exposure to environmental stress, the phosphatase calcineurin inhibits Hcm1 by specifically removing activating phosphorylations and that this regulation is important for cells to delay proliferation when they encounter stress. Our work identifies a mechanism by which proliferative signals from Cdk1 are removed in response to stress and suggests that Hcm1 functions as a rheostat that integrates stimulatory and inhibitory signals to control cell proliferation.
From Orthogonal to Non-orthogonal Multiple Access: Energy- and Spectrum-Efficient Resource Allocation
The rapid pace of innovations in information and communication technology (ICT) industry over the past decade has greatly improved people’s mobile communication experience. This, in turn, has escalated exponential growth in the number of connected mobile devices and data traffic volume in wireless networks. Researchers and network service providers have faced many challenges in providing seamless, ubiquitous, reliable, and high-speed data service to mobile users. Mathematical optimization, as a powerful tool, plays an important role in addressing such challenging issues. This dissertation addresses several radio resource allocation problems in 4G and 5G mobile communication systems, in order to improve network performance in terms of throughput, energy, or fairness. Mathematical optimization is applied as the main approach to analyze and solve the problems. Theoretical analysis and algorithmic solutions are derived. Numerical results are obtained to validate our theoretical findings and demonstrate the algorithms’ ability of attaining optimal or near-optimal solutions. Five research papers are included in the dissertation. In Paper I, we study a set of optimization problems of consecutive-channel allocation in single carrier-frequency division multiple access (SC-FDMA) systems. We provide a unified algorithmic framework to optimize the channel allocation and improve system performance. The next three papers are devoted to studying energy-saving problems in orthogonal frequency division multiple access (OFDMA) systems. In Paper II, we investigate a problem of jointly minimizing energy consumption at both transmitter and receiver sides. An energy-efficient scheduling algorithm is developed to provide optimality bounds and near-optimal solutions. Next in Paper III, we derive fundamental properties for energy minimization in load-coupled OFDMA networks. Our analytical results
Knowledge Development, Social Capital and Alliance Learning
Purpose – The purpose of this paper is to elucidate what creates the different types of knowledge. Design/methodology/approach – In the conceptual model it is argued that the concept of social capital provides an interesting view on the creation of market-specific and firm-specific knowledge. Findings – The major finding of the paper is that knowledge is an important by-product of the alliance-forming process, a process commonly termed alliance learning. Research limitations/implications – Both market-specific and firm-specific knowledge have implications for the two main types of alliance learning, mutual and non-mutual learning. Practical implications – Alliance managers need to be aware that knowledge is a key driver as well as a beneficial outcome of the formation of alliances. Originality/value – This paper examines how the different types of knowledge evolve and how these different types of knowledge impact upon alliance learning.
Deploying Fog Applications: How Much Does It Cost, By the Way?
Deploying IoT applications through the Fog in a QoS-, context-, and cost-aware manner is challenging due to the heterogeneity, scale and dynamicity of Fog infrastructures. To decide how to allocate app functionalities over the continuum from the IoT to the Cloud, app administrators need to find a trade-off among QoS, resource consumption and cost. In this paper, we present a novel cost model for estimating the cost of deploying IoT applications to Fog infrastructures. We show how the inclusion of the cost model in the FogTorchΠ open-source prototype makes it possible to determine eligible deployments of multi-component applications to Fog infrastructures and to rank them according to their QoS-assurance, Fog resource consumption and cost. We run the extended prototype on a motivating scenario, showing how it can support IT experts in choosing the deployments that best suit their desiderata.
Variance stabilization applied to microarray data calibration and to the quantification of differential expression
We introduce a statistical model for microarray gene expression data that comprises data calibration, the quantification of differential expression, and the quantification of measurement error. In particular, we derive a transformation h for intensity measurements, and a difference statistic Δh whose variance is approximately constant along the whole intensity range. This forms a basis for statistical inference from microarray data, and provides a rational data pre-processing strategy for multivariate analyses. For the transformation h, the parametric form h(x) = arsinh(a + bx) is derived from a model of the variance-versus-mean dependence for microarray intensity data, using the method of variance stabilizing transformations. For large intensities, h coincides with the logarithmic transformation, and Δh with the log-ratio. The parameters of h, together with those of the calibration between experiments, are estimated with a robust variant of maximum-likelihood estimation. We demonstrate our approach on data sets from different experimental platforms, including two-colour cDNA arrays and a series of Affymetrix oligonucleotide arrays.
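A quick numerical illustration of the transformation and its large-intensity behaviour is given below; the calibration parameters a and b are made up for the example, whereas in practice they are estimated from the data as described.

```python
import numpy as np

a, b = 1.0, 0.01            # illustrative calibration parameters (assumed)
x = np.array([10., 100., 1000., 10000., 100000.])
h = np.arcsinh(a + b * x)   # generalized-log (arsinh) transform

# arcsinh(z) approaches log(2z) for large z, so h behaves like a shifted log
# at high intensities while remaining well-defined near zero.
print(h)
print(np.log(2 * (a + b * x)))
```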
An anthropomorphic underactuated robotic hand with 15 dofs and a single actuator
This paper presents the design and experimental validation of an anthropomorphic underactuated robotic hand with 15 degrees of freedom and a single actuator. First, the force transmission design of underactuated fingers is revisited. An optimal geometry of the tendon-driven fingers is then obtained. Then, underactuation between the fingers is addressed using differential mechanisms. Tendon routings are proposed and verified experimentally. Finally, a prototype of a 15-degree-of-freedom hand is built and tested. The results demonstrate the feasibility of a humanoid hand with many degrees of freedom and one single degree of actuation.
Spark-based anomaly detection over multi-source VMware performance data in real-time
Anomaly detection refers to identifying patterns in data that deviate from expected behavior. These non-conforming patterns are often termed outliers, malware, anomalies or exceptions in different application domains. This paper presents a novel, generic real-time distributed anomaly detection framework for multi-source stream data. As a case study, we detect anomalies in a multi-source VMware-based cloud data center. The framework monitors VMware performance stream data (e.g., CPU load, memory usage, etc.) continuously, collecting these data simultaneously from all the VMware hosts connected to the network. It notifies the resource manager to reschedule its resources dynamically when it identifies any abnormal behavior in the collected data. We use Apache Spark, a distributed framework, for processing the performance stream data and making predictions without delay. Spark is chosen over traditional distributed frameworks (e.g., Hadoop with MapReduce, Mahout, etc.) that are not ideal for stream data processing. We implement a flat incremental clustering algorithm to model the benign characteristics in our distributed Spark-based framework. We compare the average processing latency of a tuple during clustering and prediction in Spark with that of Storm, another distributed framework for stream data processing. We experimentally find that Spark processes a tuple much more quickly than Storm on average.
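The abstract does not give the clustering details; the plain-Python sketch below shows one way a flat incremental clustering detector could work (assign a point to the nearest centroid if it falls within an assumed radius, otherwise open a new cluster and flag an anomaly). In the actual system this logic would run over Spark streaming micro-batches; all names and values here are assumptions.

```python
import numpy as np

class IncrementalClusters:
    def __init__(self, radius):
        self.radius = radius
        self.centroids, self.counts = [], []

    def observe(self, point):
        """Return True if the point is an anomaly (fits no existing cluster)."""
        point = np.asarray(point, dtype=float)
        if self.centroids:
            d = [np.linalg.norm(point - c) for c in self.centroids]
            i = int(np.argmin(d))
            if d[i] <= self.radius:
                # fold the point into the nearest cluster (running mean)
                self.counts[i] += 1
                self.centroids[i] += (point - self.centroids[i]) / self.counts[i]
                return False
        self.centroids.append(point.copy())
        self.counts.append(1)
        return len(self.centroids) > 1   # the very first cluster is not an anomaly

det = IncrementalClusters(radius=10.0)
for cpu, mem in [(20, 30), (22, 31), (21, 29), (95, 90)]:
    print((cpu, mem), "anomaly" if det.observe([cpu, mem]) else "normal")
```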
The atrial fibrillation ablation pilot study: a European Survey on Methodology and results of catheter ablation for atrial fibrillation conducted by the European Heart Rhythm Association.
AIMS The Atrial Fibrillation Ablation Pilot Study is a prospective registry designed to describe the clinical epidemiology of patients undergoing atrial fibrillation (AFib) ablation, and the diagnostic/therapeutic processes applied across Europe. The aims of the 1-year follow-up were to analyse how centres assess the success of the procedure in routine clinical practice and to evaluate the success rate and long-term safety/complications. METHODS AND RESULTS Seventy-two centres in 10 European countries were asked to enrol 20 consecutive patients undergoing a first AFib ablation procedure. A web-based case report form captured information on pre-procedural, procedural, and 1-year follow-up data. Between October 2010 and May 2011, 1410 patients were included and 1391 underwent an AFib ablation (98.7%). A total of 1300 patients (93.5%) completed a follow-up control 367 ± 42 days after the procedure. Arrhythmia documentation was done by an electrocardiogram in 76%, Holter monitoring in 52%, transtelephonic monitoring in 8%, and/or implanted systems in 4.5%. Over 50% became asymptomatic. Twenty-one per cent were re-admitted due to post-ablation arrhythmias. Success without antiarrhythmic drugs was achieved in 40.7% of patients (43.7% in paroxysmal AF; 30.2% in persistent AF; 36.7% in long-lasting persistent AF). A second ablation was required in 18% of the cases and 43.4% were under antiarrhythmic treatment. Thirty-three patients (2.5%) suffered an adverse event, 272 (21%) experienced a left atrial tachycardia, and 4 patients died (1 haemorrhagic stroke, 1 ventricular fibrillation in a patient with ischaemic heart disease, 1 cancer, and 1 of unknown cause). CONCLUSION The AFib Ablation Pilot Study provided crucial information on the epidemiology, management, and outcomes of catheter ablation of AFib in a real-world setting. The methods used to assess the success of the procedure appeared at least suboptimal. Even in this context, the 12-month success rate appears to be somewhat lower than that reported in clinical trials.
Unsupervised Type and Token Identification of Idiomatic Expressions
Idiomatic expressions are plentiful in everyday language, yet they remain mysterious, as it is not clear exactly how people learn and understand them. They are of special interest to linguists, psycholinguists, and lexicographers, mainly because of their syntactic and semantic idiosyncrasies as well as their unclear lexical status. Despite a great deal of research on the properties of idioms in the linguistics literature, there is not much agreement on which properties are characteristic of these expressions. Because of their peculiarities, idiomatic expressions have mostly been overlooked by researchers in computational linguistics. In this article, we look into the usefulness of some of the identified linguistic properties of idioms for their automatic recognition. Specifically, we develop statistical measures that each model a specific property of idiomatic expressions by looking at their actual usage patterns in text. We use these statistical measures in a type-based classification task where we automatically separate idiomatic expressions (expressions with a possible idiomatic interpretation) from similar-on-the-surface literal phrases (for which no idiomatic interpretation is possible). In addition, we use some of the measures in a token identification task where we distinguish idiomatic and literal usages of potentially idiomatic expressions in context.
A Proposal of Testing Imaginary Time in a Total Reflection
All paradoxes concerning faster-than-light signal propagation reported in recent experiments can be dispelled by using imaginary time in a quantum framework. I present a proposal of testing imaginary time in a total reflection.
Algorithm of a Perspective Transform-Based PDF417 Barcode Recognition
When a PDF417 barcode is recognized, there are major recognition processes such as segmentation, normalization, and decoding. Among them, the segmentation and normalization steps are very important because they have a strong influence on the barcode recognition rate. Previous segmentation and normalization techniques for processing barcode images exist, but they have some issues, as follows. First, the previous normalization techniques need an additional restoration process and apply an interpolation process. Second, the previous recognition algorithms recognize a barcode image well only when it is placed in a predefined rectangular area. Therefore, we propose a novel segmentation and normalization method for PDF417 with the aim of improving its recognition rate and precision. The segmentation process, which detects the barcode area in an image, uses the conventional morphology and Hough transform methods. The normalization process for the barcode region is based on the conventional perspective transformation and warping algorithms. In addition, we perform experiments using both experimental and actual data to evaluate our algorithms. Consequently, our experimental results can be summarized as follows. First, our method showed stable performance compared with existing PDF417 barcode detection and recognition. Second, it overcame the limitation that the barcode must be located in a predefined rectangular area of the input image. Finally, we expect that our result can be used as a restoration tool for printed images such as documents and pictures.
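The normalization step can be illustrated with standard OpenCV calls: given the four detected corners of the barcode quadrilateral, a perspective transform maps it to an upright rectangle. The corner coordinates and output size below are illustrative, and barcode detection itself is not shown.

```python
import cv2
import numpy as np

def normalize_barcode(image, corners, out_w=400, out_h=150):
    """corners: 4 points ordered top-left, top-right, bottom-right, bottom-left."""
    src = np.asarray(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))

img = np.zeros((480, 640), dtype=np.uint8)  # placeholder image
flat = normalize_barcode(img, [(120, 80), (520, 110), (510, 260), (110, 230)])
print(flat.shape)  # (150, 400)
```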
Supernumerary nostril with congenital cataract.
Supernumerary nostril is a very rare congenital anomaly. It can be unilateral or bilateral, and it sometimes occurs in the presence of other congenital deformities. Behind the external opening of a supernumerary nostril is a small accessory nasal cavity, which may or may not communicate with the normal nasal cavity on the same side. We describe a new case in which the supernumerary nostril with a small accessory nasal cavity, which did not communicate with the normal nasal cavity on the same side, appeared in a young girl who also had microcornea and congenital cataract. The accessory nasal cavity was successfully removed surgically. We believe that this case may represent the first reported case of a supernumerary nostril with a congenital cataract on the same side. We also discuss the hypotheses that have been proposed to explain supernumerary nostrils.
Embodiment or Envatment?: Reflections on the Bodily Basis of Consciousness
Suppose that a team of neurosurgeons and bioengineers were able to remove your brain from your body, suspend it in a life-sustaining vat of liquid nutrients, and connect its neurons and nerve terminals by wires to a supercomputer that would stimulate it with electrical impulses exactly like those it normally receives when embodied. According to this brain-in-a-vat thought experiment, your envatted brain and your embodied brain would have subjectively indistinguishable mental lives. For all you know, so one argument goes, you could be such a brain in a vat right now. Daniel Dennett calls this sort of philosophical thought experiment an "intuition pump" (1995). An intuition pump is designed to elicit certain intuitive convictions, but is not itself a proper argument: "Intuition pumps are fine if they're used correctly, but they can also be misused. They're not arguments, they're stories. Instead of having a conclusion, they pump an intuition. They get you to say 'Aha! Oh, I get it!'" (Dennett 1995, 182). Philosophers have used the brain-in-a-vat story mainly to raise the problem of radical skepticism and to elicit various intuitions about meaning and knowledge (Putnam 1981). The basic intuition the story tries to pump is that the envatted brain, though fully conscious, has systematically false beliefs about the world, including itself. Some philosophers reject this intuition. They propose that the envatted brain's beliefs are really about its artificial environment or that it has no real beliefs at all. According to these proposals, the mental lives of the two brains do not match, despite their being subjectively indistinguishable. Dennett (1978) tells a classic variant of the brain-in-a-vat story, one in which he sees his own envatted brain and knows that it remotely controls his own body, but still cannot experience himself as located where his brain is located. Here the thought experiment serves to raise questions about the locus of the self in the physical world.
Improving Performance in Neural Networks Using a Boosting Algorithm
A boosting algorithm converts a learning machine with an error rate of less than 50% to one with an arbitrarily low error rate. However, the algorithm discussed here depends on having a large supply of independent training samples. We show how to circumvent this problem and generate an ensemble of learning machines whose performance in optical character recognition problems is dramatically improved over that of a single network. We report the effect of boosting on four databases (all handwritten) consisting of 12,000 digits from segmented ZIP codes from the United States Postal Service (USPS) and the following from the National Institute of Standards and Technology (NIST): 220,000 digits, 45,000 upper-case alphas, and 45,000 lower-case alphas. We use two performance measures: the raw error rate (no rejects) and the reject rate required to achieve a 1% error rate on the patterns not rejected. Boosting improved performance in some cases by a factor of three.
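For readers who want to reproduce the qualitative effect, the hedged sketch below uses scikit-learn's AdaBoost on the built-in digits set to contrast a single weak learner with a boosted ensemble; the paper itself boosts neural networks with a filtering scheme, which is a related but different procedure.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

single = AdaBoostClassifier(n_estimators=1, random_state=0).fit(X_tr, y_tr)
boosted = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("single weak learner:", accuracy_score(y_te, single.predict(X_te)))
print("boosted ensemble:   ", accuracy_score(y_te, boosted.predict(X_te)))
```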
Phase II, Open Label, Randomized Comparative Trial of Ondansetron Alone versus the Combination of Ondansetron and Aprepitant for the Prevention of Nausea and Vomiting in Patients with Hematologic Malignancies Receiving Regimens Containing High-Dose Cytarabine
Background. Aprepitant is a substance P/neurokinin-1 receptor antagonist approved for the prevention of chemotherapy-induced nausea and vomiting (CINV) in moderate-emetic-risk chemotherapy. We explored its effectiveness in patients with leukemia receiving cytarabine-based chemotherapy. Methods. Patients were randomized to ondansetron (OND) 8 mg IV 30 minutes before cytarabine followed by 24 mg IV continuous infusion daily until 6-12 hours after the last dose of chemotherapy, either alone or with aprepitant (APREP) 125 mg orally 6-12 hours before chemotherapy and 80 mg daily until 1 day after the last dose of chemotherapy. Results. Forty-nine patients were enrolled in each arm; 42 in the OND arm and 41 in the OND + APREP arm were evaluable for efficacy. The ORR with OND + APREP was 80% compared to 67% with OND alone (P = 0.11). On days 6 and 7, a higher proportion of patients treated with OND + APREP were free from nausea (74%, 74% versus 68%, 67%; P = 0.27 and 0.18, resp.). The requirement for rescue medications on days 2 and 3 was lower in the OND + APREP arm, 7% and 5%, compared to 21% and 16% in the OND arm, respectively (P = 0.06 and P = 0.07). Conclusions. There was a trend toward overall improvement in emesis with ondansetron plus aprepitant. The potential benefit of this approach with specific chemotherapy combinations remains to be determined.
Unmanned aircraft systems in maritime operations: Challenges addressed in the scope of the SEAGULL project
The SEAGULL project aims at the development of intelligent systems to support maritime situation awareness based on unmanned aerial vehicles. It proposes to create an intelligent maritime surveillance system by equipping unmanned aerial vehicles (UAVs) with different types of optical sensors. Optical sensors such as cameras (visible, infrared, multi- and hyperspectral) can contribute significantly to the generation of situational awareness of maritime events such as (i) detection and georeferencing of oil spills or hazardous and noxious substances; (ii) tracking (e.g. of vessels, shipwrecked persons, lifeboats, debris, etc.); (iii) recognizing behavioral patterns (e.g. vessel rendezvous, high-speed vessels, atypical patterns of navigation, etc.); and (iv) monitoring parameters and indicators of good environmental status. On-board transponders will be used for a collision detection and avoidance mechanism (sense and avoid). This paper describes the core of the research and development work done during the first 2 years of the project, with particular emphasis on the following topics: system architecture; automatic detection of sea vessels by vision sensors and custom-designed computer vision algorithms; and a sense and avoid system developed in the theoretical framework of zero-sum differential games.
Retrieving 3D shapes based on their appearance
In this paper, we propose an algorithm for shape-similarity comparison and retrieval of 3D shapes defined as polygon soup. One of the issues in comparing 3D shapes is the diversity of representations used for these "3D" shapes. While a solid model is well-defined and easy to handle, other representations, such as polygon soup, pose many problems. In fact, a polygon-soup 3D model most often does not define a 3D shape, but merely an illusion of "3D shape-ness" created by its collection of independent polygons, lines, and manifold meshes. The most significant feature of our 3D shape similarity comparison method is that it accepts polygon soup and other ill-defined 3D models. Our approach is to use only the rendered appearance of the model as the basis for shape similarity comparison. Our method removes the scale and positional degrees of freedom by normalization, and the three rotational degrees of freedom by a combination of discrete sampling of solid angles and a rotation-invariant 2D image similarity comparison algorithm. Evaluation experiments showed that, despite its simplicity, our approach worked quite well.
Comparison of essential fatty acid intakes and serum levels of inflammatory factors between asthmatic and healthy adults: a case- control study.
Dietary fatty acids play a critical role in modulation of airway inflammation in asthma. This study was conducted to compare dietary intakes of essential fatty acids and serum levels of inflammatory factors in asthmatic and healthy adults, and to examine the potential relationship between inflammatory markers and dietary fatty acids. In this case-control study, 47 asthmatic patients (26 males and 21 females) were compared with 47 controls (24 males and 23 females). Blood samples were taken from case and control groups and tumor necrosis factor-α (TNF-α), high sensitive C-reactive protein (hs-CRP), leptin and adiponectin were determined. Dietary intakes were assessed by semi-quantitative food frequency questionnaire (FFQ). Dietary intakes of omega-3 fatty acids were significantly lower in asthmatic patients compared to controls (p<0.05). Serum concentrations of TNF-α, hs-CRP and leptin were significantly higher in asthmatic patients. There was a significant negative relationship between adiponectin levels and saturated fatty acid intakes in both groups, but the relationship between adiponectin and mono-unsaturated fatty acid intakes was positive and significant only in asthmatic group. No significant correlation between other inflammatory factors and dietary intakes was found in this study. Higher intake of omega-3 and lower levels of inflammatory factors in the healthy control group compared to asthmatic group may explain the protective role of essential fatty acids in asthma. Further studies with larger sample size are needed in this regard.
3C - A Provably Secure Pseudorandom Function and Message Authentication Code: A New Mode of Operation for Cryptographic Hash Functions
We propose a new cryptographic construction called 3C, which works as a pseudorandom function (PRF), message authentication code (MAC) and cryptographic hash function. The 3C construction is obtained by modifying the Merkle-Damgård iterated construction used to construct iterated hash functions. We assume that the compression functions of the Merkle-Damgård iterated construction realize a family of fixed-length-input pseudorandom functions (FI-PRFs). A concrete security analysis for the family of 3C variable-length-input pseudorandom functions (VI-PRFs) is provided in a precise and quantitative manner. The 3C VI-PRF is then used to realize the 3C MAC construction called one-key NMAC (O-NMAC). O-NMAC is a more efficient variant of NMAC and HMAC in applications where the key changes frequently and the key cannot be cached. The 3C construction works as a new mode of hash function operation for hash functions based on the Merkle-Damgård construction, such as MD5 and SHA-1. The generic 3C hash function is more resistant against the recent differential multi-block collision attacks than the Merkle-Damgård hash functions, and the extension attacks do not work on the 3C hash function. The 3C-X hash function is the simplest and most efficient variant of the generic 3C hash function, and it is the simplest modification to the Merkle-Damgård hash function that one can achieve. We provide the security analysis for the functions 3C and 3C-X against multi-block collision attacks and generic attacks on hash functions. We combine the wide-pipe hash function with the 3C hash function for even better security against some generic attacks and differential attacks. The 3C construction has all these features at the expense of one extra iteration of the compression function over the Merkle-Damgård construction.
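For readers unfamiliar with the baseline being modified, the sketch below shows a plain Merkle-Damgård iteration in Python with a toy compression function, plus an optional second XOR-accumulator chain folded in by one extra compression call at the end. The toy compression function and the exact form of the extra chain are assumptions for illustration; they are not the 3C construction as specified in the paper.

    import hashlib

    def compress(h: bytes, block: bytes) -> bytes:
        # toy fixed-input-length compression function (a stand-in, not from the paper)
        return hashlib.sha256(h + block).digest()

    def _pad(msg: bytes, block_size: int) -> bytes:
        # simplified MD-strengthening padding: 0x80, zeros, 8-byte message length
        p = msg + b"\x80"
        p += b"\x00" * (-(len(p) + 8) % block_size)
        return p + (8 * len(msg)).to_bytes(8, "big")

    def md_hash(msg: bytes, iv: bytes = bytes(32), block_size: int = 64,
                extra_chain: bool = False) -> bytes:
        """Merkle-Damgard iteration; with extra_chain=True a second XOR accumulator
        of the chaining values is folded in by one extra compression call at the
        end (an assumed, illustrative variant, not 3C's exact definition)."""
        h, acc = iv, bytes(32)
        padded = _pad(msg, block_size)
        for i in range(0, len(padded), block_size):
            h = compress(h, padded[i:i + block_size])
            acc = bytes(a ^ b for a, b in zip(acc, h))
        return compress(h, acc.ljust(block_size, b"\x00")) if extra_chain else h

    print(md_hash(b"abc").hex())
    print(md_hash(b"abc", extra_chain=True).hex())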
Incremental free-space carving for real-time 3D reconstruction
Almost all current multi-view methods are slow, and thus suited to offline reconstruction. This paper presents a set of heuristic space-carving algorithms with a focus on speed over detail. The algorithms discretize space via the 3D Delaunay triangulation, and they carve away the volumes that violate free-space or visibility constraints. Whereas similar methods exist, our algorithms are fast and fully incremental. They encompass a dynamic event-driven approach to reconstruction that is suitable for integration with online SLAM or Structure-from-Motion. We integrate our algorithms with PTAM [12], and we realize a complete system that reconstructs 3D geometry from video in real-time. Experiments on typical real-world inputs demonstrate online performance with modest hardware. We provide run-time complexity analysis and show that the per-event processing time is independent of the number of images previously processed: a requirement for real-time operation on lengthy image sequences.
Time-series analysis of MRI intensity patterns in multiple sclerosis
In progressive neurological disorders, such as multiple sclerosis (MS), magnetic resonance imaging (MRI) follow-up is used to monitor disease activity and progression and to understand the underlying pathogenic mechanisms. This article presents image postprocessing methods and validation for integrating multiple serial MRI scans into a spatiotemporal volume for direct quantitative evaluation of the temporal intensity profiles. This temporal intensity signal and its dynamics have thus far not been exploited in the study of MS pathogenesis and the search for MRI surrogates of disease activity and progression. The integration into a four-dimensional data set comprises stages of tissue classification, followed by spatial and intensity normalization and partial volume filtering. Spatial normalization corrects for variations in head positioning and distortion artifacts via fully automated intensity-based registration algorithms, both rigid and nonrigid. Intensity normalization includes separate stages of correcting intra- and interscan variations based on the prior tissue class segmentation. Different approaches to image registration, partial volume correction, and intensity normalization were validated and compared. Validation included a scan-rescan experiment as well as a natural-history study on MS patients, imaged in weekly to monthly intervals over a 1-year follow-up. Significant error reduction was observed by applying tissue-specific intensity normalization and partial volume filtering. Example temporal profiles within evolving multiple sclerosis lesions are presented. An overall residual signal variance of 1.4% +/- 0.5% was observed across multiple subjects and time points, indicating an overall sensitivity of 3% (for axial dual echo images with 3-mm slice thickness) for longitudinal study of signal dynamics from serial brain MRI.
Making web applications more energy efficient for OLED smartphones
A smartphone’s display is one of its most energy consuming components. Modern smartphones use OLED displays that consume more energy when displaying light colors as opposed to dark colors. This is problematic as many popular mobile web applications use large light colored backgrounds. To address this problem we developed an approach for automatically rewriting web applications so that they generate more energy efficient web pages. Our approach is based on program analysis of the structure of the web application implementation. In the evaluation of our approach we show that it can achieve a 40% reduction in display power consumption. A user study indicates that the transformed web pages are acceptable to users with over 60% choosing to use the transformed pages for normal usage.
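As a toy illustration of the kind of color rewrite involved (not the paper's program-analysis-based approach, which rewrites the application implementation itself), the snippet below darkens light hex background colors in a CSS string; the regex and brightness threshold are illustrative assumptions.

    import re

    LIGHT_BG = re.compile(r"background(-color)?\s*:\s*#([0-9a-fA-F]{6})")

    def darken_css(css: str, threshold: int = 180) -> str:
        """Replace light hex background colors with black (toy transform only)."""
        def repl(match):
            r, g, b = (int(match.group(2)[i:i + 2], 16) for i in (0, 2, 4))
            if (r + g + b) / 3 >= threshold:                 # "light enough" heuristic
                return "background%s: #000000" % (match.group(1) or "")
            return match.group(0)
        return LIGHT_BG.sub(repl, css)

    print(darken_css("body { background-color: #ffffff; } .a { background: #222222; }"))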
Circulant Binary Embedding
Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of the Fast Fourier Transform to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d²) to O(d log d), and the space complexity from O(d²) to O(d), where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in the original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for a fixed number of bits.
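The core computational trick is that multiplying by a circulant matrix is a circular convolution, which the FFT performs in O(d log d). A minimal numpy sketch is shown below; it omits the learned, data-dependent projections described above and any sign-flipping randomization, so it is only an illustration of the projection step.

    import numpy as np

    def circulant_binary_embedding(x, r):
        """Binary code sign(circ(r) @ x), where circ(r) is the circulant matrix
        whose first column is r; the product is a circular convolution via FFT."""
        projection = np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(x)))
        return (projection >= 0).astype(np.uint8)

    d = 1024
    rng = np.random.default_rng(0)
    x = rng.standard_normal(d)          # input vector
    r = rng.standard_normal(d)          # defines the circulant projection
    code = circulant_binary_embedding(x, r)
    print(code[:16])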
Learning rotation invariant convolutional filters for texture classification
We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.
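At inference time, the rotation-invariant feature can be obtained by convolving with several rotated copies of a canonical filter and pooling over orientations. The sketch below illustrates that idea with scipy; the paper additionally ties the weights of the rotated copies during training, which is not shown here, and the filter and image are stand-ins.

    import numpy as np
    from scipy.ndimage import convolve, rotate

    def rotation_invariant_response(image, canonical_filter, n_orientations=8):
        responses = []
        for k in range(n_orientations):
            angle = k * 360.0 / n_orientations
            f = rotate(canonical_filter, angle, reshape=False, order=1)  # rotated copy
            responses.append(convolve(image, f, mode="reflect"))
        return np.max(np.stack(responses), axis=0)       # pool over orientations

    image = np.random.rand(64, 64)
    canonical = np.random.rand(7, 7) - 0.5               # a learned filter would go here
    feature_map = rotation_invariant_response(image, canonical)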
BareDroid: Large-Scale Analysis of Android Apps on Real Devices
To protect Android users, researchers have been analyzing unknown, potentially-malicious applications by using systems based on emulators, such as Google's Bouncer and Andrubis. Emulators are the go-to choice because of their convenience: they can scale horizontally over multiple hosts, and can be reverted to a known, clean state in a matter of seconds. Emulators, however, are fundamentally different from real devices, and previous research has shown how it is possible to automatically develop heuristics to identify an emulated environment, ranging from simple flag checks and unrealistic sensor input, to fingerprinting the hypervisor's handling of basic blocks of instructions. Aware of this aspect, malware authors are starting to exploit this fundamental weakness to evade current detection systems. Unfortunately, analyzing apps directly on bare metal at scale has been so far unfeasible, because the time to restore a device to a clean snapshot is prohibitive: with the same budget, one can analyze an order of magnitude fewer apps on a physical device than on an emulator. In this paper, we propose BareDroid, a system that makes bare-metal analysis of Android apps feasible by quickly restoring real devices to a clean snapshot. We show how BareDroid is not detected as an emulated analysis environment by emulator-aware malware or by heuristics from prior research, allowing BareDroid to observe more potentially malicious activity generated by apps. Moreover, we provide a cost analysis, which shows that replacing emulators with BareDroid requires a financial investment of less than twice the cost of the servers that would be running the emulators. Finally, we release BareDroid as an open source project, in the hope it can be useful to other researchers to strengthen their analysis systems.
High-rate codes that are linear in space and time
Multiple-antenna systems that operate at high rates require simple yet effective space-time transmission schemes to handle the large traffic volume in real time. At rates of tens of bits/sec/Hz, V-BLAST, where every antenna transmits its own independent substream of data, has been shown to have good performance and simple encoding and decoding. Yet V-BLAST suffers from its inability to work with fewer receive antennas than transmit antennas—this deficiency is especially important for modern cellular systems where a basestation typically has more antennas than the mobile handsets. Furthermore, because V-BLAST transmits independent data streams on its antennas there is no built-in spatial coding to guard against deep fades from any given transmit antenna. On the other hand, there are many previously-proposed space-time codes that have good fading resistance and simple decoding, but these codes generally have poor performance at high data rates or with many antennas. We propose a high-rate coding scheme that can handle any configuration of transmit and receive antennas and that subsumes both V-BLAST and many proposed space-time block codes as special cases. The scheme transmits substreams of data in linear combinations over space and time. The codes are designed to optimize the mutual information between the transmitted and received signals. Because of their linear structure, the codes retain the decoding simplicity of V-BLAST, and because of their information-theoretic optimality, they possess many coding advantages. We give examples of the codes and show that their performance is generally superior to earlier proposed methods over a wide range of rates and SNR’s. Index Terms —Wireless communications, BLAST, multiple antennas, space-time codes, transmit diversity, receive diversity, fading channels
Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms
The paper describes a neural-network-based system for the computer aided detection of lung nodules in chest radiograms. Our approach is based on multiscale processing and artificial neural networks (ANNs). The problem of nodule detection is faced by using a two-stage architecture including: 1) an attention focusing subsystem that processes whole radiographs to locate possible nodular regions ensuring high sensitivity; 2) a validation subsystem that processes regions of interest to evaluate the likelihood of the presence of a nodule, so as to reduce false alarms and increase detection specificity. Biologically inspired filters (both LoG and Gabor kernels) are used to enhance salient image features. ANNs of the feedforward type are employed, which allow an efficient use of a priori knowledge about the shape of nodules, and the background structure. The images from the public JSRT database, including 247 radiograms, were used to build and test the system. We performed a further test by using a second private database with 65 radiograms collected and annotated at the Radiology Department of the University of Florence. Both data sets include nodule and nonnodule radiographs. The use of a public data set along with independent testing with a different image set makes the comparison with other systems easier and allows a deeper understanding of system behavior. Experimental results are described by ROC/FROC analysis. For the JSRT database, we observed that by varying sensitivity from 60 to 75% the number of false alarms per image lies in the range 4-10, while accuracy is in the range 95.7-98.0%. When the second data set was used comparable results were obtained. The observed system performances support the undertaking of system validation in clinical settings.
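The Laplacian-of-Gaussian filtering used in the attention-focusing stage can be sketched with scipy: blob-like bright structures such as nodules give strong responses at a scale matching their radius. The scales, the sign/normalization convention, and the threshold below are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_blob_map(image, sigmas=(2.0, 4.0, 8.0)):
        """Scale-normalized LoG responses; bright blobs yield strong positive values
        after negation, so the per-pixel maximum highlights nodule candidates."""
        responses = [-(s ** 2) * gaussian_laplace(image.astype(float), s) for s in sigmas]
        return np.max(np.stack(responses), axis=0)

    chest = np.random.rand(256, 256)            # stand-in for a radiograph
    candidates = log_blob_map(chest) > 0.5      # illustrative threshold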
Cutaneous metastatic squamous cell carcinoma to the parotid gland: analysis and outcome.
BACKGROUND Our aim was to review the presentation, treatment, and outcome of patients with metastatic cutaneous squamous cell carcinoma involving the parotid gland at a tertiary referral center. METHODS We performed a retrospective chart review of the cancer registry at the Princess Margaret Hospital, Toronto, from 1970 to 2001. All patients had a previously untreated metastatic cutaneous head and neck squamous cell carcinoma involving the parotid gland. A minimal follow-up of 1 year was mandatory for inclusion in the study. RESULTS Fifty-six white patients (43 men and 13 women), with a median age of 76 years (range, 49-97 years), were eligible for inclusion. The disease in all patients was retrospectively staged according to a new system. Twenty patients had P1 disease, 14 had P2, and 22 had P3. Therapy included surgery and adjuvant external beam radiation in 37 patients, single-modality external beam radiation in 12, and surgery alone in seven patients. The overall recurrence rate was 29%. The disease-specific survival was significantly worse in patients treated with external beam radiation alone (p <.05). Tumor size >6 cm (p <.01) and the presence of facial nerve involvement (p <.01) were poor prognostic factors. CONCLUSIONS Metastatic cutaneous squamous cell carcinoma to the parotid gland is an aggressive neoplasm that requires combination therapy. The presence of a lesion in excess of 6 cm or with facial nerve involvement is associated with a poor prognosis.
Groupwise Maximin Fair Allocation of Indivisible Goods
We study the problem of allocating indivisible goods among n agents in a fair manner. For this problem, maximin share (MMS) is a well-studied solution concept which provides a fairness threshold. Specifically, maximin share is defined as the minimum utility that an agent can guarantee for herself when asked to partition the set of goods into n bundles such that the remaining (n−1) agents pick their bundles adversarially. An allocation is deemed to be fair if every agent gets a bundle whose valuation is at least her maximin share. Even though maximin shares provide a natural benchmark for fairness, it has its own drawbacks and, in particular, it is not sufficient to rule out unsatisfactory allocations. Motivated by these considerations, in this work we define a stronger notion of fairness, called groupwise maximin share guarantee (GMMS). In GMMS, we require that the maximin share guarantee is achieved not just with respect to the grand bundle, but also among all the subgroups of agents. Hence, this solution concept strengthens MMS and provides an ex-post fairness guarantee. We show that in specific settings, GMMS allocations always exist. We also establish the existence of approximate GMMS allocations under additive valuations, and develop a polynomial-time algorithm to find such allocations. Moreover, we establish a scale of fairness wherein we show that GMMS implies approximate envy freeness. Finally, we empirically demonstrate the existence of GMMS allocations in a large set of randomly generated instances. For the same set of instances, we additionally show that our algorithm achieves an approximation factor better than the established, worst-case bound.
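For very small instances, the maximin share of an agent with additive valuations can be computed by brute force over all partitions into n bundles, which makes the definition concrete. This exhaustive search is exponential and only meant to illustrate the concept, not the polynomial-time approximation algorithm developed in the paper.

    from itertools import product

    def maximin_share(values, n):
        """values[g]: the agent's value for good g; n: number of agents.
        MMS = max over partitions into n bundles of the minimum bundle value."""
        m = len(values)
        best = 0
        for assignment in product(range(n), repeat=m):   # bundle index for each good
            bundles = [0] * n
            for g, b in enumerate(assignment):
                bundles[b] += values[g]
            best = max(best, min(bundles))
        return best

    print(maximin_share([8, 5, 5, 3, 2], n=2))   # -> 11, e.g. {8,3} vs {5,5,2}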
Body Mass Index and Breast Cancer Defined by Biological Receptor Status in Pre-Menopausal and Post-Menopausal Women: A Multicenter Study in China
BACKGROUND Few studies have investigated the association between body mass index (BMI) and breast cancer with consideration to estrogen/progesterone/human epidermal growth factor type 2 receptor status (ER/PR/HER2) in the breast tissue among Chinese pre- and post-menopausal women. METHODS Four thousand two hundred and eleven breast cancer patients were selected randomly from seven geographic regions of China from 1999 to 2008. Demographic data, risk factors, pathologic features, and biological receptor status of cases were collected from the medical charts. Chi-square test, fisher exact test, rank-correlation analysis, and multivariate logistic regression model were adopted to explore whether BMI differed according to biological receptor status in pre- and post-menopausal women. RESULTS Three thousand two hundred and eighty one eligible cases with BMI data were included. No statistically significant differences in demographic characteristics were found between the cases with BMI data and those without. In the rank-correlation analysis, the rates of PR+ and HER2+ were positively correlated with increasing BMI among post-menopausal women (rs BMI, PR+=0.867, P=0.001; rs BMI, HER2+ =0.636, P=0.048), but the ER+ rates did not vary by increasing BMI. Controlling for confounding factors, multivariate logistic regression models with BMI<24 kg/m(2) as the reference group were performed and found that BMI ≥ 24 kg/m(2) was only positively correlated with PR+ status among post-menopausal breast cancer cases (adjusted OR=1.420, 95% CI: 1.116-1.808, Wald=8.116, P=0.004). CONCLUSIONS Post-menopausal women with high BMI (≥ 24 kg/m(2)) have a higher proportion of PR+ breast cancer. In addition to effects mediated via the estrogen metabolism pathway, high BMI might increase the risk of breast cancer by other routes, which should be examined further in future etiological mechanism studies.
Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration
All fields of neuroscience that employ brain imaging need to communicate their results with reference to anatomical regions. In particular, comparative morphometry and group analysis of functional and physiological data require coregistration of brains to establish correspondences across brain structures. It is well established that linear registration of one brain to another is inadequate for aligning brain structures, so numerous algorithms have emerged to nonlinearly register brains to one another. This study is the largest evaluation of nonlinear deformation algorithms applied to brain image registration ever conducted. Fourteen algorithms from laboratories around the world are evaluated using 8 different error measures. More than 45,000 registrations between 80 manually labeled brains were performed by algorithms including: AIR, ANIMAL, ART, Diffeomorphic Demons, FNIRT, IRTK, JRD-fluid, ROMEO, SICLE, SyN, and four different SPM5 algorithms ("SPM2-type" and regular Normalization, Unified Segmentation, and the DARTEL Toolbox). All of these registrations were preceded by linear registration between the same image pairs using FLIRT. One of the most significant findings of this study is that the relative performances of the registration methods under comparison appear to be little affected by the choice of subject population, labeling protocol, and type of overlap measure. This is important because it suggests that the findings are generalizable to new subject populations that are labeled or evaluated using different labeling protocols. Furthermore, we ranked the 14 methods according to three completely independent analyses (permutation tests, one-way ANOVA tests, and indifference-zone ranking) and derived three almost identical top rankings of the methods. ART, SyN, IRTK, and SPM's DARTEL Toolbox gave the best results according to overlap and distance measures, with ART and SyN delivering the most consistently high accuracy across subjects and label sets. Updates will be published on the http://www.mindboggle.info/papers/ website.
WHY SHOULD THE AUTHORITY CARE ABOUT THE IMMIGRATION POLICY UNCERTAINTY?
Formal worst-case performance analysis of time-sensitive Ethernet with frame preemption
One of the key challenges in future Ethernet-based automotive and industrial networks is the low-latency transport of time-critical data. To date, Ethernet frames are sent non-preemptively. This introduces a major source of delay, as, in the worst-case, a latency-critical frame might be blocked by a frame of lower priority, which started transmission just before the latency-critical frame. The upcoming IEEE 802.3br standard will introduce Ethernet frame preemption to address this problem. While high-priority traffic benefits from preemption, lower-priority (yet still latency-sensitive) traffic experiences a certain overhead, impacting its timing behavior. In this paper, we present a formal timing analysis for Ethernet to derive worst-case latency bounds under preemption. We use a realistic automotive Ethernet setup to analyze the worst-case performance of standard Ethernet and Ethernet TSN under preemption and also compare our results to non-preemptive implementations of these standards.
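The benefit of preemption for the highest-priority traffic can be illustrated with a back-of-the-envelope blocking-time calculation: without preemption the worst case is a maximum-size lower-priority frame that just started transmission, while with preemption only the remaining non-preemptable fragment blocks. The frame/fragment sizes and the 100 Mbit/s link speed below are illustrative assumptions, not the paper's automotive setup.

    def tx_time_us(n_bytes, link_bps=100e6):
        return n_bytes * 8 / link_bps * 1e6

    MAX_LP_FRAME = 1522 + 8 + 12     # max lower-priority frame + preamble/SFD + IFG (bytes)
    MIN_FRAGMENT = 64 + 8 + 12       # assumed minimum non-preemptable fragment + overhead

    print("blocking without preemption: %.1f us" % tx_time_us(MAX_LP_FRAME))   # ~123.4 us
    print("blocking with preemption:    %.1f us" % tx_time_us(MIN_FRAGMENT))   # ~6.7 us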
Automated Diagnosis of Glaucoma Using Digital Fundus Images
Glaucoma is a disease of the optic nerve caused by increased intraocular pressure in the eye. Glaucoma mainly affects the optic disc by increasing the cup size. It can lead to blindness if it is not detected and treated in time. The detection of glaucoma through Optical Coherence Tomography (OCT) and Heidelberg Retinal Tomography (HRT) is very expensive. This paper presents a novel method for glaucoma detection using digital fundus images. Digital image processing techniques, such as preprocessing, morphological operations and thresholding, are widely used for the automatic detection of the optic disc and blood vessels and for the computation of features. We have extracted features such as the cup to disc (c/d) ratio, the ratio of the distance between the optic disc center and the optic nerve head to the diameter of the optic disc, and the ratio of the blood vessel area on the inferior-superior side to that on the nasal-temporal side. These features are validated by classifying normal and glaucoma images using a neural network classifier. The results presented in this paper indicate that the features are clinically significant in the detection of glaucoma. Our system is able to classify glaucoma automatically with a sensitivity and specificity of 100% and 80%, respectively.
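Once the optic disc, cup, and vessels have been segmented, the features described above reduce to simple ratios over binary masks. The area-based sketch below is one plausible way to compute them; the paper's exact definitions (for example, diameter- versus area-based cup-to-disc ratio) may differ.

    import numpy as np

    def cup_to_disc_ratio(cup_mask, disc_mask):
        """Area-based c/d ratio from binary segmentation masks."""
        return cup_mask.astype(bool).sum() / max(disc_mask.astype(bool).sum(), 1)

    def isnt_vessel_ratio(vessel_mask, inferior_superior_mask, nasal_temporal_mask):
        """Ratio of vessel area in the inferior-superior sectors to that in the
        nasal-temporal sectors (sector masks assumed to be given)."""
        v = vessel_mask.astype(bool)
        is_area = np.logical_and(v, inferior_superior_mask).sum()
        nt_area = np.logical_and(v, nasal_temporal_mask).sum()
        return is_area / max(nt_area, 1)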
On formalizing social commitments in dialogue and argumentation models using temporal defeasible logic
In this paper, we take the view that any formalization of commitments has to come together with a formalization of time, events/actions and change. We enrich a suitable formalism for reasoning about time, event/action and change in order to represent and reason about commitments. We employ a three-valued based temporal first-order non-monotonic logic (TFONL) that allows an explicit representation of time and events/action. TFONL subsumes the action languages presented in the literature and takes into consideration the frame, qualification and ramification problems, and incorporates to a domain description the set of rules governing change. It can handle protocols for the different types of dialogues such as information seeking, inquiry and negotiation. We incorporate commitments into TFONL to obtain Com-TFONL. Com-TFONL allows an agent to reason about its commitments and about other agents’ behaviour during a dialogue. Thus, agents can employ social commitments to act on, argue with and reason about during interactions with other agents. Agents may use their reasoning and argumentative capabilities in order to determine the appropriate communicative acts during conversations. Furthermore, Com-TFONL allows for an integration of commitments and arguments which helps in capturing the public aspects of a conversation and the reasoning aspects required in coherent conversations.
Expectations, perceptions, and physiotherapy predict prolonged sick leave in subacute low back pain
BACKGROUND Brief intervention programs for subacute low back pain (LBP) result in significant reduction of sick leave compared to treatment as usual. Although effective, a substantial proportion of the patients do not return to work. This study investigates predictors of return to work in LBP patients participating in a randomized controlled trial comparing a brief intervention program (BI) with BI and physical exercise. METHODS Predictors of not returning to work were examined in 246 patients sick-listed 8-12 weeks for low back pain. The patients had participated in a randomized controlled trial, with BI (n = 122) and BI + physical exercise (n = 124). There were no significant differences between the two intervention groups on return to work. The groups were therefore merged in the analyses of predictors. Multiple logistic regression analysis was used to identify predictors of non return to work at 3, 12, and 24 months of follow-up. RESULTS At 3 months of follow-up, the strongest predictors for not returning to work were pain intensity while resting (OR = 5.6; CI = 1.7-19), the perception of constant back strain when working (OR = 4.1; CI = 1.5-12), negative expectations for return to work (OR = 4.2; CI = 1.7-10), and having been to a physiotherapist prior to participation in the trial (OR = 3.3; CI = 1.3-8.3). At 12 months, perceived reduced ability to walk far due to the complaints (OR = 2.6; CI = 1.3-5.4), pain during activities (OR = 2.4; CI = 1.1-5.1), and having been to a physiotherapist prior to participation in the trial (OR = 2.1; CI = 1.1-4.3) were the strongest predictors of non return to work. At 24 months, age below 41 years (OR = 2.9; CI = 1.4-6.0) was the only significant predictor of non return to work. CONCLUSION It appears that return to work is highly dependent on individual and cognitive factors. Patients not returning to work after the interventions were characterized by negative expectations, perceptions about pain and disability, and previous physiotherapy treatment. This is the first study to report that previous treatment by physiotherapists is a risk factor for long-term sick leave, an interesting finding that deserves more scrutiny.
Flexible control of small wind turbines with grid failure detection operating in stand-alone and grid-connected mode
This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control features a complex output voltage controller capable of handling nonlinear loads and an excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of an isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode-switch method based on a phase-locked loop controller is developed in order to detect grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the LCL filter and of the involved PI controllers is described in detail. Test results show that the proposed methods work properly.
Tooth brushing Pattern Classification using Three-Axis Accelerometer and Magnetic Sensor for Smart Toothbrush
The concept of an intelligent toothbrush capable of monitoring brushing motion and orientation through the grip axis during toothbrushing was suggested in our previous study. In this study, we describe a tooth brushing pattern classification algorithm using a three-axis accelerometer and a three-axis magnetic sensor. We have found that inappropriate tooth brushing patterns show specific movement patterns. In order to trace the position and orientation of the toothbrush in the mouth, we need absolute coordinate information for the toothbrush. By applying a tilt-compensated azimuth (heading) calculation algorithm, which is commonly used in small telematics devices, we can obtain the inclination and orientation of the toothbrush. To assess the feasibility of the proposed algorithm, 8 brushing patterns were performed by 6 healthy subjects. The proposed algorithm showed a detection ratio of 98%. The proposed monitoring system is conceived to aid dental care personnel in patient education and instruction in oral hygiene regarding brushing style.
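A standard way to obtain the brush orientation is the tilt-compensated azimuth calculation referred to above: roll and pitch are estimated from the accelerometer, the magnetometer vector is rotated back into the horizontal plane, and the heading follows from its horizontal components. The sketch below uses common aerospace axis conventions; the signs may need adjusting for a particular sensor mounting.

    import math

    def tilt_compensated_heading(ax, ay, az, mx, my, mz):
        """Heading (degrees, 0-360) from 3-axis accelerometer and magnetometer."""
        roll = math.atan2(ay, az)
        pitch = math.atan2(-ax, math.hypot(ay, az))
        # rotate the magnetic field vector back to the horizontal plane
        mxh = (mx * math.cos(pitch)
               + my * math.sin(roll) * math.sin(pitch)
               + mz * math.cos(roll) * math.sin(pitch))
        myh = my * math.cos(roll) - mz * math.sin(roll)
        return math.degrees(math.atan2(-myh, mxh)) % 360.0

    # device level, magnetic field pointing along +x: heading of roughly 0 degrees
    print(tilt_compensated_heading(0.0, 0.0, 9.81, 30.0, 0.0, -40.0))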
The search for critical dimensions of personality pathology to inform diagnostic assessment and treatment planning: a commentary on Hopwood et al.
Hopwood, Wright, Ansell, and Pincus (2013) present an interpersonal theoretical orientation in an effort to demonstrate that the personality disorders are—at their core—interpersonal in nature. This is not a new idea as Hopwood and colleagues are joining a parade of individuals who identify interpersonal difficulties as central to the personality disorders (Bender & Skodol, 2007; Benjamin, 2005; Fonagy & Target, 2006; Gunderson & Lyons-Ruth, 2008; Kernberg, 1984; Livesley, 2001; Meyer & Pilkonis, 2005; Mikulincer & Shaver, 2007). What is new is the argument that interpersonal theory should have preeminence in future renditions of the American Psychiatric Association’s diagnostic system because of its empirically investigated assessment instruments. In this commentator’s view, the two issues are related but involve important differences in their goals and methods. It is important to place the issues in context. The leadership of the effort to generate DSM-5 posed the question: What are the crucial dimensions of pathology—with less emphasis on categories—that are central to the diagnosis and assessment of patients, dimensions that can be reliably and dimensionally rated both at assessment and as treatment progresses? The answer given by the personality disorders work group was judged inadequate in its current state and was placed in Section III of DSM-5 for further consideration and empirical investigation. There is much in the article by Hopwood and colleagues that is important to progress in the field and with which this commentator agrees with enthusiasm, not the least of which is their positive, energetic approach to the refinement of the diagnosis of personality disorders. The authors value personality theory and related empirical efforts in informing the individual markers of interpersonal and emotional difficulties as listed in DSM-IV.
Guided internet cognitive behavioral therapy for insomnia compared to a control treatment - A randomized trial.
AIM To evaluate if internet-delivered Cognitive Behavioral Therapy for insomnia (ICBT-i) with brief therapist support outperforms an active control treatment. METHOD Adults diagnosed with insomnia were recruited via media (n = 148) and randomized to either eight weeks of ICBT-i or an active internet-based control treatment. Primary outcome was the insomnia severity index (ISI) assessed before and after treatment, with follow-ups after 6 and 12 months. Secondary outcomes were use of sleep medication, sleep parameters (sleep diary), perceived stress, and a screening of negative treatment effects. Hierarchical Linear Mixed Models were used for intent-to-treat analyses and handling of missing data. RESULTS ICBT-i was significantly more effective than the control treatment in reducing ISI (Cohen's d = 0.85), sleep medication, sleep efficiency, sleep latency, and sleep quality at post-treatment. The positive effects were sustained. However, after 12 months the difference was no longer significant due to a continuous decrease in ISI among controls, possibly due to their significantly higher utilization of insomnia relevant care after treatment. Forty-six negative effects were reported but did not differ between interventions. CONCLUSIONS Supported ICBT-i is more effective than an active control treatment in reducing insomnia severity and treatment gains remain stable one year after treatment.
Hormone replacement therapy and stroke: are the results surprising?
Despite major progress in treatment and prevention, stroke remains the leading cause of disability and the third leading cause of death, surpassed only by heart disease and cancer, in the United States.1 An estimated 500 000 to 600 000 first and 100 000 recurrent strokes occur each year, and ≈160 000 of these are fatal.2 Among stroke survivors, the burden of long-term disability is great. In the Framingham Heart Study, 71% had impairments that affected their ability to work in their previous capacity and 31% needed help in caring for themselves.3 Stroke rates in women increase sharply with age, doubling in each successive decade after the age of 55 years. Stroke incidence is substantially lower in younger women than in age-matched men, but it tends to equalize in the two sexes in the postmenopausal years.1 Thus, stroke is a major health problem for postmenopausal women and one that merits aggressive preventive strategies. A number of preventive strategies have been proven effective in reducing the risk of stroke. A review of 14 prospective, randomized, controlled trials demonstrated a 42% risk reduction for stroke when the diastolic blood pressure was reduced by 5 to 6 mm Hg,4 and the Systolic Hypertension in the Elderly Program (SHEP) study showed that treating isolated systolic hypertension in the elderly reduced stroke by 36%.5 Similarly, pharmacological intervention with 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors (statin agents), aspirin, and warfarin in patients with decreased left ventricular function or evidence of left ventricular thrombi after myocardial infarction has proven effective in stroke prevention. Oral anticoagulation and antiplatelet therapy also reduce the risk of stroke in patients with atrial fibrillation, and carotid endarterectomy is effective in preventing stroke in persons with asymptomatic carotid stenosis with 60% to 99% occlusion. Lifestyle modifications, including smoking cessation, …
MicroTE: fine grained traffic engineering for data centers
The effects of data center traffic characteristics on data center traffic engineering are not well understood. In particular, it is unclear how existing traffic engineering techniques perform under various traffic patterns, namely how the computed routes differ from the optimal routes. Our study reveals that existing traffic engineering techniques perform 15% to 20% worse than the optimal solution. We find that these techniques suffer mainly due to their inability to utilize global knowledge about flow characteristics and make coordinated decisions for scheduling flows. To this end, we have developed MicroTE, a system that adapts to traffic variations by leveraging the short term and partial predictability of the traffic matrix. We implement MicroTE within the OpenFlow framework and with minor modification to the end hosts. In our evaluations, we show that our system performs close to the optimal solution and imposes minimal overhead on the network, making it appropriate for current and future data centers.
A smart watch-based gesture recognition system for assisting people with visual impairments
Modern mobile devices provide many functionalities, and new ones are being added at a breakneck pace. Unfortunately, browsing the menu and accessing the functions of a mobile phone is not a trivial task for visually impaired users. Low vision people typically rely on screen readers and voice commands. However, depending on the situation, screen readers are not ideal because blind people may need their hearing for safety, and automatic recognition of voice commands is challenging in noisy environments. Novel smart watch technologies provide an interesting opportunity to design new forms of user interaction with mobile phones. We present our first work towards the realization of a system, based on the combination of a mobile phone and a smart watch for gesture control, for assisting low vision people during daily life activities. More specifically, we propose a novel approach for gesture recognition which is based on global alignment kernels and is shown to be effective in the challenging scenario of user-independent recognition. This method is used to build a gesture-based user interaction module and is embedded into a system targeted at visually impaired users which will also integrate several other modules. We present two of them: one for identifying wet floor signs, the other for automatic recognition of predefined logos.
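The global alignment kernel mentioned above can be sketched as a dynamic program that sums the similarity of all monotone alignments between two gesture sequences. The version below uses a Gaussian local kernel and omits the log-space and normalization refinements typically needed in practice, so it is only a minimal illustration of the idea, not the paper's exact formulation.

    import numpy as np

    def global_alignment_kernel(X, Y, sigma=1.0):
        """X, Y: sequences of feature vectors (arrays of shape [length, dim])."""
        n, m = len(X), len(Y)
        M = np.zeros((n + 1, m + 1))
        M[0, 0] = 1.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                local = np.exp(-np.sum((X[i - 1] - Y[j - 1]) ** 2) / (2 * sigma ** 2))
                M[i, j] = local * (M[i - 1, j] + M[i, j - 1] + M[i - 1, j - 1])
        return M[n, m]

    rng = np.random.default_rng(0)
    g1, g2 = rng.standard_normal((20, 3)), rng.standard_normal((25, 3))  # accelerometer-like
    print(global_alignment_kernel(g1, g2))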
Prostate Cancer Incidence and PSA Testing Patterns in Relation to USPSTF Screening Recommendations.
IMPORTANCE Prostate cancer incidence in men 75 years and older substantially decreased following the 2008 US Preventive Services Task Force (USPSTF) recommendation against prostate-specific antigen (PSA)-based screening for this age group. It is unknown whether incidence has changed since the USPSTF recommendation against screening for all men in May 2012. OBJECTIVE To examine recent changes in stage-specific prostate cancer incidence and PSA screening rates following the 2008 and 2012 USPSTF recommendations. DESIGN AND SETTINGS Ecologic study of age-standardized prostate cancer incidence (newly diagnosed cases/100,000 men aged ≥50 years) by stage from 2005 through 2012 using data from 18 population-based Surveillance, Epidemiology, and End Results (SEER) registries and PSA screening rate in the past year among men 50 years and older without a history of prostate cancer who responded to the 2005 (n = 4580), 2008 (n = 3476), 2010 (n = 4157), and 2013 (n = 6172) National Health Interview Survey (NHIS). EXPOSURES The USPSTF recommendations to omit PSA-based screening for average-risk men. MAIN OUTCOMES AND MEASURES Prostate cancer incidence and incidence ratios (IRs) comparing consecutive years from 2005 through 2012 by age (≥50, 50-74, and ≥75 years) and SEER summary stage categorized as local/regional or distant and PSA screening rate and rate ratios (SRRs) comparing successive survey years by age. RESULTS Prostate cancer incidence per 100,000 in men 50 years and older (N = 446,009 in SEER areas) was 534.9 in 2005, 540.8 in 2008, 505.0 in 2010, and 416.2 in 2012; rates began decreasing in 2008 and the largest decrease occurred between 2011 and 2012, from 498.3 (99% CI, 492.8-503.9) to 416.2 (99% CI, 411.2-421.2). The number of men 50 years and older diagnosed with prostate cancer nationwide declined by 33,519, from 213,562 men in 2011 to 180,043 men in 2012. Declines in incidence since 2008 were confined to local/regional-stage disease and were similar across age and race/ethnicity groups. The percentage of men 50 years and older reporting PSA screening in the past 12 months was 36.9% in 2005, 40.6% in 2008, 37.8% in 2010, and 30.8% in 2013. In relative terms, screening rates increased by 10% (SRR, 1.10; 99% CI, 1.01-1.21) between 2005 and 2008 and then decreased by 18% (SRR, 0.82; 99% CI, 0.75-0.89) between 2010 and 2013. Similar screening patterns were found in age subgroups 50 to 74 years and 75 years and older. CONCLUSIONS AND RELEVANCE Both the incidence of early-stage prostate cancer and rates of PSA screening have declined and coincide with 2012 USPSTF recommendation to omit PSA screening from routine primary care for men. Longer follow-up is needed to see whether these decreases are associated with trends in mortality.
Discriminative Learning of Deep Convolutional Feature Point Descriptors
Deep learning has revolutionized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose Euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available.
Semantic Relations Between Nominals, by Vivi Nastase (FBK, Trento), Preslav Nakov (QCRI, Qatar Foundation), Diarmuid Ó Séaghdha (University of Cambridge), and Stan Szpakowicz (University of Ottawa). Morgan & Claypool (Synthesis Lectures on Human Language Technologies, edited by Graeme Hirst, volume
Understanding noun compounds is the challenge that drew me to study computational linguistics. Think about how just two words, side by side, evoke a whole story: cacao seeds evokes the tree on which the cacao seeds grow, and to understand cacao powder we need to also imagine the seeds of the cacao tree that are crushed to powder. What conjures up these concepts of tree and grow, and seeds and crush, which are not explicitly present in the written word but are essential for our complete understanding of the compounds? The mechanisms by which we make sense of noun compounds can illuminate how we understand language more generally. And because the human mind is so wily as to provide interpretations even when we do not ask it to, I have always found it useful to study these phenomena of language on the computer, because the computer surely does not (yet) have the type of knowledge that must be brought to bear on the problem. If you find these phenomena equally intriguing and puzzling, then you will find this book by Nastase, Nakov, Ó Séaghdga, and Szpakowicz a wonderful summary of past research efforts and a good introduction to the current methods for analyzing semantic relations. To be clear, this book is not only about noun compounds, but explores all types of relations that can hold between what is expressed linguistically as nominal. Such nominals include entities (e.g., Godiva, Belgium) as well as nominals that refer to events (cultivation, roasting) and nominals with complex structure (delicious milk chocolate). In doing so, describing the different semantic relations between chocolate in the 20th century and chocolate in Belgium is within the scope of this book. This is a wise choice as there are then some linguistic cues that will help define and narrow the types of semantic relations (e.g., the prepositions above). Noun compounds are degenerate in the sense that there are few if any overt linguistic cues as to the semantic relations between the nominals.
FedX: Optimization Techniques for Federated Query Processing on Linked Data
Motivated by the ongoing success of Linked Data and the growing amount of semantic data sources available on the Web, new challenges to query processing are emerging. Especially in distributed settings that require joining data provided by multiple sources, sophisticated optimization techniques are necessary for efficient query processing. We propose novel join processing and grouping techniques to minimize the number of remote requests, and develop an effective solution for source selection in the absence of preprocessed metadata. We present FedX, a practical framework that enables efficient SPARQL query processing on heterogeneous, virtually integrated Linked Data sources. In experiments, we demonstrate the practicability and efficiency of our framework on a set of real-world queries and data sources from the Linked Open Data cloud. With FedX we achieve a significant improvement in query performance over state-of-the-art federated query engines.
Sample size calculation in medical studies
Optimum sample size is an essential component of any research. The main purpose of the sample size calculation is to determine the number of samples needed to detect significant changes in clinical parameters, treatment effects or associations after data gathering. It is not uncommon for studies to be underpowered and thereby fail to detect the existing treatment effects due to inadequate sample size. In this paper, we explain briefly the basic principles of sample size calculations in medical studies.
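As a concrete illustration of the kind of calculation discussed, the snippet below computes the per-group sample size for comparing two means under a normal approximation; the effect size, standard deviation, alpha, and power used in the example are made up.

    import math
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        """Two-sample comparison of means: sample size per arm (normal approximation)."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

    # detect a 5-point difference, SD 10, two-sided alpha 0.05, power 80%
    print(n_per_group(delta=5, sigma=10))     # about 63 per group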
The Design and Implementation of a Mobile RFID Tag Sorting Robot
Libraries, manufacturing lines, and offices of the future all stand to benefit from knowing the exact spatial order of RFID-tagged books, components, and folders, respectively. To this end, radio-based localization has demonstrated the potential for high accuracy. Key enabling ideas include motion-based synthetic aperture radar, multipath detection, and the use of different frequencies (channels). But indoors in real-world situations, current systems often fall short of the mark, mainly because of the prevalence and strength of multipath reflections of the radio signal off nearby objects. In this paper we describe the design and implementation of MobiTagbot, an autonomous wheeled robot reader that conducts a roving survey of the above such areas to achieve an exact spatial order of RFID-tagged objects in very close (1--6 cm) spacings. Our approach leverages a serendipitous correlation between the changes in multipath reflections that occur with motion and the effect of changing the carrier frequency (channel) of the RFID query. By carefully observing the relationship between channel and phase, MobiTagbot detects if multipath is likely prevalent at a given robot reader location. If so, MobiTagbot excludes phase readings from that reader location, and generates a final location estimate using phase readings from other locations as the robot reader moves in space. Experimentally, we demonstrate that cutting-edge localization algorithms including Tagoram are not accurate enough to exactly order items in very close proximity, but MobiTagbot is, achieving nearly 100% ordering accuracy for items at low (3--6 cm) spacings and 86% accuracy for items at very low (1--3 cm) spacings.
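The channel/phase relationship the system exploits can be checked with a simple fit: for a single dominant path the unwrapped phase is approximately linear in carrier frequency, so large residuals from a straight-line fit across channels flag likely multipath at a reader location. The residual threshold below is an illustrative placeholder, not a value from the paper.

    import numpy as np

    def multipath_suspect(freqs_hz, phases_rad, resid_thresh=0.3):
        """True if phase-vs-frequency deviates strongly from a straight line."""
        phases = np.unwrap(np.asarray(phases_rad, dtype=float))
        slope, intercept = np.polyfit(freqs_hz, phases, 1)
        residuals = phases - (slope * np.asarray(freqs_hz, dtype=float) + intercept)
        return float(np.sqrt(np.mean(residuals ** 2))) > resid_thresh

    freqs = np.linspace(902.75e6, 927.25e6, 50)              # FCC-band RFID channels
    clean = (4 * np.pi * 1.2 / 3e8) * freqs                  # tag ~1.2 m away, no multipath
    print(multipath_suspect(freqs, clean % (2 * np.pi)))     # False expected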
Upregulation of beta(3)-adrenoceptors and altered contractile response to inotropic amines in human failing myocardium.
BACKGROUND Contrary to beta(1)- and beta(2)-adrenoceptors, beta(3)-adrenoceptors mediate a negative inotropic effect in human ventricular muscle. To assess their functional role in heart failure, our purpose was to compare the expression and contractile effect of beta(3)-adrenoceptors in nonfailing and failing human hearts. METHODS AND RESULTS We analyzed left ventricular samples from 29 failing (16 ischemic and 13 dilated cardiomyopathic) hearts (ejection fraction 18.6+/-2%) and 25 nonfailing (including 12 innervated) explanted hearts (ejection fraction 64.2+/-3%). beta(3)-Adrenoceptor proteins were identified by immunohistochemistry in ventricular cardiomyocytes from nonfailing and failing hearts. Contrary to beta(1)-adrenoceptor mRNA, Western blot analysis of beta(3)-adrenoceptor proteins showed a 2- to 3-fold increase in failing compared with nonfailing hearts. A similar increase was observed for Galpha(i-2) proteins that couple beta(3)-adrenoceptors to their negative inotropic effect. Contractile tension was measured in electrically stimulated myocardial samples ex vivo. In failing hearts, the positive inotropic effect of the nonspecific amine isoprenaline was reduced by 75% compared with that observed in nonfailing hearts. By contrast, the negative inotropic effect of beta(3)-preferential agonists was only mildly reduced. CONCLUSIONS Opposite changes occur in beta(1)- and beta(3)-adrenoceptor abundance in the failing left ventricle, with an imbalance between their inotropic influences that may underlie the functional degradation of the human failing heart.
Propminer: A Workflow for Interactive Information Extraction and Exploration using Dependency Trees
The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction. Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for persons without an extensive NLP background. In this system demonstration, we present a tool and a workflow designed to enable initiate users to interactively explore the effect and expressivity of creating Information Extraction rules over dependency trees. We introduce the proposed five step workflow for creating information extractors, the graph query based rule language, as well as the core features of the PROPMINER tool.
11-Level cascaded H-bridge grid-tied inverter interface with solar panels
This paper presents a single-phase 11-level (5 H-bridges) cascade multilevel DC-AC grid-tied inverter. Each inverter bridge is connected to a 200 W solar panel. OPAL-RT lab was used as the hardware in the loop (HIL) real-time control system platform where a Maximum Power Point Tracking (MPPT) algorithm was implemented based on the inverter output power to assure optimal operation of the inverter when connected to the power grid as well as a Phase Locked Loop (PLL) for phase and frequency match. A novel SPWM scheme is proposed in this paper to be used with the solar panels that can account for voltage profile fluctuations among the panels during the day. Simulation and experimental results are shown for voltage and current during synchronization mode and power transferring mode to validate the methodology for grid connection of renewable resources.
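The MPPT logic can be illustrated with a generic perturb-and-observe step: the voltage reference keeps moving in the direction that last increased the measured power. This is only a sketch of the general technique, not the paper's inverter-output-power implementation on the OPAL-RT platform.

    def perturb_and_observe(v_prev, p_prev, v_now, p_now, step=1.0):
        """Return the next panel voltage reference for one P&O iteration."""
        if p_now > p_prev:
            direction = 1.0 if v_now > v_prev else -1.0   # keep going the same way
        else:
            direction = -1.0 if v_now > v_prev else 1.0   # power dropped: reverse
        return v_now + direction * step

    # toy usage: power rose after increasing the voltage, so keep increasing it
    print(perturb_and_observe(v_prev=30.0, p_prev=180.0, v_now=31.0, p_now=185.0))  # 32.0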
Prophet: what app you wish to use next
A variety of applications (apps) installed on smart phones greatly enrich our lives, but make it more difficult to organize our screens and folders. Predicting the apps that will be in use next can benefit users a lot. In this poster, we propose some light-weight Bayesian methods to predict the next app based on the app usage history. The evaluation on the Mobile Data Challenge (MDC) dataset gives very encouraging results. In addition, we suggest a natural way to integrate the app prediction features into the user interface. Users would find it convenient to access the predicted apps with simple touches.
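A minimal version of such a lightweight predictor is a first-order model with Laplace-smoothed transition counts: which app tends to follow the one just closed. The poster's methods may condition on more context (time of day, location), so this is only an illustrative baseline with hypothetical app names.

    from collections import Counter, defaultdict

    class NextAppPredictor:
        def __init__(self):
            self.counts = defaultdict(Counter)   # counts[prev][next]
            self.apps = set()

        def observe(self, prev_app, next_app):
            self.counts[prev_app][next_app] += 1
            self.apps.update((prev_app, next_app))

        def predict(self, prev_app, top_k=3):
            # Laplace-smoothed P(next | prev)
            scores = {a: self.counts[prev_app][a] + 1 for a in self.apps}
            total = sum(scores.values())
            ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
            return [(app, count / total) for app, count in ranked[:top_k]]

    p = NextAppPredictor()
    for prev, nxt in [("mail", "calendar"), ("mail", "calendar"), ("mail", "browser")]:
        p.observe(prev, nxt)
    print(p.predict("mail"))   # "calendar" ranked first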
Lightweight DDoS flooding attack detection using NOX/OpenFlow
Distributed denial-of-service (DDoS) attacks became one of the main Internet security problems over the last decade, threatening public web servers in particular. Although the DDoS mechanism is widely understood, its detection is a very hard task because of the similarities between normal traffic and useless packets, sent by compromised hosts to their victims. This work presents a lightweight method for DDoS attack detection based on traffic flow features, in which the extraction of such information is made with a very low overhead compared to traditional approaches. This is possible due to the use of the NOX platform which provides a programmatic interface to facilitate the handling of switch information. Other major contributions include the high rate of detection and very low rate of false alarms obtained by flow analysis using Self Organizing Maps.
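To make the flow-analysis step concrete, here is a compact numpy sketch of training a Self-Organizing Map on per-flow feature vectors (e.g. packets per flow, bytes per flow, duration); flows mapping to units far from the "normal" region of the trained map would then be flagged. The grid size, learning schedule, and features are illustrative assumptions rather than the paper's NOX/OpenFlow configuration.

    import numpy as np

    def train_som(flows, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """flows: array [n_flows, n_features] scaled to [0, 1]; returns the weight grid."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, flows.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        total_steps, step = epochs * len(flows), 0
        for _ in range(epochs):
            for x in flows[rng.permutation(len(flows))]:
                frac = step / total_steps
                lr, sigma = lr0 * (1 - frac), max(sigma0 * (1 - frac), 0.5)
                bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                                       (h, w))                        # best matching unit
                dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                g = np.exp(-dist2 / (2 * sigma ** 2))                 # neighbourhood function
                weights += lr * g[..., None] * (x - weights)
                step += 1
        return weights

    flows = np.random.rand(500, 6)        # stand-in flow features
    som = train_som(flows)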
Construction of Yan Fu’s view on social history and the turning of modern history
The construction of Yan Fu's view on social history combined the indigenization of Western historiography with the modernization of traditional Chinese historiography, reflecting a turn toward modern historiography. The academic sources of Yan's view on social history include Western thought such as Herbert Spencer's social Darwinist theory, Edward Jenks' patriarchal clan system theory, and John Seeley's political historiography, as well as indigenous sources such as Yang Zhu's notion of self-benefit, Mozi's selfless love, and Buddhist views on mood.
On the Existence of an Optimal Capital Structure: Theory and Evidence
Michael Bradley, Gregg A. Jarrell, and E. Han Kim. Source: The Journal of Finance, Vol. 39, No. 3, Papers and Proceedings, Forty-Second Annual Meeting, American Finance Association, San Francisco, CA, December 28-30, 1983 (July 1984), pp. 857-878. Published by Blackwell Publishing for the American Finance Association. Stable URL: http://www.jstor.org/stable/2327950
Pattern specification and application in metamodels in ecore
The increased use of domain-specific languages (DSLs) and the absence of adequate tooling to take advantage of commonalities among DSLs has led to a situation where the same structure is duplicated in multiple DSLs. This observation has led to the work described in this paper: an investigation of methods and tools for pattern specification and application, and two extensions of a state-of-the-art tool for patterns in DSLs, DSL-tao. The extensions make patterns more understandable and also make the tool suitable for more complex pattern applications. The first extension introduces a literal specification for patterns and the second introduces a merge function for the application of patterns. These two extensions are demonstrated on an often-occurring pattern in DSLs.
Privacy free indoor action detection system using top-view depth camera based on key-poses
In this paper, we propose an indoor action detection system that can automatically log users' activities of daily living, since each activity generally consists of a number of actions. The hardware setup adopts top-view depth cameras, which makes our system less privacy-sensitive and less intrusive to users. We regard the image series of an action as a set of key-poses of the user of interest arranged in a certain temporal order, and we use the latent SVM framework to jointly learn the appearance of the key-poses and their temporal locations. Two kinds of features are proposed. The first is a histogram of depth-difference values, which encodes the shape of the human poses. The second is a location-signified feature, which captures the spatial relations among the person, the floor, and other static objects. Moreover, we find that incorrect detections of one action type are often associated with another specific action type. We therefore design an algorithm that automatically discovers the action pairs that are hardest to differentiate and suppresses the incorrect detections. Experiments conducted to validate our system show the effectiveness and robustness of the proposed method.
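The abstract does not define the depth-difference feature precisely. The sketch below shows one plausible reading: the difference between an empty-scene background depth map and the current frame, histogrammed over the person's mask. The bin count, depth range, and the background-subtraction interpretation are assumptions, not the paper's specification.

```python
import numpy as np

def depth_difference_histogram(depth, background, mask, bins=16, max_diff=2000):
    """Illustrative depth-difference feature: within the person's mask,
    take background minus current depth (assumed in millimetres) and bin
    it into a normalized histogram describing the pose's depth profile."""
    diff = (background.astype(np.float32) - depth.astype(np.float32))[mask]
    hist, _ = np.histogram(np.clip(diff, 0, max_diff),
                           bins=bins, range=(0, max_diff))
    return hist / max(hist.sum(), 1)
```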
Symmetric Variational Autoencoder and Connections to Adversarial Learning
A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAEs and adversarial learning, and provides insights that allow us to ameliorate shortcomings of some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.
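For reference, the small helper below computes the symmetric KL quantity for diagonal Gaussians. Note that the sVAE described above symmetrizes the KL divergence between joint distributions over data and latent codes, not merely between two Gaussians, so this is only an illustration of the divergence itself, not of the training objective.

```python
import numpy as np

def kl_diag_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, diag var0) || N(mu1, diag var1) ), summed over dimensions."""
    return 0.5 * np.sum(np.log(var1 / var0)
                        + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def symmetric_kl(mu0, var0, mu1, var1):
    """Symmetric KL: KL(p||q) + KL(q||p)."""
    return (kl_diag_gauss(mu0, var0, mu1, var1)
            + kl_diag_gauss(mu1, var1, mu0, var0))

# Toy usage with two 3-dimensional diagonal Gaussians:
print(symmetric_kl(np.zeros(3), np.ones(3), np.ones(3), 2 * np.ones(3)))
```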
Vasomotor symptoms decrease in women with breast cancer randomized to treatment with applied relaxation or electro-acupuncture: a preliminary study.
OBJECTIVE To evaluate the effect of applied relaxation and electro-acupuncture on vasomotor symptoms in women treated for breast cancer. METHODS Thirty-eight postmenopausal women with breast cancer and vasomotor symptoms were randomized to treatment with electro-acupuncture (n = 19) or applied relaxation (n = 19) for 12 weeks. The number of hot flushes was registered daily in a logbook before and during treatment and after 3 and 6 months of follow-up. RESULTS Thirty-one women completed 12 weeks of treatment and 6 months of follow-up. After 12 weeks of applied relaxation, the number of flushes per 24 h had decreased from 9.2 (95% confidence interval (CI) 6.6-11.9) at baseline to 4.5 (95% CI 3.2-5.8), and to 3.9 (95% CI 1.8-6.0) at 6 months of follow-up (n = 14). The flushes per 24 h were reduced from 8.4 (95% CI 6.6-10.2) to 4.1 (95% CI 3.0-5.2) after 12 weeks of treatment with electro-acupuncture, and to 3.5 (95% CI 1.7-5.3) after 6 months of follow-up (n = 17). In both groups, the mean Kupperman Index score was significantly reduced after treatment and remained unchanged 6 months after the end of treatment. CONCLUSION We suggest that applied relaxation and electro-acupuncture should be further evaluated as possible treatments for vasomotor symptoms in postmenopausal women with breast cancer.
Social Balance on Networks : The Dynamics of Friendship and Enmity
How do social networks evolve when both friendly and unfriendly relations exist? Here we propose a simple dynamics for social networks in which the sense of a relationship can change so as to eliminate imbalanced triads—relationship triangles that contain 1 or 3 unfriendly links. In this dynamics, a friendly link changes to unfriendly, or vice versa, in an imbalanced triad to make the triad balanced. Such networks undergo a dynamic phase transition from a steady state to "utopia"—all friendly links—as the amount of network friendliness is changed. Basic features of the long-time dynamics and the phase transition are discussed.
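A minimal simulation of the link-flipping dynamics sketched above is given below, assuming a complete graph and a uniformly random choice of which link in an imbalanced triad to flip. The published model distinguishes triad types with an update-probability parameter, which this simplified sketch omits.

```python
import itertools
import random

def simulate_triad_dynamics(n=20, p_friendly=0.7, steps=100000, seed=1):
    """Repeatedly pick a random triad; if it is imbalanced (the product of
    the three link signs is negative, i.e. 1 or 3 unfriendly links), flip
    one randomly chosen link. Returns the final fraction of friendly links."""
    rng = random.Random(seed)
    nodes = range(n)
    sign = {frozenset(e): (1 if rng.random() < p_friendly else -1)
            for e in itertools.combinations(nodes, 2)}
    triads = list(itertools.combinations(nodes, 3))
    for _ in range(steps):
        i, j, k = rng.choice(triads)
        edges = [frozenset((i, j)), frozenset((j, k)), frozenset((i, k))]
        if sign[edges[0]] * sign[edges[1]] * sign[edges[2]] < 0:
            e = rng.choice(edges)
            sign[e] = -sign[e]                      # flip one link
    return sum(s > 0 for s in sign.values()) / len(sign)

print(simulate_triad_dynamics())   # fraction of friendly links after the run
```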
Association between second-generation antipsychotics and newly diagnosed treated diabetes mellitus: does the effect differ by dose?
BACKGROUND The benefits of some second-generation antipsychotics (SGAs) must be weighed against the increased risk for diabetes mellitus. This study examines whether the association between SGAs and diabetes differs by dose. METHODS Patients were ≥18 years of age from three US healthcare systems and exposed to an SGA for ≥45 days between November 1, 2002 and March 31, 2005. Patients had no evidence of diabetes before the index date and no previous antipsychotic prescription filled within 3 months before the index date. 49,946 patients were exposed to SGAs during the study period. Person-time exposed to antipsychotic dose (categorized by tertiles for each drug) was calculated. Newly treated diabetes was identified using pharmacy data to determine patients exposed to anti-diabetic therapies. Adjusted hazard ratios for diabetes across dose tertiles of each SGA were calculated using the lowest dose tertile as reference. RESULTS Olanzapine exhibited a dose-dependent relationship for risk of diabetes, with elevated and progressive risk across intermediate (diabetes rate per 100 person-years = 1.9; adjusted hazard ratio (HR), 1.7; 95% confidence interval (CI), 1.0-3.1) and top tertile doses (diabetes rate per 100 person-years = 2.7; adjusted HR, 2.5; 95% CI, 1.4-4.5). Quetiapine and risperidone exhibited elevated risk at the top dose tertile with no evidence of increased risk at the intermediate dose tertile. Unlike olanzapine, quetiapine, and risperidone, neither aripiprazole nor ziprasidone was associated with risk of diabetes at any dose tertile. CONCLUSIONS In this large multi-site epidemiologic study, within each drug-specific stratum, the risk of diabetes for persons exposed to olanzapine, risperidone, and quetiapine was dose-dependent and elevated at therapeutic doses. In contrast, in the aripiprazole-specific and ziprasidone-specific strata, these newer agents were not associated with an increased risk of diabetes, and dose-dependent relationships were not apparent. However, these estimates should be interpreted with caution, as they are imprecise due to small numbers.
Dynamic pickup and delivery problems
In the last decade, there has been a growing body of research on dynamic vehicle routing problems. This article surveys the subclass of those problems called dynamic pickup and delivery problems, in which objects or people have to be collected and delivered in real time. It discusses general issues as well as solution strategies.
Learning to summarize web image and text mutually
We consider the problem of learning to summarize images with text and to visualize text with images, which we call Mutual-Summarization. We divide the web image-text data space into three subspaces, namely the pure image space (PIS), the pure text space (PTS), and the image-text joint space (ITJS), and we treat the ITJS as a knowledge base. To summarize an image with sentences, we map images from the PIS to the ITJS via image classification models and apply text summarization to the corresponding texts in the ITJS. For text visualization, we map texts from the PTS to the ITJS via text categorization models and generate the visualization by choosing semantically related images from the ITJS, ranked by their confidence. In these approaches, images are represented by color histograms, dense visual words, and feature descriptors at different levels of a spatial pyramid, and the texts are modeled with the Latent Dirichlet Allocation (LDA) topic model. Multiple Kernel (MK) methods are used to learn classifiers for images and texts, respectively. We show Mutual-Summarization results on our newly collected dataset of six major events ("Gulf Oil Spill", "Haiti Earthquake", etc.) and demonstrate improved cross-media retrieval performance over existing methods in terms of MAP, Precision and Recall.
Polar format algorithm for bistatic SAR
Matched filtering (MF) of phase history data is a mathematically ideal but computationally expensive approach to bistatic synthetic aperture radar (SAR) image formation. Fast backprojection algorithms (BPAs) for image formation have recently been shown to give improved O(N² log₂ N) performance. An O(N² log₂ N) bistatic polar format algorithm (PFA) based on a bistatic far-field assumption is derived. This algorithm is a generalization of the popular PFA for monostatic SAR image formation and is highly amenable to implementation with existing monostatic image formation processors. Limits on the size of an imaged scene, analogous to those in monostatic systems, are derived for the bistatic PFA.
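To make the polar-format idea tangible, the sketch below resamples phase-history samples from a polar raster onto a uniform rectangular grid in spatial frequency and then applies a 2-D inverse FFT. This is a generic PFA-style skeleton under assumed inputs, not the bistatic algorithm of the paper; the far-field bisector geometry and the scene-size limits are omitted.

```python
import numpy as np
from scipy.interpolate import griddata

def polar_format_image(kx, ky, phase_history, n=256):
    """Bare-bones polar-format style image formation (illustrative sketch):
    kx, ky are the spatial-frequency coordinates of each phase-history
    sample (polar raster); the samples are interpolated onto a uniform
    rectangular grid and inverse-FFT'd to form an image."""
    gx = np.linspace(kx.min(), kx.max(), n)
    gy = np.linspace(ky.min(), ky.max(), n)
    GX, GY = np.meshgrid(gx, gy)
    pts = np.column_stack((kx.ravel(), ky.ravel()))
    # Interpolate real and imaginary parts separately onto the grid.
    re = griddata(pts, phase_history.real.ravel(), (GX, GY),
                  method="linear", fill_value=0.0)
    im = griddata(pts, phase_history.imag.ravel(), (GX, GY),
                  method="linear", fill_value=0.0)
    grid = re + 1j * im
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
```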
FlexFlow: A Flexible Dataflow Accelerator Architecture for Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are very computation-intensive. Recently, many CNN accelerators based on the intrinsic parallelism of CNNs have been proposed. However, we observed that there is a big mismatch between the parallel types supported by the computing engine and the dominant parallel types of CNN workloads. This mismatch seriously degrades resource utilization of existing accelerators. In this paper, we propose a flexible dataflow architecture (FlexFlow) that can leverage the complementary effects among feature-map, neuron, and synapse parallelism to mitigate the mismatch. We evaluated our design on six typical practical workloads; it achieves a 2-10x performance speedup and a 2.5-10x power-efficiency improvement compared with three state-of-the-art accelerator architectures. Meanwhile, FlexFlow is highly scalable with growing computing-engine scale.
Performance analysis of two open source intrusion detection systems
Several studies have compared the performance of the open source intrusion detection systems Snort and Suricata. However, most studies were limited either to security indicators or to performance measurements under the same operating system. The objective of this study is to give a comprehensive analysis of both products in terms of several security-related and performance-related indicators. In addition, we tested the products under two different operating systems. Several experiments were run to evaluate the effects of the open source intrusion detection and prevention systems Snort and Suricata, the Windows and Linux operating systems, and various attack types on system resource usage, dropped-packet rate, and ability to detect intrusions. The results show that Suricata has higher CPU and RAM utilization than Snort in all cases on both operating systems, but a lower percentage of dropped packets in five of the six simulated attacks. Both products had the same number of correctly identified intrusions. The results also show that the Linux-based installations consume more system resources, while the Windows-based installations had a higher rate of dropped packets, indicating that these two intrusion detection and prevention systems should be run on Linux. However, both systems are inappropriate for high volumes of traffic in a single-server setting.
Face recognition with learning-based descriptor
We present a novel approach to address the representation issue and the matching issue in face recognition (verification). First, our approach encodes the micro-structures of the face with a new learning-based encoding method. Unlike many previous manually designed encoding methods (e.g., LBP or SIFT), we use unsupervised learning techniques to learn an encoder from the training examples, which can automatically achieve a very good tradeoff between discriminative power and invariance. We then apply PCA to get a compact face descriptor, and we find that a simple normalization mechanism after PCA can further improve the discriminative ability of the descriptor. The resulting face representation, the learning-based (LE) descriptor, is compact, highly discriminative, and easy to extract. To handle the large pose variation in real-life scenarios, we propose a pose-adaptive matching method that uses pose-specific classifiers to deal with different pose combinations (e.g., frontal vs. frontal, frontal vs. left) of the matching face pair. Our approach is comparable with the state-of-the-art methods on the Labeled Faces in the Wild (LFW) benchmark (we achieved an 84.45% recognition rate), while maintaining excellent compactness, simplicity, and generalization ability across different datasets.
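The sketch below illustrates only the compact-descriptor stage suggested above: concatenated per-cell code histograms are reduced with PCA and then L2-normalized. The learned encoder that produces the codes, the histogram layout, and the 400-dimensional target are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

def compact_descriptors(histograms, n_components=400):
    """histograms: (n_faces, n_cells * n_codes) concatenated code histograms.
    Returns unit-length descriptors after PCA, plus the fitted PCA model."""
    pca = PCA(n_components=min(n_components, *histograms.shape))
    reduced = pca.fit_transform(histograms)
    return normalize(reduced), pca   # L2 normalization after PCA

# Toy usage with random stand-ins for real per-face histograms:
faces = np.random.rand(500, 1024)
descriptors, model = compact_descriptors(faces)
print(descriptors.shape)   # (500, 400)
```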
Laser grooving of semiconductor wafers: Comparing a simplified numerical approach with experiments
Laser grooving is used for the singulation of advanced CMOS wafers because it is believed to exert lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat-affected zone. In this work we present a model to predict the temperature distribution, material removal, and resulting stress in a sandwiched structure of metals and dielectric materials commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three-dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by the simulations, which is probably due to the redistribution of molten metal.