Regularized linear and kernel redundancy analysis
Redundancy analysis (RA) is a versatile technique used to predict multivariate criterion variables from multivariate predictor variables. The reduced-rank feature of RA captures redundant information in the criterion variables in the most parsimonious way. A ridge type of regularization was introduced into RA to deal with the multicollinearity problem among the predictor variables. The regularized linear RA was then extended to nonlinear RA using a kernel method to enhance predictability. The usefulness of the proposed procedures was demonstrated by a Monte Carlo study and through the analysis of two real data sets.
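As a rough illustration of the linear, ridge-regularized case, the sketch below fits a reduced-rank regression of a criterion block Y on a predictor block X with a ridge penalty; the identity weighting of the criterion space and the function name are illustrative simplifications rather than the authors' exact estimator, and a kernel variant would replace X by a (centered) kernel feature representation.

```python
import numpy as np

def ridge_reduced_rank_regression(X, Y, rank, lam=1.0):
    """Hedged sketch of ridge-regularized reduced-rank regression,
    a linear analogue of regularized redundancy analysis."""
    n, p = X.shape
    # full-rank ridge coefficients
    B_full = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)
    Y_hat = X @ B_full                                # fitted criterion values
    # keep only the leading directions of the fitted criterion space
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                       # projector onto top-`rank` directions
    return B_full @ P                                 # rank-constrained coefficient matrix
```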
Unsupervised feature selection for linked social media data
The prevalent use of social media produces mountains of unlabeled, high-dimensional data. Feature selection has been shown effective in dealing with high-dimensional data for efficient data mining. Feature selection for unlabeled data remains a challenging task due to the absence of label information by which feature relevance can be assessed. The unique characteristics of social media data further complicate the already challenging problem of unsupervised feature selection (e.g., part of social media data is linked, which invalidates the independent and identically distributed assumption), bringing about new challenges for traditional unsupervised feature selection algorithms. In this paper, we study the differences between social media data and traditional attribute-value data, investigate whether the relations revealed in linked data can be used to help select relevant features, and propose a novel unsupervised feature selection framework, LUFS, for linked social media data. We perform experiments with real-world social media datasets to evaluate the effectiveness of the proposed framework and probe the workings of its key components.
TECA: A Parallel Toolkit for Extreme Climate Analysis
We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a massive 10TB CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.
CLEAR: Cross-layer exploration for architecting resilience: Combining hardware and software techniques to tolerate soft errors in processor cores
We present a first-of-its-kind framework which overcomes a major challenge in the design of digital systems that are resilient to reliability failures: achieving desired resilience targets at minimal cost (energy, power, execution time, area) by combining resilience techniques across various layers of the system stack (circuit, logic, architecture, software, algorithm). This is also referred to as cross-layer resilience. In this paper, we focus on radiation-induced soft errors in processor cores. We address both single-event upsets (SEUs) and single-event multiple upsets (SEMUs) in terrestrial environments. Our framework automatically and systematically explores the large space of comprehensive resilience techniques and their combinations across various layers of the system stack (798 cross-layer combinations in this paper), derives cost-effective solutions that achieve resilience targets at minimal costs, and provides guidelines for the design of new resilience techniques. We demonstrate the practicality and effectiveness of our framework using two diverse designs: a simple, in-order processor core and a complex, out-of-order processor core. Our results demonstrate that a carefully optimized combination of circuit-level hardening, logic-level parity checking, and micro-architectural recovery provides a highly cost-effective soft error resilience solution for general-purpose processor cores. For example, a 50× improvement in silent data corruption rate is achieved at only 2.1% energy cost for an out-of-order core (6.1% for an in-order core) with no speed impact. However, selective circuit-level hardening alone, guided by a thorough analysis of the effects of soft errors on application benchmarks, provides a cost-effective soft error resilience solution as well (with ~1% additional energy cost for a 50× improvement in silent data corruption rate).
Interoception: the sense of the physiological condition of the body
Converging evidence indicates that primates have a distinct cortical image of homeostatic afferent activity that reflects all aspects of the physiological condition of all tissues of the body. This interoceptive system, associated with autonomic motor control, is distinct from the exteroceptive system (cutaneous mechanoreception and proprioception) that guides somatic motor activity. The primary interoceptive representation in the dorsal posterior insula engenders distinct highly resolved feelings from the body that include pain, temperature, itch, sensual touch, muscular and visceral sensations, vasomotor activity, hunger, thirst, and 'air hunger'. In humans, a meta-representation of the primary interoceptive activity is engendered in the right anterior insula, which seems to provide the basis for the subjective image of the material self as a feeling (sentient) entity, that is, emotional awareness.
Spatial Features for Handwritten Kannada and English Character Recognition
This paper presents a handwritten Kannada and English character recognition system based on spatial features. Directional spatial features, viz. stroke density, stroke length, and the number of strokes, are employed as potential features to characterize handwritten Kannada numerals/vowels and English uppercase alphabets. A KNN classifier is used to classify the characters based on these features with four-fold cross validation. The proposed system achieves recognition accuracies of 96.2%, 90.1%, and 91.04% for handwritten Kannada numerals, vowels, and English uppercase alphabets, respectively.
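A minimal sketch of the classification stage, assuming the directional spatial features have already been extracted into a feature matrix; the data below are synthetic placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are character samples, columns are directional spatial
# features (stroke density, stroke length, stroke count per zone).
rng = np.random.default_rng(0)
X = rng.random((400, 30))
y = rng.integers(0, 10, 400)                # e.g. the ten Kannada numeral classes

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=4)   # four-fold cross validation
print("mean accuracy: %.3f" % scores.mean())
```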
CapsuleGAN: Generative Adversarial Capsule Network
We present the Generative Adversarial Capsule Network (CapsuleGAN), a framework that uses capsule networks (CapsNets) instead of the standard convolutional neural networks (CNNs) as discriminators within the generative adversarial network (GAN) setting for modeling image data. We provide guidelines for designing CapsNet discriminators and the updated GAN objective function, which incorporates the CapsNet margin loss, for training CapsuleGAN models. We show that CapsuleGAN outperforms convolutional GANs at modeling the image data distribution on the MNIST and CIFAR-10 datasets, as evaluated by the generative adversarial metric and on semi-supervised image classification.
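For reference, the CapsNet margin loss that the updated GAN objective incorporates has the widely used form below; this numpy sketch uses the default constants from the original capsule network paper, which may differ from the values chosen for CapsuleGAN.

```python
import numpy as np

def capsule_margin_loss(v_norms, targets, m_plus=0.9, m_minus=0.1, lam=0.5):
    """CapsNet margin loss (Sabour et al. defaults).

    v_norms : (batch, n_classes) lengths of the class capsule vectors
    targets : (batch, n_classes) one-hot labels (e.g. real/fake for a GAN critic)
    """
    present = targets * np.maximum(0.0, m_plus - v_norms) ** 2
    absent = lam * (1.0 - targets) * np.maximum(0.0, v_norms - m_minus) ** 2
    return (present + absent).sum(axis=1).mean()
```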
Leveraging microservices architecture by using Docker technology
Microservices architecture is not just hype; for a while now it has been attracting attention from organizations that want to shorten a software product's time to market by improving productivity through maximizing automation across the product's entire life cycle. However, the microservices architecture approach also introduces a great deal of new complexity and requires a certain level of maturity from application developers in order to confidently apply the architectural style. Docker has been a disruptive technology that changes the way applications are developed and distributed. With its many advantages, Docker is a very good fit for implementing microservices architecture. In this paper we discuss how Docker can effectively help in leveraging microservices architecture, with a real working model as a case study.
Meshworm: A Peristaltic Soft Robot With Antagonistic Nickel Titanium Coil Actuators
This paper presents the complete development and analysis of a soft robotic platform that exhibits peristaltic locomotion. The design principle is based on the antagonistic arrangement of circular and longitudinal muscle groups of Oligochaetes. Sequential antagonistic motion is achieved in a flexible braided mesh-tube structure using nickel titanium (NiTi) coil actuators wrapped in a spiral pattern around the circumference. An enhanced theoretical model of the NiTi coil spring describes the combination of martensite deformation and spring elasticity as a function of geometry. A numerical model of the mesh structures reveals how peristaltic actuation induces robust locomotion and details the deformation by the contraction of circumferential NiTi actuators. Several peristaltic locomotion modes are modeled, tested, and compared on the basis of speed. Utilizing additional NiTi coils placed longitudinally, steering capabilities are incorporated. Proprioceptive potentiometers sense segment contraction, which enables the development of closed-loop controllers. Several appropriate control algorithms are designed and experimentally compared based on locomotion speed and energy consumption. The entire mechanical structure is made of flexible mesh materials and can withstand significant external impact during operation. This approach allows for a completely soft robotic platform by employing a flexible control unit and energy sources.
Efficacy of Sofosbuvir, Velpatasvir, and GS-9857 in Patients With Hepatitis C Virus Genotype 2, 3, 4, or 6 Infections in an Open-Label, Phase 2 Trial.
BACKGROUND & AIMS Studies are needed to determine the optimal regimen for patients with chronic hepatitis C virus (HCV) genotype 2, 3, 4, or 6 infections whose prior course of antiviral therapy has failed, and the feasibility of shortening treatment duration. We performed a phase 2 study to determine the efficacy and safety of the combination of the nucleotide polymerase inhibitor sofosbuvir, the NS5A inhibitor velpatasvir, and the NS3/4A protease inhibitor GS-9857 in these patients. METHODS We performed a multicenter, open-label trial at 32 sites in the United States and 2 sites in New Zealand from March 3, 2015 to April 27, 2015. Our study included 128 treatment-naïve and treatment-experienced patients (1 with HCV genotype 1b; 33 with HCV genotype 2; 74 with HCV genotype 3; 17 with genotype HCV 4; and 3 with HCV genotype 6), with or without compensated cirrhosis. All patients received sofosbuvir-velpatasvir (400 mg/100 mg fixed-dose combination tablet) and GS-9857 (100 mg) once daily for 6-12 weeks. The primary end point was sustained virologic response 12 weeks after treatment (SVR12). RESULTS After 6 weeks of treatment, SVR12s were achieved by 88% of treatment-naïve patients without cirrhosis (29 of 33; 95% confidence interval, 72%-97%). After 8 weeks of treatment, SVR12s were achieved by 93% of treatment-naïve patients with cirrhosis (28 of 30; 95% CI, 78%-99%). After 12 weeks of treatment, SVR12s were achieved by all treatment-experienced patients without cirrhosis (36 of 36; 95% CI, 90%-100%) and 97% of treatment-experienced patients with cirrhosis (28 of 29; 95% CI, 82%-100%). The most common adverse events were headache, diarrhea, fatigue, and nausea. Three patients (1%) discontinued treatment due to adverse events. CONCLUSIONS In a phase 2 open-label trial, we found sofosbuvir-velpatasvir plus GS-9857 (8 weeks in treatment-naïve patients or 12 weeks in treatment-experienced patients) to be safe and effective for patients with HCV genotype 2, 3, 4, or 6 infections, with or without compensated cirrhosis. ClinicalTrials.gov ID: NCT02378961.
From Discourse to Logic - Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory
Dordrecht: Kluwer Academic Publishers (Studies in Linguistics and Philosophy, edited by Gennaro Chierchia, Pauline Jacobson, and Francis J. Pelletier, volume 42), 1993, viii + 713 pp. Hardbound in two volumes, Part 1 ISBN 0-7923-1027-6, no price listed; Part 2, ISBN 0-7923-2402-1, no price listed; two-volume set, ISBN 0-7923-2403-X, $172.00, £112.00, Dfl 280. Paperbound in one volume, ISBN 0-7923-1028-4, no price listed
Electricity generation using membrane and salt bridge microbial fuel cells.
Microbial fuel cells (MFCs) can be used to directly generate electricity from the oxidation of dissolved organic matter, but optimization of MFCs will require that we know more about the factors that can increase power output, such as the type of proton exchange system, which can affect the system's internal resistance. Power output in an MFC containing a proton exchange membrane was compared using a pure culture (Geobacter metallireducens) or a mixed culture (wastewater inoculum). Power output with either inoculum was essentially the same, with 40 ± 1 mW/m2 for G. metallireducens and 38 ± 1 mW/m2 for the wastewater inoculum. We also examined power output in an MFC with a salt bridge instead of a membrane system. Power output by the salt bridge MFC (inoculated with G. metallireducens) was 2.2 mW/m2. The low power output was directly attributed to the higher internal resistance of the salt bridge system (19920 ± 50 Ohms) compared to that of the membrane system (1286 ± 1 Ohms), based on measurements using impedance spectroscopy. In both systems, it was observed that oxygen diffusion from the cathode chamber into the anode chamber was a factor in power generation. Nitrogen gas sparging, L-cysteine (a chemical oxygen scavenger), or suspended cells (a biological oxygen scavenger) were used to limit the effects of gas diffusion into the anode chamber. Nitrogen gas sparging, for example, increased overall Coulombic efficiency (47% or 55%) compared to that obtained without gas sparging (19%). These results show that increasing power densities in MFCs will require reducing the internal resistance of the system, and that methods are needed to control the dissolved oxygen flux into the anode chamber in order to increase overall Coulombic efficiency.
Long-term efficacy of routine access to antiretroviral-resistance testing in HIV type 1-infected patients: results of the clinical efficacy of resistance testing trial.
The long-term efficacy of making resistance testing routinely available to clinicians has not been established. We conducted a clinical trial at 6 US military hospitals in which volunteers infected with human immunodeficiency virus type 1 were randomized to have routine access to phenotype resistance testing (PT arm), access to genotype resistance testing (GT arm), or no access to either test (VB arm). The primary outcome measure was time to persistent treatment failure despite change(s) in antiretroviral therapy (ART) regimen. Overall, routine access to resistance testing did not significantly increase the time to end point. Time to end point was significantly prolonged in the PT arm for subjects with a history of treatment with ≥4 different ART regimens or a history of treatment with nonnucleoside reverse-transcriptase inhibitors before the study, compared with that in the VB arm. These results suggest that routine access to resistance testing can improve long-term virologic outcomes in HIV-infected patients who are treatment experienced but may not impact outcome in patients who are naive to or have had limited experience with ART.
Subspace Clustering with Priors via Sparse Quadratically Constrained Quadratic Programming
This paper considers the problem of recovering a subspace arrangement from noisy samples, potentially corrupted with outliers. Our main result shows that this problem can be formulated as a convex semi-definite optimization problem subject to an additional rank constraint that involves only a very small number of variables. This is established by first reducing the problem to a quadratically constrained quadratic problem and then using its special structure to find conditions guaranteeing that a suitably built convex relaxation is indeed exact. When combined with the standard nuclear norm relaxation for rank, the results above lead to computationally efficient algorithms with optimality guarantees. A salient feature of the proposed approach is its ability to incorporate existing a priori information about the noise, co-occurrences, and percentage of outliers. These results are illustrated with several examples.
Hidden attribute-based signatures without anonymity revocation
We propose a new notion called hidden attribute-based signature, which is inspired by recent developments in attribute-based cryptosystems. With this technique, users are able to sign messages with any subset of their attributes issued by an attribute center. In this notion, a signature attests not to the identity of the individual who endorsed a message, but instead to a claim regarding the attributes the underlying signer possesses. Users cannot forge signatures with attributes which they have not been issued. Furthermore, the signer remains anonymous, without the fear of revocation, among all users with the attributes purported in the signature. After formalizing the security model, we propose two constructions of hidden attribute-based signatures from pairings. The first construction supports a large universe of attributes and its security proof relies on the random oracle assumption, which can be removed in the second construction. Both constructions are proven secure under the standard computational Diffie–Hellman assumption.
Dynamical Systems : Some Computational Problems
We present several topics involving the computation of dynamical systems. The emphasis is on work in progress and the presentation is informal – there are many technical details which are not fully discussed. The topics are chosen to demonstrate the various interactions between numerical computation and mathematical theory in the area of dynamical systems. We present an algorithm for the computation of stable manifolds of equilibrium points, describe the computation of Hopf bifurcations for equilibria in parametrized families of vector fields, survey the results of studies of codimension two global bifurcations, discuss a numerical analysis of the Hodgkin and Huxley equations, and describe some of the effects of symmetry on local bifurcation.
Early onset of treatment effects with oral risperidone
BACKGROUND The dogma of a delayed onset of antipsychotic treatment effects has been maintained over the past decades. However, recent studies have challenged this concept. We therefore performed an analysis of the onset of antipsychotic treatment effects in a sample of acutely decompensated patients with schizophrenia. METHODS In this observational study, 48 inpatients with acutely decompensated schizophrenia were offered antipsychotic treatment with oral risperidone. PANSS ratings were obtained on day 0, day 1, day 3, day 7, and day 14. RESULTS Significant effects of treatment were already present on day 1 and continued throughout the study. The PANSS positive subscore and the PANSS total score improved significantly more than the PANSS negative subscore. CONCLUSION Our results are consistent with the growing number of studies suggesting an early onset of antipsychotic treatment effects. However, non-pharmacological effects of treatment also need to be taken into consideration.
Specification and design of a prototype filter for filter bank based multicarrier transmission
The specifications of filter banks for multicarrier transmission systems with a large number of subchannels are discussed, with application to xDSL and power line communication in mind. The near-PR modulated approach is considered and the importance, for the system, of the prototype filter delay is stressed. An existing design technique known to be particularly relevant to the context is revisited from a frequency sampling perspective. The performance results in terms of subchannel noise floor and delay are given for several filter lengths and an experimental validation is provided. Finally, an improvement to the design technique is proposed, which brings a gain of 3.3 dB in subchannel interference power level.
Toward a definition of competency-based education in medicine: a systematic review of published definitions.
BACKGROUND Competency-based education (CBE) has emerged in the health professions to address criticisms of contemporary approaches to training. However, the literature has no clear, widely accepted definition of CBE that furthers innovation, debate, and scholarship in this area. AIM To systematically review CBE-related literature in order to identify key terms and constructs to inform the development of a useful working definition of CBE for medical education. METHODS We searched electronic databases and supplemented searches by using authors' files, checking reference lists, contacting relevant organizations and conducting Internet searches. Screening was carried out by duplicate assessment, and disagreements were resolved by consensus. We included any English- or French-language sources that defined competency-based education. Data were analyzed qualitatively and summarized descriptively. RESULTS We identified 15,956 records for initial relevancy screening by title and abstract. The full text of 1,826 records was then retrieved and assessed further for relevance. A total of 173 records were analyzed. We identified 4 major themes (organizing framework, rationale, contrast with time, and implementing CBE) and 6 sub-themes (outcomes defined, curriculum of competencies, demonstrable, assessment, learner-centred and societal needs). From these themes, a new definition of CBE was synthesized. CONCLUSION This is the first comprehensive systematic review of the medical education literature related to CBE definitions. The themes and definition identified should be considered by educators to advance the field.
Reliable and Privacy-Preserving Selective Data Aggregation for Fog-Based IoT
The Internet of Things (IoT) is reshaping our daily lives by bridging the gaps between the physical and digital worlds. To enable ubiquitous sensing, seamless connection, and real-time processing for IoT applications, fog computing is considered a key component in a heterogeneous IoT architecture, which deploys storage and computing resources to network edges. However, the fog-based IoT architecture can lead to various security and privacy risks, such as compromised fog nodes that may impede the development of IoT by attacking the data collection and gathering phase. In this paper, we propose a novel privacy-preserving and reliable scheme for the fog-based IoT to address the data privacy and reliability challenges of the selective data aggregation service. Specifically, homomorphic proxy re-encryption and proxy re-authenticator techniques are respectively utilized to deal with the data privacy and reliability issues of the service, which supports data aggregation over selective data types for any type-driven applications. We define a new threat model to formalize the non-collusive and collusive attacks of compromised fog nodes, and it is demonstrated that the proposed scheme can prevent both non-collusive and collusive attacks in our model. In addition, performance evaluations show the efficiency of the scheme in terms of computational costs and communication overheads.
Risk factors for the presence of varices in cirrhotic patients without a history of variceal hemorrhage.
BACKGROUND Current medical management dictates that all cirrhotic patients without a history of variceal hemorrhage undergo endoscopic screening to detect large varices. However, referral for endoscopic screening of only patients at highest risk for varices may be most cost-effective. The aim of this case-control study was to identify clinical, laboratory, and radiologic findings that predict the presence of varices in patients with cirrhosis. METHODS Three hundred patients without a history of variceal hemorrhage underwent upper endoscopy as part of an evaluation before liver transplantation. Cases defined as the presence of any varices and cases defined as the presence of large varices were used for examining the risks associated with finding varices on upper endoscopy. Logistic regression was performed to evaluate associations between the presence of varices and patient characteristics. RESULTS Platelet count and Child-Pugh class were independent risk factors for the presence of any varices and the presence of large varices. For the presence of any varices, a platelet count of 90 x 10(3)/microL or less (odds ratio [OR], 2.4; 95% confidence interval [CI], 1.4-4.0) and advanced Child-Pugh class (OR, 3.0; 95% CI, 1.6-5.6) were independent risk factors. For large varices, a platelet count of 80 x 10(3)/microL or less (OR, 2.3; 95% CI, 1.4-3.9) and advanced Child-Pugh class (OR, 2.8; 95% CI, 1.3-5.8) were independent risk factors associated with varices. CONCLUSIONS Low platelet count and advanced Child-Pugh class were associated with the presence of any varices and with large varices. These factors allow identification of a subgroup of cirrhotic patients who would benefit most from referral for endoscopic screening for varices.
A Framework for Structural Risk Minimisation
The paper introduces a framework for studying structural risk minimisation. The model views structural risk minimisation in a PAC context. It then considers the more general case when the hierarchy of classes is chosen in response to the data. This theoretically explains the impressive performance of the maximal margin hyperplane algorithm of Vapnik. It may also provide a general technique for exploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets.
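For context, structural risk minimisation over a nested hierarchy of hypothesis classes $H_1 \subseteq H_2 \subseteq \cdots$ is usually stated through a VC-style bound of the following textbook form (not a formula quoted from this paper): with probability at least $1-\delta$, every $h \in H_i$ with VC dimension $d_i$ satisfies

$$ R(h) \;\le\; \hat{R}_n(h) + \sqrt{\frac{d_i\left(\ln\frac{2n}{d_i} + 1\right) + \ln\frac{4}{\delta}}{n}}, $$

and SRM selects the class index $i$ and hypothesis $h$ that jointly minimise the right-hand side. Choosing the hierarchy in response to the data, as considered here, requires adapting such bounds accordingly.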
E-teaching and learning preferences of dental and dental hygiene students.
This project was conducted to identify student preferences for e-teaching and learning. An online Student Preferences for Learning with E-Technology Survey was developed to assess computer experiences, the use and effectiveness of e-resources, preferences for various environments, need for standardization, and preferred modes of communication. The survey was administered in May 2008 to all dental and dental hygiene students at Baylor College of Dentistry. There was an 85 percent response rate (n=366/432). About two-thirds of the students found college e-resources effective for learning. They preferred printed text over digital (64 percent) and wanted e-materials to supplement but not replace lectures (74 percent). They reported e-materials would "extensively" enhance learning, such as e-lectures (59 percent), clinical videos (54 percent), and podcasts (45 percent). They reported the need for a central location for e-resources (98 percent) and an e-syllabus for every course (86 percent) in a standard format (77 percent). One difficulty reported was accessing e-materials from external locations (33 percent). Students commented on the need for faculty training and standardization of grade posting. A qualitative theme was that e-resources should not replace interactions with faculty. Some infrastructure problems have been corrected. Planning has begun for standardization and expansion of e-resources. These improvements should enhance learning and increase the options for individualizing instruction, study strategies, and course remediation.
Multispectral Deep Neural Networks for Pedestrian Detection
Multispectral pedestrian detection is essential for around-the-clock applications, e.g., surveillance and autonomous driving. We deeply analyze Faster R-CNN for the multispectral pedestrian detection task and then model it as a convolutional network (ConvNet) fusion problem. Further, we discover that ConvNet-based pedestrian detectors trained on color or thermal images separately provide complementary information for discriminating human instances. Thus, there is large potential to improve pedestrian detection by using color and thermal images in DNNs simultaneously. We carefully design four ConvNet fusion architectures that integrate two-branch ConvNets at different DNN stages, all of which yield better performance compared with the baseline detector. Our experimental results on the KAIST pedestrian benchmark show that the Halfway Fusion model, which performs fusion on the middle-level convolutional features, outperforms the baseline method by 11% and yields a miss rate 3.5% lower than the other proposed architectures.
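A toy PyTorch sketch of the halfway-fusion idea, concatenating middle-level feature maps from a colour branch and a thermal branch before a shared head; the layer sizes, channel counts, and single-channel thermal input are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class HalfwayFusion(nn.Module):
    """Two-branch backbone: colour and thermal streams are run separately and
    their middle-level feature maps are concatenated before a shared head."""
    def __init__(self, n_classes=2):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.color = stream(3)
        self.thermal = stream(1)               # thermal assumed single-channel
        self.head = nn.Sequential(
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_classes))

    def forward(self, color, thermal):
        fused = torch.cat([self.color(color), self.thermal(thermal)], dim=1)
        return self.head(fused)

# logits = HalfwayFusion()(torch.rand(2, 3, 128, 64), torch.rand(2, 1, 128, 64))
```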
Large-scale malware classification using random projections and neural networks
Automatically generated malware is a significant problem for computer users. Analysts are able to manually investigate a small number of unknown files, but the best large-scale defense for detecting malware is automated malware classification. Malware classifiers often use sparse binary features, and the number of potential features can be on the order of tens or hundreds of millions. Feature selection reduces the number of features to a manageable number for training simpler algorithms such as logistic regression, but this number is still too large for more complex algorithms such as neural networks. To overcome this problem, we used random projections to further reduce the dimensionality of the original input space. Using this architecture, we train several very large-scale neural network systems with over 2.6 million labeled samples thereby achieving classification results with a two-class error rate of 0.49% for a single neural network and 0.42% for an ensemble of neural networks.
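A small sketch of the projection step using scikit-learn's sparse random projection, with a logistic-regression classifier standing in for the large neural network and synthetic data standing in for the real malware corpus.

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection
from sklearn.linear_model import LogisticRegression

# Placeholder sparse binary malware features; the real feature space is in the
# tens of millions of dimensions and the corpus has millions of samples.
rng = np.random.default_rng(0)
X = (rng.random((2000, 20000)) < 0.001).astype(np.float32)
y = rng.integers(0, 2, 2000)            # 0 = benign, 1 = malware (synthetic)

proj = SparseRandomProjection(n_components=1000, dense_output=True, random_state=0)
X_low = proj.fit_transform(X)           # project to a dimensionality a network can handle

clf = LogisticRegression(max_iter=1000).fit(X_low, y)
print("train accuracy:", clf.score(X_low, y))
```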
Experimental Internet Stream Protocol: Version 2 (ST-II)
Status of this Memo This memo defines a revised version of the Internet Stream Protocol, originally defined in IEN-119 [8], based on results from experiments with the original version, and subsequent requests, discussion, and suggestions for improvements. This is a Limited-Use Experimental Protocol. Please refer to the current edition of the "IAB Official Protocol Standards" for the standardization state and status of this protocol. Distribution of this memo is unlimited. 1. Abstract This memo defines the Internet Stream Protocol, Version 2 (ST-II), an IP-layer protocol that provides end-to-end guaranteed service across an internet. This specification obsoletes IEN 119 "ST-A Proposed Internet Stream Protocol" written by Jim Forgie in 1979, the previous specification of ST. ST-II is not compatible with Version 1 of the protocol, but maintains much of the architecture and philosophy of that version. It is intended to fill in some of the areas left unaddressed, to make it easier to implement, and to support a wider range of applications.
XFS: The Big Storage File System for Linux
XFS is a file system that was designed from day one for computer systems with large numbers of CPUs and large disk arrays. It focuses on supporting large files and good streaming I/O performance. It also has some interesting administrative features not supported by other Linux file systems. This article gives some background information on why XFS was created and how it differs from the familiar Linux file systems. You may discover that XFS is just what your project needs instead of making do with the default Linux file system.
A Bus-Based SoC Architecture for Flexible Module Placement on Reconfigurable FPGAs
This paper proposes an FPGA-based System-on-Chip (SoC) architecture with support for dynamic runtime reconfiguration. The SoC is divided into two parts, the static embedded CPU sub-system and the dynamically reconfigurable part. An additional bus system connects the embedded CPU sub-system with modules within the dynamic area, offering a flexible way to communicate among all SoC components. This makes it possible to implement a reconfigurable design with support for free module placement. An enhanced memory access method is included for high-speed access to an external memory. The dynamic part includes a streaming technology which implements a direct connection between reconfigurable modules. The paper describes the architecture and shows the advantages in a smart camera case study.
Reduced Size Planar Dual-Polarized Log-Periodic Antenna for Bidirectional High Power Transmit and Receive Applications
An improved method of more than halving the turn on frequency of a 4-arm log-periodic (LP) antenna for dual-circularly polarized (CP) bidirectional use is demonstrated. The ground plane currents of a slot-LP are manipulated by varying “ground plane” shape to form a ring-turnstile-slot LP aperture and thus extend the overall antenna bandwidth. The proposed concept is used to design and fabricate an antenna that is 2.3 times smaller than the conventional 4-arm LP and works from 500 MHz to 3 GHz. Microstrip impedance transformer/balun is used to feed the LP thus allowing for bidirectional use and much simpler beamforming network for dual-polarized operation. VSWR and far-field behavior of the antenna are characterized and its high-power thermal stability is demonstrated through a high-power test. The advantages over the previously designed ring-turnstile-LP configuration are highlighted and the design principles undertaken to make the antenna high-power capable are illustrated. The prototyped antenna is meant not only to illustrate the proposed concept but also as a viable high-power bidirectional antenna.
A Critical Study of Selected Classification Algorithms for Liver Disease Diagnosis
The number of patients with liver disease has been continuously increasing because of excessive consumption of alcohol, inhalation of harmful gases, and intake of contaminated food, pickles, and drugs. Automatic classification tools may reduce the burden on doctors. This paper evaluates selected classification algorithms for the classification of some liver patient datasets. The classification algorithms considered here are the Naïve Bayes classifier, C4.5, the back-propagation neural network algorithm, and support vector machines. These algorithms are evaluated based on four criteria: accuracy, precision, sensitivity, and specificity.
EMailAnalyzer : an e-mail mining plug-in for the ProM framework
Increasingly, information systems log historic information in a systematic way. Workflow management systems, but also ERP, CRM, SCM, and B2B systems, often provide a so-called "event log", i.e., a log recording the execution of activities. Thus far, process mining has focused on such structured event logs, resulting in powerful analysis techniques and tools for discovering process, control, data, organizational, and social structures from event logs. Unfortunately, many work processes are not supported by systems providing structured logs. Instead, very basic tools such as text editors, spreadsheets, and e-mail are used. This report explores the application of process mining to e-mail, i.e., unstructured or semi-structured e-mail messages are converted into event logs suitable for the application of process mining tools. This report presents the tool EMailAnalyzer, which analyzes and transforms e-mail messages in MS Outlook to a format that can be used by our process mining tools. The main innovative aspect of this work is that our analysis is not restricted to the social network; the main goal is to discover interaction patterns and processes.
Automating Image Morphing Using Structural Similarity on a Halfway Domain
The main challenge in achieving good image morphs is to create a map that aligns corresponding image elements. Our aim is to help automate this often tedious task. We compute the map by optimizing the compatibility of corresponding warped image neighborhoods using an adaptation of structural similarity. The optimization is regularized by a thin-plate spline and may be guided by a few user-drawn points. We parameterize the map over a halfway domain and show that this representation offers many benefits. The map is able to treat the image pair symmetrically, model simple occlusions continuously, span partially overlapping images, and define extrapolated correspondences. Moreover, it enables direct evaluation of the morph in a pixel shader without mesh rasterization. We improve the morphs by optimizing quadratic motion paths and by seamlessly extending content beyond the image boundaries. We parallelize the algorithm on a GPU to achieve a responsive interface and demonstrate challenging morphs obtained with little effort.
Trajectory generation for continuous leg forces during double support and heel-to-toe shift based on divergent component of motion
This paper works with the concept of Divergent Component of Motion (DCM), also called `(instantaneous) Capture Point'. We present two real-time DCM trajectory generators for uneven (three-dimensional) ground surfaces, which lead to continuous leg (and corresponding ground reaction) force profiles and facilitate the use of toe-off motion during double support. Thus, the resulting DCM trajectories are well suited for real-world robots and allow for increased step length and step height. The performance of the proposed methods was tested in numerous simulations and experiments on IHMC's Atlas robot and DLR's humanoid robot TORO.
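For readers unfamiliar with the concept, the DCM is commonly defined from the center-of-mass (CoM) state as (standard notation from the DCM literature, not equations quoted from this paper)

$$\xi = x + \frac{\dot{x}}{\omega_0}, \qquad \omega_0 = \sqrt{g/\Delta z},$$

so that the CoM converges toward the DCM while the DCM itself follows the unstable first-order dynamics

$$\dot{\xi} = \omega_0\,(\xi - r_{\mathrm{vrp}}),$$

where the virtual repellent point $r_{\mathrm{vrp}}$ encodes the leg force. Trajectory generation then amounts to planning a continuous $r_{\mathrm{vrp}}(t)$ (and hence continuous leg forces) such that the resulting DCM trajectory stays bounded over the step sequence.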
Prevention of calcification in bioprosthetic heart valves: challenges and perspectives.
Surgical replacement with artificial devices has revolutionised the care of patients with severe valvular diseases. Mechanical valves are very durable, but require long-term anticoagulation. Bioprosthetic heart valves (BHVs), devices manufactured from glutaraldehyde-fixed animal tissues, do not need long-term anticoagulation, but their long-term durability is limited to 15 - 20 years, mainly because of mechanical failure and tissue calcification. Although mechanisms of BHV calcification are not fully understood, major determinants are glutaraldehyde fixation, presence of devitalised cells and alteration of specific extracellular matrix components. Treatments targeted at the prevention of calcification include those that target neutralisation of the effects of glutaraldehyde, removal of cells, and modifications of matrix components. Several existing calcification-prevention treatments are in clinical use at present, and there are excellent mid-term clinical follow-up reports available. The purpose of this review is to appraise basic knowledge acquired in the field of prevention of BHV calcification, and to provide directions for future research and development.
Survey on routing in data centers: insights and future directions
Recently, a series of data center network architectures have been proposed. The goal of these works is to interconnect a large number of servers with significant bandwidth requirements. Coupled with these new DCN structures, routing protocols play an important role in exploring the network capacities that can be potentially delivered by the topologies. This article conducts a survey on the current state of the art of DCN routing techniques. The article focuses on the insights behind these routing schemes and also points out the open research issues hoping to spark new interests and developments in this field.
Soft-tissue and cortical-bone thickness at orthodontic implant sites.
INTRODUCTION To obtain sufficient stability of implants, the thickness of the soft tissue and the cortical bone in the placement site must be considered. However, the literature contains few anatomical studies of orthodontic implants. METHODS To measure soft-tissue and cortical-bone thicknesses, maxillae from 23 Korean cadavers were decalcified, and buccopalatal cross-sectional specimens were obtained. These specimens were made at 3 maxillary midpalatal suture areas: the interdental area between the first and second premolars (group 1), the interdental area between the second premolar and the first molar (group 2), and the interdental area between the first and second molars (group 3). RESULTS In all groups, buccal soft tissues were thickest closest to and farthest from the cementoenamel junction (CEJ) and thinnest in the middle. Palatal soft-tissue thickness increased gradually from the CEJ toward the apical region in all groups. Buccal cortical-bone was thickest closest to and farthest from the CEJ and thinnest in the middle in groups 1 and 2. Palatal cortical-bone thickness was greatest 6 mm apical to the CEJ in groups 1 and 3, and 2 mm apical to the CEJ in group 2. Along the midpalatal suture, palatal mucosa remained uniformly 1 mm thick posterior to the incisive papilla. CONCLUSIONS Surgical placement of miniscrew implants for orthodontic anchorage in the maxillary molar region requires consideration of the placement site and angle based on anatomical characteristics.
Ultrafiltration in decompensated heart failure with cardiorenal syndrome.
BACKGROUND Ultrafiltration is an alternative strategy to diuretic therapy for the treatment of patients with acute decompensated heart failure. Little is known about the efficacy and safety of ultrafiltration in patients with acute decompensated heart failure complicated by persistent congestion and worsened renal function. METHODS We randomly assigned a total of 188 patients with acute decompensated heart failure, worsened renal function, and persistent congestion to a strategy of stepped pharmacologic therapy (94 patients) or ultrafiltration (94 patients). The primary end point was the bivariate change from baseline in the serum creatinine level and body weight, as assessed 96 hours after random assignment. Patients were followed for 60 days. RESULTS Ultrafiltration was inferior to pharmacologic therapy with respect to the bivariate end point of the change in the serum creatinine level and body weight 96 hours after enrollment (P=0.003), owing primarily to an increase in the creatinine level in the ultrafiltration group. At 96 hours, the mean change in the creatinine level was -0.04±0.53 mg per deciliter (-3.5±46.9 μmol per liter) in the pharmacologic-therapy group, as compared with +0.23±0.70 mg per deciliter (20.3±61.9 μmol per liter) in the ultrafiltration group (P=0.003). There was no significant difference in weight loss 96 hours after enrollment between patients in the pharmacologic-therapy group and those in the ultrafiltration group (a loss of 5.5±5.1 kg [12.1±11.3 lb] and 5.7±3.9 kg [12.6±8.5 lb], respectively; P=0.58). A higher percentage of patients in the ultrafiltration group than in the pharmacologic-therapy group had a serious adverse event (72% vs. 57%, P=0.03). CONCLUSIONS In a randomized trial involving patients hospitalized for acute decompensated heart failure, worsened renal function, and persistent congestion, the use of a stepped pharmacologic-therapy algorithm was superior to a strategy of ultrafiltration for the preservation of renal function at 96 hours, with a similar amount of weight loss with the two approaches. Ultrafiltration was associated with a higher rate of adverse events. (Funded by the National Heart, Lung, and Blood Institute; ClinicalTrials.gov number, NCT00608491.).
Spatial Correlation and Mobility-Aware Traffic Modeling for Wireless Sensor Networks
Recently, there has been a great deal of research on using mobility in wireless sensor networks (WSNs) to facilitate surveillance and reconnaissance in a wide deployment area. Besides providing an extended sensing coverage, node mobility along with spatial correlation introduces new network dynamics, which could lead to the traffic patterns fundamentally different from the traditional (Markovian) models. In this paper, a novel traffic modeling scheme for capturing these dynamics is proposed that takes into account the statistical patterns of node mobility and spatial correlation. The contributions made in this paper are twofold. First, it is shown that the joint effects of mobility and spatial correlation can lead to bursty traffic. More specifically, a high mobility variance and small spatial correlation can give rise to pseudo-long-range-dependent (LRD) traffic (high bursty traffic), whose autocorrelation function decays slowly and hyperbolically up to a certain cutoff time lag. Second, due to the ad hoc nature of WSNs, certain relay nodes may have several routes passing through them, necessitating local traffic aggregations. At these relay nodes, our model predicts that the aggregated traffic also exhibits the bursty behavior characterized by a scaled power-law decayed autocovariance function. According to these findings, a novel traffic shaping protocol using movement coordination is proposed to facilitate effective and efficient resource provisioning strategy. Finally, simulation results reveal a close agreement between the traffic pattern predicted by our theoretical model and the simulated transmissions from multiple independent sources, under specific bounds of the observation intervals.
Dynamic modeling and LabVIEW simulation of a Photovoltaic Thermal Collector
A Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energy simultaneously. This paper gathers all PVT sub-models in order to form a single dynamic model that reveals the interactions among PVT parameters. As the PVT is a multi-input/multi-output system, a state-space model based on energy balance equations is developed in order to analyze and assess the behavior and correlations of the parameters of the PVT constituents. The model simulation is performed using LabVIEW software. The simulation shows the impact of fluid flow rate variation on the collector efficiencies (thermal and electrical).
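In general terms, such an energy-balance state-space model takes the standard linear(ised) form (the specific state, input, and output vectors below are illustrative, not the ones defined in the paper):

$$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t),$$

where the state $x$ could collect the temperatures of the PV cells, absorber, and heat-transfer fluid, the input $u$ the irradiance, ambient temperature, and fluid flow rate, and the output $y$ the electrical and thermal powers from which the two efficiencies are computed.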
A new multilevel PWM method: a theoretical analysis
A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennett's method. The improvements in the harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic content were achieved due to the increased number of levels.
Automatic text extraction and character segmentation using maximally stable extremal regions
Text detection and segmentation is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text extraction and character segmentation algorithm using maximally stable extremal regions as basic letter candidates. These regions are then subjected to thresholding, and thereafter various connected components are determined to identify separate characters. The algorithm is tested on a set of JPEG, PNG, and BMP images over four different character sets: English, Russian, Hindi, and Urdu. The algorithm gives good results for the English and Russian character sets; however, character segmentation for the Urdu and Hindi languages is less accurate. The algorithm is simple, efficient, involves none of the overhead required for training, and gives good results even for low-quality images. The paper also discusses various challenges in text extraction and segmentation for multilingual inputs.
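A minimal OpenCV sketch of the pipeline described above; the file name and the post-processing details are illustrative, and the paper's exact thresholding and grouping rules are not reproduced.

```python
import cv2
import numpy as np

img = cv2.imread("scene_text.jpg")              # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                        # maximally stable extremal regions
regions, _ = mser.detectRegions(gray)           # each region is an array of (x, y) points

mask = np.zeros_like(gray)
for pts in regions:                             # paint candidate letter regions
    mask[pts[:, 1], pts[:, 0]] = 255

# Threshold the candidate mask, then split it into connected components so
# that each component can be treated as a separate character candidate.
_, binary = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
n_labels, labels = cv2.connectedComponents(binary)
print("character candidates found:", n_labels - 1)   # label 0 is the background
```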
Learning Safe Policies with Expert Guidance
We propose a framework for ensuring safe behavior of a reinforcement learning agent when the reward function may be difficult to specify. In order to do this, we rely on the existence of demonstrations from expert policies, and we provide a theoretical framework for the agent to optimize in the space of rewards consistent with its existing knowledge. We propose two methods to solve the resulting optimization: an exact ellipsoid-based method and a method in the spirit of the "follow-the-perturbed-leader" algorithm. Our experiments demonstrate the behavior of our algorithm in both discrete and continuous problems. The trained agent safely avoids states with potential negative effects while imitating the behavior of the expert in the other states.
Manson's point: A facial landmark to identify the facial artery.
INTRODUCTION The anatomy of the facial artery, its tortuosity, and branch patterns are well documented. To date, a reliable method of identifying the facial artery, based on surface landmarks, has not been described. The purpose of this study is to characterize the relationship of the facial artery with several facial topographic landmarks, and to identify a location where the facial artery could predictably be identified. METHODS Following institutional review board approval, 20 hemifacial dissections on 10 cadaveric heads were performed. Distances from the facial artery to the oral commissure, mandibular angle, lateral canthus, and Manson's point were measured. Distances were measured and confirmed clinically using Doppler examination in 20 hemifaces of 10 healthy volunteers. RESULTS Manson's point identifies the facial artery with 100% accuracy and precision, within a 3 mm radius in both cadaveric specimens and living human subjects. Cadaveric measurements demonstrated that the facial artery is located 19 ± 5.5 mm from the oral commissure, 31 ± 6.8 mm from the mandibular angle, and 92 ± 8.0 mm from the lateral canthus. Doppler examination on healthy volunteers (5 male, 5 female) demonstrated measurements of 18 ± 4.0 mm, 50 ± 6.4 mm, and 79 ± 8.2 mm, respectively. CONCLUSIONS The identification of the facial artery is critical for the craniofacial surgeon in order to avoid inadvertent injury, to plan local flaps, and to prepare a recipient vessel for free tissue microvascular reconstruction. Manson's point can aid the surgeon in consistently identifying the facial artery.
Intervenable factors associated with suicide risk in transgender persons: a respondent driven sampling study in Ontario, Canada
BACKGROUND Across Europe, Canada, and the United States, 22-43% of transgender (trans) people report a history of suicide attempts. We aimed to identify intervenable factors (related to social inclusion, transphobia, or sex/gender transition) associated with reduced risk of past-year suicide ideation or attempt, and to quantify the potential population health impact. METHODS The Trans PULSE respondent-driven sampling (RDS) survey collected data from trans people age 16+ in Ontario, Canada, including 380 who reported on suicide outcomes. Descriptive statistics and multivariable logistic regression models were weighted using RDS II methods. Counterfactual risk ratios and population attributable risks were estimated using model-standardized risks. RESULTS Among trans Ontarians, 35.1% (95% CI: 27.6, 42.5) seriously considered, and 11.2% (95% CI: 6.0, 16.4) attempted, suicide in the past year. Social support, reduced transphobia, and having any personal identification documents changed to an appropriate sex designation were associated with large relative and absolute reductions in suicide risk, as was completing a medical transition through hormones and/or surgeries (when needed). Parental support for gender identity was associated with reduced ideation. Lower self-reported transphobia (10th versus 90th percentile) was associated with a 66% reduction in ideation (RR = 0.34, 95% CI: 0.17, 0.67), and an additional 76% reduction in attempts among those with ideation (RR = 0.24; 95% CI: 0.07, 0.82). This corresponds to potential prevention of 160 ideations per 1,000 trans persons, and 200 attempts per 1,000 with ideation, based on a hypothetical reduction of transphobia from current levels to the 10th percentile. CONCLUSIONS Large effect sizes were observed for this controlled analysis of intervenable factors, suggesting that interventions to increase social inclusion and access to medical transition, and to reduce transphobia, have the potential to contribute to substantial reductions in the extremely high prevalences of suicide ideation and attempts within trans populations. Such interventions at the population level may require policy change.
Secure and light IoT protocol (SLIP) for anti-hacking
Realizing Internet of Things (IoT) services requires a variety of elemental technologies: sensors and devices, networks, platforms (hardware platforms and open software platforms, such as OS-specific platforms), web services, data analysis and prediction, big data processing, and security and privacy protection technology. Each elemental technology provides a specific function, and these technologies are integrated with one another. However, when several technologies are integrated, problems can arise in integrating the security technologies that existed separately for each of them. Even if the individual technologies provide the basic security features required for Internet services (CIA: confidentiality, integrity, and authentication or authorization), those security mechanisms are not connected to each other. This paper therefore reviews the relevant security technologies and proposes a lightweight routing protocol that is indispensable for realizing secure IoT services.
Large Margin Neural Language Models
Neural language models (NLMs) are generative and model the distribution of grammatical sentences. Trained on huge corpora, NLMs are pushing the limits of modeling accuracy. They have also been applied to supervised learning tasks that decode text, e.g., automatic speech recognition (ASR). By re-scoring the n-best list, an NLM can select grammatically more correct candidates from the list and significantly reduce the word/character error rate. However, the generative nature of an NLM may not guarantee a discrimination between “good” and “bad” (in a task-specific sense) sentences, resulting in suboptimal performance. This work proposes an approach to adapt a generative NLM into a discriminative one. Different from the commonly used maximum-likelihood objective, the proposed method aims at enlarging the margin between “good” and “bad” sentences. It is trained end-to-end and can be widely applied to tasks that involve re-scoring of decoded text. Significant gains are observed in both ASR and statistical machine translation (SMT) tasks.
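A hedged illustration of the margin idea for n-best re-scoring: a hinge-style loss that rewards the model only when the task-preferred hypothesis outscores a competitor by a margin. The exact objective and the way "good"/"bad" pairs are formed in the paper may differ.

```python
import numpy as np

def rescoring_margin_loss(good_scores, bad_scores, margin=1.0):
    """Hinge-style large-margin objective for n-best re-scoring (illustrative).

    good_scores : LM scores of the reference / task-preferred hypotheses
    bad_scores  : LM scores of competing hypotheses from the n-best list
    The loss is zero once every 'good' hypothesis outscores its paired
    'bad' hypothesis by at least `margin`.
    """
    gap = np.asarray(good_scores) - np.asarray(bad_scores)
    return np.maximum(0.0, margin - gap).mean()

# Example: the first pair still violates the margin, the second already satisfies it.
print(rescoring_margin_loss([2.0, 5.0], [1.8, 3.0]))
```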
Diffeomorphic Atlas Estimation using Geodesic Shooting on Volumetric Images
In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. A direct perspective of this work is the computation of 3D+time atlases.
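As background, the Karcher (Fréchet) mean of images $I_1,\dots,I_N$ under a Riemannian metric $d$ is the minimiser of the sum of squared geodesic distances, and is typically found by a fixed-point iteration that averages log-maps at the current estimate (generic formulation, not the paper's specific algorithm):

$$\mu^{\star} = \arg\min_{\mu} \sum_{i=1}^{N} d(\mu, I_i)^2, \qquad \mu_{k+1} = \mathrm{Exp}_{\mu_k}\!\left(\frac{1}{N}\sum_{i=1}^{N} \mathrm{Log}_{\mu_k}(I_i)\right).$$

In the LDDMM setting, the Log-map of each image is obtained by registering the current template $\mu_k$ to $I_i$ (hence the multiple registrations per iteration), and the Exp-map is computed by geodesic shooting of the averaged initial momentum.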
The spectral sensitivity of the human short-wavelength sensitive cones derived from thresholds and color matches
We used two methods to estimate short-wave (S) cone spectral sensitivity. Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle-wavelengths, our measurements are consistent with the color matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as pi 3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.
Consistent feature attribution for tree ensembles
It is critical in many applications to understand what features are important for a model, and why individual predictions were made. For tree ensemble methods these questions are usually answered by attributing importance values to input features, either globally or for a single prediction. Here we show that current feature attribution methods are inconsistent, which means changing the model to rely more on a given feature can actually decrease the importance assigned to that feature. To address this problem we develop fast exact solutions for SHAP (SHapley Additive exPlanation) values, which were recently shown to be the unique additive feature attribution method based on conditional expectations that is both consistent and locally accurate. We integrate these improvements into the latest version of XGBoost, demonstrate the inconsistencies of current methods, and show how using SHAP values results in significantly improved supervised clustering performance. Feature importance values are a key part of understanding widely used models such as gradient boosting trees and random forests. We believe our work improves on the state-of-the-art in important ways, and so impacts any current user of tree ensemble methods.
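A short usage sketch with the shap and xgboost Python packages (synthetic data; exact APIs and return shapes can vary slightly between package versions).

```python
import numpy as np
import xgboost
import shap

# Synthetic stand-in data; in practice X is the real training matrix.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)        # fast exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)       # per-sample, per-feature attributions
print(np.abs(shap_values).mean(axis=0))      # a consistent global importance ranking
```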
[Effects of acute octreotide infusion on renal function in patients with cirrhosis and portal hypertension].
BACKGROUND Octreotide is used in the treatment of acute variceal bleeding, based on its inhibitory effects on post-prandial splanchnic hyperemia and its splanchnic venoconstriction. The consequences of these haemodynamic changes on the renal circulation are not well known in cirrhotic patients. AIM To evaluate the effects of acute octreotide administration on several parameters of renal function, including free water clearance, in patients with cirrhosis with or without ascites. PATIENTS AND METHODS Twenty cirrhotic patients, Child-Pugh A or B, with or without ascites, with esophageal varices, normal renal function and free of medications (vasoactive drugs or diuretics) were assigned to 2 different protocols. Protocol 1: 10 patients were randomized to receive octreotide or placebo as a bolus followed by a continuous infusion. Glomerular filtration rate (GFR) and renal plasma flow (RPF) were measured in basal conditions and during drug or placebo administration. Protocol 2: 10 additional patients were randomized in the same way, and free water clearance and urinary sodium excretion were measured in the basal period and during drug or placebo infusion. RESULTS After octreotide or placebo administration, no significant changes were observed in either GFR or RPF. Free water clearance decreased significantly during octreotide administration (3.12 ml/min +/- 1.04 SE vs 0.88 +/- 0.39, p < .03). In both protocols no changes in mean arterial pressure were observed. CONCLUSIONS Acute administration of octreotide to cirrhotic patients with portal hypertension, with or without ascites, did not produce any change in glomerular filtration rate or in estimated renal plasma flow. However, free water clearance decreased significantly. This effect, under chronic administration, could be clinically important and deserves further study.
[Diagnostic yield and safety of bronchoscopic cryotechnique in routine diagnostics for suspected lung cancer].
BACKGROUND Cryoprobes with flexible catheters are an important additional tool for endobronchial interventional therapy and histologic diagnosis. Previous studies compared the diagnostic effectiveness and complications of the cryoprobe to the forceps as a standard. However, routine endoscopic procedures require a combined use of different methods in order to achieve the highest diagnostic yield. We investigated the impact of the cryotechnique in comparison with combined diagnostic tools during routine diagnostics of malignant tumors. PATIENTS AND METHODS A consecutive series of patients undergoing routine diagnostics for lung cancer was included over a 30-month period (n = 469). The use of the cryotechnique, the complication rates and the diagnostic value were prospectively documented. The cryotechnique was used in addition to conventional techniques. RESULTS Histologic proof of tumor was delivered more frequently by the cryotechnique than by forceps biopsies alone (81.4 versus 59.9% in centrally located tumors and 66.2 versus 37.7% in peripheral lesions). However, when the other, non-cryo techniques were taken into account, the added value was reduced for central lesions (7.4%; p = 0.02) but remained high for peripheral findings (19.3%; p < 0.002). The frequency of complications seemed unchanged; however, severe bleeding did occur. CONCLUSION The cryotechnique has high diagnostic potential besides its therapeutic value, also in routine investigations. The changed complication profile of this technology needs to be addressed in the informed consent, and secured airway management may be helpful.
Gender and Smile Classification Using Deep Convolutional Neural Networks
Facial gender and smile classification in unconstrained environments is challenging due to the large variations of face images. In this paper, we propose a deep model composed of GNet and SNet for these two tasks. We leverage multi-task learning and a general-to-specific fine-tuning scheme to enhance the performance of our model. Our strategies exploit the inherent correlation between face identity, smile, gender and other face attributes to relieve the problem of over-fitting on small training sets and to improve the classification performance. We also propose a task-aware face cropping scheme to extract attribute-specific regions. The experimental results on the ChaLearn 16 FotW dataset for gender and smile classification demonstrate the effectiveness of our proposed methods.
Image-Dependent Gamut Mapping as Optimization Problem
We explore the potential of image-dependent gamut mapping as a constrained optimization problem. The performance of our new approach is compared to standard reference gamut mapping algorithms in psycho-visual tests.
A one-way quantum computer.
We present a scheme of quantum computation that consists entirely of one-qubit measurements on a particular class of entangled states, the cluster states. The measurements are used to imprint a quantum logic circuit on the state, thereby destroying its entanglement at the same time. Cluster states are thus one-way quantum computers and the measurements form the program.
Impact of inadequate adherence on response to subcutaneously administered anti-tumour necrosis factor drugs: results from the Biologics in Rheumatoid Arthritis Genetics and Genomics Study Syndicate cohort
OBJECTIVE Non-adherence to DMARDs is common, but little is known about adherence to biologic therapies and its relationship to treatment response. The purpose of this study was to investigate the association between self-reported non-adherence to s.c. anti-TNF therapy and response in individuals with RA. METHODS Participants about to start s.c. anti-TNF therapy were recruited to a large UK multicentre prospective observational cohort study. Demographic information and disease characteristics were assessed at baseline. Self-reported non-adherence, defined as whether the previous due dose of biologic therapy was reported as not taken on the day agreed with the health care professional, was recorded at 3 and 6 months following the start of therapy. The 28-joint DAS (DAS28) was recorded at baseline and following 3 and 6 months of therapy. Multivariate linear regression was used to examine these relationships. RESULTS Three hundred and ninety-two patients with a median disease duration of 7 years [interquartile range (IQR) 3-15] were recruited. Adherence data were available in 286 patients. Of these, 27% reported non-adherence to biologic therapy according to the defined criteria at least once within the first 6-month period. In multivariate linear regression analysis, older age, lower baseline DAS28 and ever non-adherence at either 3 or 6 months from baseline were significantly associated with a poorer DAS28 response at 6 months to anti-TNF therapy. CONCLUSION Patients with RA who reported not taking their biologic on the day agreed with their health care professional showed poorer clinical outcomes than their counterparts, emphasizing the need to investigate causes of non-adherence to biologics.
Toward Domain Independence for Learning-Based Monocular Depth Estimation
Modern autonomous mobile robots require a strong understanding of their surroundings in order to operate safely in cluttered and dynamic environments. Monocular depth estimation offers a geometry-independent paradigm to detect free, navigable space with minimal space and power consumption. These are highly desirable features, especially for micro aerial vehicles. In order to guarantee robust operation in real-world scenarios, the estimator is required to generalize well in diverse environments. Most existing depth estimators do not consider generalization and only benchmark their performance on publicly available datasets after specific fine-tuning. Generalization can be achieved by training on several heterogeneous datasets, but their collection and labeling is costly. In this letter, we propose a deep neural network for scene depth estimation that is trained on synthetic datasets, which allow inexpensive generation of ground truth data. We show how this approach is able to generalize well across different scenarios. In addition, we show how the addition of long short-term memory layers in the network helps to alleviate, in sequential image streams, some of the intrinsic limitations of monocular vision, such as global scale estimation, with low computational overhead. We demonstrate that the network is able to generalize well with respect to different real-world environments without any fine-tuning, achieving comparable performance to state-of-the-art methods on the KITTI dataset.
ConceptVector: Text Visual Analytics via Interactive Lexicon Building Using Word Embedding
Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.
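As a sketch of the embedding-based concept building the abstract refers to, the snippet below ranks vocabulary words by similarity to a set of positive seed terms while penalizing similarity to user-specified irrelevant words (a crude stand-in for the bipolar concept model); the word-vector dictionary and the scoring rule are assumptions for illustration, not ConceptVector's actual algorithm.

```python
import numpy as np

def expand_concept(seeds, irrelevant, vectors, top_k=20):
    """Rank vocabulary words by similarity to positive seeds minus
    similarity to words marked irrelevant (illustrative bipolar scoring).

    vectors: dict mapping word -> unit-normalized embedding (numpy array).
    """
    def centroid(words):
        vs = [vectors[w] for w in words if w in vectors]
        c = np.mean(vs, axis=0)
        return c / np.linalg.norm(c)

    pos = centroid(seeds)
    neg = centroid(irrelevant) if irrelevant else None
    scored = []
    for word, vec in vectors.items():
        score = float(vec @ pos)
        if neg is not None:
            score -= float(vec @ neg)
        scored.append((word, score))
    scored.sort(key=lambda x: x[1], reverse=True)
    return scored[:top_k]

# toy usage with 2-dimensional vectors (real systems use pre-trained embeddings)
vecs = {w: v / np.linalg.norm(v) for w, v in {
    "cat": np.array([1.0, 0.1]), "dog": np.array([0.9, 0.2]),
    "car": np.array([0.1, 1.0]), "bus": np.array([0.2, 0.9])}.items()}
print(expand_concept(["cat"], ["car"], vecs, top_k=2))
```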
Fundamental characteristics of a ferrite permanent magnet axial gap motor with segmented rotor structure for the hybrid electric vehicle
In the newest hybrid electric vehicle (HEV) propulsion systems, permanent magnet synchronous motors (PMSMs) using high-performance rare-earth permanent magnets play an important role. However, the rare-earth materials employed in these magnets suffer from problems such as price increases and export restrictions. Thus, it is necessary to develop a rare-earth-free motor for HEVs. In this paper, a novel ferrite permanent magnet axial gap motor with a segmented rotor structure is introduced. In order to obtain the fundamental characteristics of this motor, a prototype is fabricated and tested, and the experimental results are presented in detail. These results confirm that the proposed motor with the segmented rotor structure is a promising candidate for a rare-earth-free HEV motor.
Generalized channel inversion methods for multiuser MIMO systems
Block diagonalization (BD) is a well-known precoding method in multiuser multi-input multi-output (MIMO) broadcast channels. This scheme can be considered an extension of zero-forcing (ZF) channel inversion to the case where each receiver is equipped with multiple antennas. One limitation of BD is that the sum rate does not grow linearly with the number of users and transmit antennas in the low and medium signal-to-noise ratio regimes, since the complete suppression of multi-user interference is achieved at the expense of noise enhancement. It also performs poorly under imperfect channel state information. In this paper, we propose a generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas to overcome the drawbacks of BD for multiuser MIMO systems. We first introduce a generalized ZF channel inversion algorithm as a new approach to the conventional BD. Applying this idea to MMSE channel inversion to identify the orthonormal basis vectors of the precoder, and employing the MMSE criterion to find its combining matrix, the proposed scheme increases the signal-to-interference-plus-noise ratio at each user's receiver. Simulation results confirm that the proposed scheme exhibits a linear growth of the sum rate, as opposed to the BD scheme. For block fading channels with four transmit antennas, the proposed scheme provides a 3 dB gain over the conventional BD scheme at 1% frame error rate. Also, we present a modified precoding method for systems with channel estimation errors and show that the proposed algorithm is robust to channel estimation errors.
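For intuition, a compact NumPy sketch of regularized (MMSE-style) channel inversion for single-antenna users is shown below; the paper's generalized construction for multi-antenna receivers additionally derives orthonormal precoder bases and per-user combining matrices, which this sketch omits.

```python
import numpy as np

def mmse_channel_inversion(H, noise_var, total_power=1.0):
    """Regularized channel inversion precoder (sketch).

    H: (K, Nt) aggregate channel, one row per single-antenna user.
    Returns the (Nt, K) precoding matrix, scaled to the power budget.
    """
    K, Nt = H.shape
    alpha = K * noise_var / total_power          # MMSE regularization factor
    P = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    P *= np.sqrt(total_power) / np.linalg.norm(P, "fro")
    return P

# toy usage: 4 transmit antennas, 3 single-antenna users
H = (np.random.randn(3, 4) + 1j * np.random.randn(3, 4)) / np.sqrt(2)
P = mmse_channel_inversion(H, noise_var=0.1)
print(np.round(np.abs(H @ P), 3))                # near-diagonal effective channel
```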
Sampling for Bayesian Program Learning
Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data. Within this setting, we propose an algorithm that uses a symbolic solver to efficiently sample programs. The proposal combines constraint-based program synthesis with sampling via random parity constraints. We give theoretical guarantees on how well the samples approximate the true posterior, and have empirical results showing the algorithm is efficient in practice, evaluating our approach on 22 program learning problems in the domains of text editing and computer-aided programming.
Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro
The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at https://github.com/layumi/Person-reID_GAN.
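A minimal sketch of the LSRO idea as described here is given below: real images keep their one-hot identity labels, while GAN-generated images are trained against a uniform distribution over all identities. The PyTorch code assumes a batch containing both kinds of samples; names and details are illustrative.

```python
import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated):
    """LSRO-style loss sketch: cross-entropy for real samples, uniform-label
    loss for GAN-generated samples (assumes the batch contains both kinds).

    logits: (B, K) identity classifier outputs
    labels: (B,) identity labels (ignored for generated samples)
    is_generated: (B,) boolean mask marking GAN-generated images
    """
    log_probs = F.log_softmax(logits, dim=1)
    real_loss = F.nll_loss(log_probs[~is_generated], labels[~is_generated])
    gen_loss = -log_probs[is_generated].mean()   # uniform target over all K classes
    return real_loss + gen_loss

# toy usage: batch of 4 samples over 5 identities, last two generated by the GAN
logits = torch.randn(4, 5, requires_grad=True)
labels = torch.tensor([0, 3, 0, 0])
mask = torch.tensor([False, False, True, True])
lsro_loss(logits, labels, mask).backward()
```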
Machine learning in digital games: a survey
Artificial intelligence for digital games constitutes the implementation of a set of algorithms and techniques from both traditional and modern artificial intelligence in order to provide solutions to a range of game dependent problems. However, the majority of current approaches lead to predefined, static and predictable game agent responses, with no ability to adjust during game-play to the behaviour or playing style of the player. Machine learning techniques provide a way to improve the behavioural dynamics of computer controlled game agents by facilitating the automated generation and selection of behaviours, thus enhancing the capabilities of digital game artificial intelligence and providing the opportunity to create more engaging and entertaining game-play experiences. This paper provides a survey of the current state of academic machine learning research for digital game environments, with respect to the use of techniques from neural networks, evolutionary computation and reinforcement learning for game agent control.
U-shaped slot-array antenna for RFID shelf in the UHF
This paper presents a linearly polarized slot-array antenna for RFID shelf applications in the UHF band; it consists of slots series-fed by a microstrip line whose end is shorted to the ground plane through a via-hole. The radiating element of the antenna is a U-shaped slot fed by the travelling and standing current waves formed on the shorted end of the microstrip line. The gain of the designed 3×6 slot-array antenna is about 4 dBi, and the fractional bandwidth for a return loss below −10 dB is about 5.5% at 920 MHz.
Recycling misconceptions of perceived self-efficacy
This commentary addresses misconceptions concerning perceived self-efficacy contained in the article by Eastman and Marzillier. People who regard themselves as highly efficacious act, think, and feel differently from those who perceive themselves as inefficacious. Self-percepts of efficacy thus contribute significantly to performance accomplishments rather than residing in the host organism simply as inert predictors of behaviors to come. A substantial body of converging evidence is reviewed, lending validity to the proposition that perceived self-efficacy operates as one common mechanism through which diverse influences affect human action, thought, and affective arousal.
Growth disturbance following ACL reconstruction with use of an epiphyseal femoral tunnel: a case report.
Anterior cruciate ligament (ACL) tears in skeletally immature patients are increasing in prevalence. This appears to be the result of a heightened suspicion for such injuries, more readily available imaging, and the ever-increasing physical demands and competitiveness of organized youth sports. As a result, there is increased interest in early ACL reconstruction to potentially prevent further intra-articular damage to the knee. Multiple surgical techniques for ACL reconstruction described for adults are not appropriate in skeletally immature patients because they involve large drill holes across the physis, fixation or bone blocks that cross the physis, or tensioning of a graft across the physis, all of which have been shown to result in growth disturbances. Because of these concerns, multiple surgical techniques to address ACL tears in patients with open physes have been described, including primary repair, extra-articular tenodesis, transphyseal reconstruction, partial transphyseal reconstruction, and physeal sparing techniques. Growth disturbances have been reported with the use of most of these techniques. Anderson described a technique of ACL reconstruction that avoids drilling tunnels across the physis. Theoretically, this should decrease the incidence of growth disturbance following ACL reconstruction in skeletally immature patients. With this technique, both the femoral and the tibial tunnels are drilled entirely within the epiphysis, and an all-soft-tissue graft is used for the reconstruction. Good results were found in twelve patients at the time of follow-up, two to eight years postoperatively, and to our knowledge a growth disturbance has never been reported after the use of this technique. Here, we describe a case of distal femoral valgus angulation associated with use of an epiphyseal femoral tunnel for a revision ACL reconstruction in an immature patient. The patient and his family consented to, and institutional review board approval was obtained for, presentation of this case.
Scalable Network Forensics
Network forensics and incident response play a vital role in site operations, but for large networks can pose daunting difficulties in coping with the ever-growing volume of activity and resulting logs. On the one hand, logging sources can generate tens of thousands of events per second, which a system supporting comprehensive forensics must somehow continually ingest. On the other hand, operators greatly benefit from interactive exploration of disparate types of activity when analyzing an incident, which often leaves network operators scrambling to ferret out answers to key questions: How did the attackers get in? What did they do once inside? Where did they come from? What activity patterns serve as indicators reflecting their presence? How do we prevent this attack in the future? Operators can only answer such questions by drawing upon high-quality descriptions of past activity recorded over extended time. A typical analysis starts with a narrow piece of intelligence, such as a local system exhibiting questionable behavior, or a report from another site describing an attack they detected. The analyst then tries to locate the described behavior by examining past activity, often cross-correlating information of different types to build up additional context. Frequently, this process in turn produces new leads to explore iteratively (“peeling the onion”), continuing and expanding until ultimately the analyst converges on as complete an understanding of the incident as they can extract from the available information. This process, however, remains manual and time-consuming, as no single storage system efficiently integrates the disparate sources of data that investigations often involve. While standard Security Information and Event Management (SIEM) solutions aggregate logs from different sources into a single database, their data models omit crucial semantics, and they struggle to scale to the data rates that large-scale environments require.
Train a 3D U-Net to segment cranial vasculature in CTA volume without manual annotation
Computed tomography angiography (CTA) is now applied as the gold standard in the clinical diagnosis of cranial vascular diseases. Segmenting the vasculature is a critical step of computer-aided diagnosis. In this paper, we adopted the deep learning network architecture 3D U-Net to segment the cranial vasculature from CTA images. Different from other traditional methods that require a large amount of manual annotation for network training, we used the incomplete vascular segmentation automatically obtained from time-of-flight magnetic resonance angiography (TOF-MRA) volumes to train a segmentation network for CTA images. Our results showed that, by carefully tuning the network parameters, relatively complete cranial vascular segmentation can be achieved from CTA volumes even though the training ground truth is under-segmented. Our method does not require any human annotation.
Improvement in low back pain following spinal decompression: observational study of 119 patients
Prospective clinical observational study of low back pain (LBP) in patients undergoing laminectomy or laminotomy surgery for lumbar spinal stenosis (LSS). To quantify any change in LBP following laminectomy or laminotomy spinal decompression surgery. 119 patients with LSS completed Oswestry Disability Index questionnaire (ODI) and Visual Analogue Scale for back and leg pain, preoperatively, 6 weeks and 1 year postoperatively. There was significant (p < 0.0001) reduction in mean LBP from a baseline of 5.14/10 to 3.03/10 at 6 weeks. Similar results were seen at 1 year where mean LBP score was 3.07/10. There was a significant (p < 0.0001) reduction in the mean ODI at 6 weeks and 1 year postoperatively. Mean ODI fell from 44.82 to 25.13 at 6 weeks and 28.39 at 1 year. The aim of surgery in patients with LSS is to improve the resulting symptoms that include radicular leg pain and claudication. This observational study reports statistically significant improvement of LBP after LSS surgery. This provides frequency distribution data, which can be used to inform prospective patients of the expected outcomes of such surgery.
Laparoscopic rectal resection with anal sphincter preservation for rectal cancer
Total mesorectal excision (TME) is the surgical gold standard treatment for middle and low third rectal carcinoma. Laparoscopy has gradually become accepted for the treatment of colorectal malignancy after a long period of questions regarding its safety. The purposes of this study were to examine prospectively our experience with laparoscopic TME and high rectal resections, to evaluate the surgical outcomes and oncologic adequacy, and to discuss the role of this procedure in the treatment of rectal cancer. Between December 1992 and December 2004, all patients who underwent elective laparoscopic sphincter preserving rectal resection for rectal cancer were enrolled prospectively in this study. Data collection included preoperative, operative, postoperative and oncologic results with long-term follow-up. A total of 218 patients were operated on during the study period: 142 patients underwent laparoscopic TME and 76 patients underwent anterior resection. Of the TME patients, 122 patients were operated using the double-stapling technique, and 20 patients underwent colo-anal anastomosis with hand-sewn sutures. Mean operative time was 138 min (range, 107–205), and mean blood loss was 120 ml (range, 30–350). Conversion to open surgery occurred in 26 cases (12%). Mortality rate during the first 30 days was 1%. Anastomotic leaks were observed in 10.5% of the patients. Of these, 61.9% needed reoperation and diverting stoma, and the rest were treated conservatively. Three patients had postoperative bleeding requiring relaparoscopy. Other minor complications (infection and urinary retention) occurred in 9.1% of patients. Mean ambulation time and mean hospital stay were 1.6 days (range, 1–5) and 6.4 days (range, 3–28) , respectively. Patients were followed for a mean period of 57 months. No port site metastases were observed during follow-up. The recurrence rate was 6.8 %. Overall survival rate was 67% after 5 years and 53.5% after 10 years. Laparoscopic anterior resection and TME with anal sphincter preservation for rectal cancer is feasible and safe. The short- and long-term outcomes reported in this series are comparable with those of conventional surgery.
Chapter 2 Developing a Sustainable Supply Chain Strategy
Long-term trends pose challenges for supply chain managers and make increasing requirements on the strategic management expertise of today’s companies. These trends include ongoing globalisation and the increasing intensity of competition, the growing demands of security, environmental protection and resource scarcity and, last but not least, the need for reliable, flexible and cost-efficient business systems capable of supporting customer differentiation. More than ever, modern supply chain managers are confronted with dynamic and complex supply chains and therefore with trends and developments that are hard to predict. In years to come, supply chain management will therefore take on additional strategic tasks that extend beyond its current more operational scope of activity. In order to respond to these changes and remain competitive, supply chain managers need to be able to identify and understand new sustainability issues in their company and business environment. This calls, especially in respect of global, international, and fragmented supply chains, not only for highly efficient supply chain operations, but also for networking skills that must continuously adapt to sustainability demands to create sustainable,
Automatic Retrieval and Clustering of Similar Words
Bootstrapping semantics from text is one of the greatest challenges in natural language learning. We first define a word similarity measure based on the distributional pattern of words. The similarity measure allows us to construct a thesaurus using a parsed corpus. We then present a new evaluation methodology for the automatically constructed thesaurus. The evaluation results show that the thesaurus is significantly closer to WordNet than Roget's Thesaurus is.
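As a rough illustration of a distributional similarity of this flavor, the snippet below computes a simplified Lin-style score between two words represented by weighted dependency features; the feature weights and words are made up for the example, and this is not the paper's exact measure.

```python
def lin_similarity(feats1, feats2):
    """Simplified Lin-style similarity between two words, each represented
    by a dict mapping (relation, word) features to association weights
    (e.g., pointwise mutual information). Illustrative only.
    """
    shared = set(feats1) & set(feats2)
    num = sum(feats1[f] + feats2[f] for f in shared)
    den = sum(feats1.values()) + sum(feats2.values())
    return num / den if den else 0.0

# toy features extracted from a parsed corpus (made-up weights)
apple = {("obj-of", "eat"): 2.3, ("mod", "red"): 1.1, ("obj-of", "grow"): 0.8}
pear  = {("obj-of", "eat"): 2.0, ("mod", "ripe"): 0.9, ("obj-of", "grow"): 0.7}
print(round(lin_similarity(apple, pear), 3))
```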
Neither cytochrome P450 family genes nor neuroendocrine factors could independently predict the SSRIs treatment in the Chinese Han population.
OBJECTIVE This study was intended to explore the relationship between the genetic polymorphisms of 8 single nucleotide polymorphisms (SNPs) at CYP genes, neuroendocrine factors, and the response to selective serotonin reuptake inhibitors (SSRIs) in Chinese Han depressive patients. METHOD This was a 6-week randomized controlled trial consisting of 290 Chinese Han depressive patients treated with SSRIs. 8 SNPs of CYP450 genes and 7 neuroendocrine factors were detected. Allele and genotype frequencies were compared between responders and non-responders. The relationships between neuroendocrine factors and treatment response were also analyzed. RESULTS No significant differences were found in clinical features between the 2 groups at baseline. No statistical correlation was found between either the genotype or allele frequencies of SNPs in the CYP1A2, CYP2C19, or CYP2D6 genes and the efficacy of SSRIs. There were strong linkage disequilibria between rs4986894, rs1853205, and rs12767583 of the CYP2C19 gene, and between rs2472299 and rs2472300 of the CYP1A2 gene. No associations were found between the above haplotypes and the antidepressant response. No neuroendocrine factor was independently a significant predictor of response to SSRI antidepressants. The combination of neuroendocrine factors, however, predicted the response by 76.1%. CONCLUSION There were no significant associations between the 6 SNPs of CYP gene polymorphisms and SSRI response. Neither cytochrome P450 family genes nor neuroendocrine factors independently predicted the patients' response to the antidepressants. A combination of neuroendocrine factors, however, does have the potential to predict the response.
DESH: Database evaluation system with hibernate ORM framework
Relational databases have been the predominant choice for back-ends in enterprise applications for several decades. JDBC, a Java API used for developing such applications and persisting data on the back-end, requires enormous time and effort. JDBC causes the application logic to become tightly coupled with the database and is consequently inadequate for building enterprise applications that need to adapt to dynamic requirements. Hence, ORM frameworks such as Hibernate became prominent. However, even with ORM, the relational back-end often suffers from a lack of scalability and flexibility. In this context, NoSQL databases are increasingly gaining popularity. Existing research works have either benchmarked Hibernate with an RDBMS or with one of the NoSQL databases. However, testing both an RDBMS and a NoSQL database for their performance within the context of a single application developed using the features of Hibernate has not been attempted in the literature. Such a study provides insight into whether the Hibernate ORM solution helps develop database-independent applications and how much performance gain can be achieved when an application is ported from an RDBMS back-end to a NoSQL database back-end. The objective of this work is to develop a business application using the Hibernate framework that can individually communicate with an RDBMS as well as a specific NoSQL database, and to evaluate the performance of both these databases.
Responsive Supply Chain : A Competitive Strategy in a Networked Economy
Supply chain management (SCM) has been considered the most popular operations strategy for improving organizational competitiveness in the twenty-first century. In the early 1990s, agile manufacturing (AM) gained momentum and received due attention from both researchers and practitioners. In the mid-1990s, SCM began to attract interest. Both AM and SCM appear to differ in philosophical emphasis, but each complements the other in objectives for improving organizational competitiveness. For example, AM relies more on strategic alliances/partnerships (virtual enterprise environment) to achieve speed and flexibility. But the issues of cost and the integration of suppliers and customers have not been given due consideration in AM. By contrast, cost is given a great deal of attention in SCM, which focuses on the integration of suppliers and customers to achieve an integrated value chain with the help of information technologies and systems. Considering the significance of both AM and SCM for firms to improve their performance, an attempt has been made in this paper to analyze both AM and SCM with the objective of developing a framework for a responsive supply chain (RSC). We compare their characteristics and objectives, review the selected literature, analyze some case experiences on AM and SCM, and develop an integrated framework for an RSC. The proposed framework can be employed as a competitive strategy in a networked economy in which customized products/services are produced with virtual organizations and exchanged using e-commerce.
The principal continuation and the killer heuristic
An algorithm is presented for obtaining the principal continuation in trees searched by two-person game playing programs based on the Alpha-Beta algorithm. Moves saved while determining the principal continuation are shown to be good candidates for killer moves when the killer heuristic supplements the Alpha-Beta search.
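A compact sketch of how the principal continuation and killer moves can be collected inside an Alpha-Beta (negamax) search is given below; the game-state interface (legal_moves, apply, evaluate) is assumed for illustration, and the fail-hard cutoff handling is simplified.

```python
def alphabeta(state, depth, alpha, beta, killers):
    """Negamax Alpha-Beta returning (score, principal_continuation) -- a sketch.

    `state` is assumed to provide legal_moves(), apply(move), is_terminal()
    and evaluate() (scored from the side to move); `killers` maps depth to a
    previously refuting move that is tried first (the killer heuristic).
    """
    if depth == 0 or state.is_terminal():
        return state.evaluate(), []

    moves = list(state.legal_moves())
    killer = killers.get(depth)
    if killer in moves:                        # try the remembered killer move first
        moves.remove(killer)
        moves.insert(0, killer)

    best_line = []
    for move in moves:
        score, line = alphabeta(state.apply(move), depth - 1, -beta, -alpha, killers)
        score = -score
        if score >= beta:
            killers[depth] = move              # this move refuted the position: remember it
            return beta, [move] + line         # fail-hard cutoff
        if score > alpha:
            alpha = score
            best_line = [move] + line          # extend the principal continuation
    return alpha, best_line
```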
Joint DOA and TDOA estimation for 3D localization of reflective surfaces using eigenbeam MVDR and spherical microphone arrays
Methods of 3D direction of arrival (DOA) estimation, coherent source detection and reflective surface localization are studied, based on recordings by a spherical microphone array. First, the spherical harmonics domain minimum variance distortionless response (EB-MVDR) beamformer is employed for the localization of broadband coherent sources, which is characterized by simpler frequency focusing matrices than the corresponding element-space implementation, and by a higher resolution than conventional spherical array beamformers. After the DOA estimation step, the source signals are extracted by EB-MVDRs. Then, by computing the crosscorrelation functions between the extracted signals, the coherent sources are detected and their time differences of arrival (TDOA) are estimated. Given the positions of the array and the reference source, and the estimated DOA and TDOA of the coherent sources, the positions of the major reflectors can be inferred. Experimental results in a real room validate the proposed method.
Unsupervised Methods for Learning and Using Semantics of Natural Language
Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure (e.g. grammar, semantics or syntax) from text, which provides the computer with information in order to understand language. During the last decades, scientific efforts and the increase of computational resources made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform worse when operating on textual data that differ from the data they were trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer using manually created resources. In this thesis, we present so-called unsupervised methods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts), without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for this phenomenon is that in language many words occur only a few times. If a word is seen only a few times, no precise information can be extracted from the text in which it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data. In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of this thesis. Chapter 2 introduces the terminology used in this thesis and gives a background about natural language processing. Then, we characterize the linguistic theory on how humans understand language. Afterwards, we show how the underlying linguistic intuition can be
Impact of neuromuscular fatigue on match exercise intensity and performance in elite Australian football.
This study aimed to quantify the influence of neuromuscular fatigue (NMF) via flight time to contraction time ratio (FT:CT) obtained from a countermovement jump (CMJ) on the relationships between yo-yo intermittent recovery (level 2) test (yo-yo IR2), match exercise intensity (high-intensity running [HIR] m·min(-1) and Load·min(-1)) and Australian football (AF) performance. Thirty-seven data sets were collected from 17 different players across 22 elite AF matches. Each data set comprised an athlete's yo-yo IR2 score before the start of the season, match exercise intensity via global positioning system and on-field performance rated by coaches' votes and number of ball disposals. Each data set was categorized as normal (>92% baseline FT:CT, n = 20) or fatigued (<92% baseline FT:CT, n = 17) from a single CMJ performed 96 hours after the previous match. Moderation-mediation analysis was completed with yo-yo IR2 (independent variable), match exercise intensity (mediator), and AF performance (dependent variable) with NMF status as the conditional variable. Isolated interactions between variables were analyzed by Pearson's correlation and effect size statistics. The Yo-yo IR2 score showed an indirect influence on the number of ball disposals via HIR m·min(-1) regardless of NMF status (normal FT:CT indirect effect = 0.019, p < 0.1, reduced FT:CT indirect effect = 0.022, p < 0.1). However, the yo-yo IR2 score only influenced coaches' votes via Load·min(-1) in the nonfatigued state (normal: FT:CT indirect effect = 0.007, p <0.1, reduced: FT:CT indirect effect = -0.001, p > 0.1). In isolation, NMF status also reduces relationships between yo-yo IR2 and load·min(-1), yo-yo IR2 and coaches votes, Load·min(-1) and coaches' votes (Δr > 0.1). Routinely testing yo-yo IR2 capacity, NMF via FT:CT and monitoring Load·min(-1) in conjunction with HIR m·min(-1) as exercise intensity measures in elite AF is recommended.
Ensemble Learning with Active Example Selection for Imbalanced Biomedical Data Classification
In biomedical data, the imbalanced data problem occurs frequently and causes poor prediction performance for minority classes, because the trained classifiers are mostly derived from the majority class. In this paper, we describe an ensemble learning method combined with active example selection to resolve the imbalanced data problem. Our method consists of three key components: 1) an active example selection algorithm to choose informative examples for training the classifier, 2) an ensemble learning method to combine variations of classifiers derived by active example selection, and 3) an incremental learning scheme to speed up the iterative training procedure for active example selection. We evaluate the method on six real-world imbalanced data sets in biomedical domains, showing that the proposed method outperforms both random under-sampling and ensemble-with-under-sampling methods. Compared to other approaches to solving the imbalanced data problem, our method performs better by 0.03-0.15 points in the AUC measure.
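In the spirit of the described components, the sketch below builds a small ensemble in which each member starts from a balanced seed set and repeatedly adds the examples it is most uncertain about; this is an illustrative stand-in using scikit-learn, not the authors' exact active example selection or incremental learning scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_ensemble(X, y, n_members=5, batch=50, seed=0):
    """Illustrative ensemble with uncertainty-driven example selection.

    Each member starts from a small balanced seed set (assumes at least 10
    examples per class) and adds the examples whose predicted probability is
    closest to 0.5 over a few selection rounds.
    """
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        pos = rng.choice(np.where(y == 1)[0], 10, replace=False)
        neg = rng.choice(np.where(y == 0)[0], 10, replace=False)
        idx = np.concatenate([pos, neg])
        for _ in range(3):                           # a few active-selection rounds
            clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            margin = np.abs(clf.predict_proba(X)[:, 1] - 0.5)
            idx = np.union1d(idx, np.argsort(margin)[:batch])
        members.append(clf)
    return members

def predict(members, X):
    # average the minority-class probabilities over the ensemble
    return np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
```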
Racial Faces in-the-Wild: Reducing Racial Bias by Deep Unsupervised Domain Adaptation
Despite the progress achieved by deep learning in face recognition (FR), more and more people find that racial bias explicitly degrades performance in realistic FR systems. Given that existing training and testing databases consist almost entirely of Caucasian subjects, there are still no independent testing databases to evaluate racial bias, and no training databases or methods to reduce it. To facilitate research towards conquering these unfair issues, this paper contributes a new dataset called Racial Faces in-the-Wild (RFW) with two important uses: 1) racial bias testing: four testing subsets, namely Caucasian, Asian, Indian and African, are constructed, each containing about 3000 individuals with 6000 image pairs for face verification; 2) racial bias reducing: one labeled training subset with Caucasians and three unlabeled training subsets with Asians, Indians and Africans are offered to encourage FR algorithms to transfer recognition knowledge from Caucasians to other races. To the best of our knowledge, RFW is the first database for measuring racial bias in FR algorithms. After proving the existence of a domain gap among different races and of racial bias in FR algorithms, we further propose a deep information maximization adaptation network (IMAN) to bridge the domain gap, and comprehensive experiments show that the racial bias can be narrowed down by our algorithm.
A smooth-walled spline-profile horn as an alternative to the corrugated horn for wide band millimeter-wave applications
At millimeter-wave frequencies, corrugated horns can be difficult and expensive to manufacture. As an alternative we present here the results of a theoretical and measurement study of a smooth-walled spline-profile horn for specific application in the 80-120 GHz band. While about 50% longer than its corrugated counterpart, the smooth-walled horn is shown to give improved performance across the band as well as being much easier to manufacture.
Encode, Review, and Decode: Reviewer Module for Caption Generation
We propose a novel module, the reviewer module, to improve the encoder-decoder learning framework. The reviewer module is generic, and can be plugged into an existing encoder-decoder model. The reviewer module performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a fact vector after each review step; the fact vectors are used as the input of the attention mechanism in the decoder. We show that the conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework can improve over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.
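A rough PyTorch sketch of a reviewer-style module is shown below: it runs a fixed number of attentive review steps over the encoder states and emits one fact vector per step for the decoder to attend to; the recurrent cell choice, dimensions, and number of steps are assumptions for illustration rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Reviewer(nn.Module):
    """Sketch of a review module: T attentive steps over encoder states,
    each producing one 'fact' vector (cell choice and sizes are illustrative)."""
    def __init__(self, hidden, steps=8):
        super().__init__()
        self.steps = steps
        self.cell = nn.LSTMCell(hidden, hidden)
        self.att = nn.Linear(hidden * 2, 1)

    def forward(self, enc):                    # enc: (B, L, H) encoder hidden states
        B, L, H = enc.shape
        h = enc.new_zeros(B, H)
        c = enc.new_zeros(B, H)
        facts = []
        for _ in range(self.steps):
            # attention weights over encoder states, conditioned on the review state h
            q = h.unsqueeze(1).expand(-1, L, -1)
            w = torch.softmax(self.att(torch.cat([enc, q], dim=-1)).squeeze(-1), dim=1)
            ctx = (w.unsqueeze(-1) * enc).sum(dim=1)
            h, c = self.cell(ctx, (h, c))
            facts.append(h)
        return torch.stack(facts, dim=1)       # (B, steps, H): fact vectors for the decoder

# toy usage: batch of 2 sequences of length 7 with hidden size 16
print(Reviewer(hidden=16)(torch.randn(2, 7, 16)).shape)
```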
Randomized trial of liberal versus restrictive guidelines for red blood cell transfusion in preterm infants.
OBJECTIVE Although many centers have introduced more restrictive transfusion policies for preterm infants in recent years, the benefits and adverse consequences of allowing lower hematocrit levels have not been systematically evaluated. The objective of this study was to determine if restrictive guidelines for red blood cell (RBC) transfusions for preterm infants can reduce the number of transfusions without adverse consequences. DESIGN, SETTING, AND PATIENTS We enrolled 100 hospitalized preterm infants with birth weights of 500 to 1300 g into a randomized clinical trial comparing 2 levels of hematocrit threshold for RBC transfusion. INTERVENTION The infants were assigned randomly to either the liberal- or the restrictive-transfusion group. For each group, transfusions were given only when the hematocrit level fell below the assigned value. In each group, the transfusion threshold levels decreased with improving clinical status. MAIN OUTCOME MEASURES We recorded the number of transfusions, the number of donor exposures, and various clinical and physiologic outcomes. RESULTS Infants in the liberal-transfusion group received more RBC transfusions (5.2 +/- 4.5 [mean +/- SD] vs 3.3 +/- 2.9 in the restrictive-transfusion group). However, the number of donors to whom the infants were exposed was not significantly different (2.8 +/- 2.5 vs 2.2 +/- 2.0). There was no difference between the groups in the percentage of infants who avoided transfusions altogether (12% in the liberal-transfusion group versus 10% in the restrictive-transfusion group). Infants in the restrictive-transfusion group were more likely to have intraparenchymal brain hemorrhage or periventricular leukomalacia, and they had more frequent episodes of apnea, including both mild and severe episodes. CONCLUSIONS Although both transfusion programs were well tolerated, our finding of more frequent major adverse neurologic events in the restrictive RBC-transfusion group suggests that the practice of restrictive transfusions may be harmful to preterm infants.
Recombinant human deoxyribonuclease for the treatment of acute asthma in children.
BACKGROUND Airway obstruction in acute asthma is the result of airway smooth muscle contraction, inflammation and mucus plugging. Case reports suggest that mucolytic therapy might be beneficial in acute asthma. The aim of this study was to determine the efficacy of the mucolytic drug recombinant human deoxyribonuclease (rhDNase) in addition to standard treatment at the emergency department in children with an asthma exacerbation. METHODS In a multicentre randomised double-blind controlled clinical trial, 121 children brought to the emergency room for a moderate to severe asthma exacerbation were randomly assigned to receive either a single dose of 5 mg nebulised rhDNase or placebo following the second dose of bronchodilators. An asthma score (scale 5-15) was assessed at baseline and at 1, 2, 6, 12 and 24 h. The primary outcome variable was the asthma score 1 h after the study medication. RESULTS One hour after the study medication the asthma score in the rhDNase group showed an adjusted mean decrease from baseline of 1.0 (95% CI 0.5 to 1.6) points compared with 0.7 (95% CI 0.3 to 1.2) points in the placebo group (mean difference 0.4 (95% CI -0.2 to 1.0) points; p = 0.23). The asthma score over the study period of 24 h also did not differ significantly between the rhDNase and placebo group (mean difference 0.2 (95% CI -0.3 to 0.7) points, p = 0.40). The duration of oxygen supplementation and number of bronchodilator treatments in the first 24 h were similar in both groups. CONCLUSION Adding a single dose of nebulised rhDNase to standard treatment in the emergency room has no beneficial effects in children with moderate to severe acute asthma.
Locating a documentary cinema of accountability: the emergence of activist film practices as a socio-political movement in contemporary pakistan
Maintaining trends of resistance movements, activist agendas, and advocacy campaigns initiated in opposition to the Islamization period and the dictatorship of General Zia-ul-Haq (1977-1988), contemporary expressions of resistance in Pakistan have also begun to include ‘activist documentary’ film practices. As issues of religious fundamentalism and extremism, gendered violence, violation of human rights, impact of Islamization and rigid Sharia laws, particularly on women and minorities, besides the violent socio-cultural and tribal practices such as stoveburning, acid-attacks, honour-killing, honour-rape, and swara continue to haunt the civil society, a new generation of creative activists are using documentary film as their activist vehicle of communication, resistance and consciousness-raising. This thesis will focus on independent documentary filmmakers, productions, Non-governmental Organizations (NGOs), as well as a government body, that have contributed to the emergence of an activist documentary film movement in contemporary Pakistan since the Islamization period. It will discuss their contribution and significance to the growth and progress of this emerging film category in the country, and argue for an investigative filmic body of work that can be identified as a critical documentary ‘cinema of accountability’ from within a Muslim society that seeks to provoke debate on crucial issues, stress legislative reforms, and promote social change.
The Creative Process: A Computer Model of Storytelling and Creativity, by Scott R. Turner
Within cognitive science and psychology, there has been a good deal of interest recently in the topic of creativity. In this book, Scott Turner of the University of California, Los Angeles, presents a theory of creativity applied to generating small stories. Turner can be thought of as a member of the third generation of the Schank family: first, of course, there was grandfather Roger Schank, who in the 1970s with Robert Abelson at Yale, embarked on the research project of understanding narrative text using the ideas of goals, plans, and scripts. The attempt was to propose computational models that would accomplish aspects of narrative understanding. The second generation was a talented group of people, including Wendy Lehnert and Robert Wilensky, who did their Ph.D.s at Yale on story understanding. Turner is a member of a third generation, advised by Michael Dyer who also obtained his Ph.D. at Yale and then moved to an academic position at UCLA. Dyer had turned his attention to story generation as well as story understanding. In the classic mode, Turner took on rather too much for his Ph.D. He wrote a large artificial intelligence program, 17,000 lines of Lisp code, that produces a reasonable output that could, with the suspension of a certain amount of disbelief, pass for the production of a human author. He calls his program "Minstrel." It generates stories of half a page or so about knights and ladies at the court of King Arthur. The program was the core of his Ph.D. thesis, and this is the book of the program. Here is a sample from one of Minstrel's stories (p. 72):
ActivityNet: A large-scale video benchmark for human activity understanding
In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.
Patterns of Entry and Correction in Large Vocabulary Continuous Speech Recognition System
A study was conducted to evaluate user performance and satisfaction in completion of a set of text creation tasks using three commercially available continuous speech recognition systems. The study also compared user performance on similar tasks using keyboard input. One part of the study (Initial Use) involved 24 users who enrolled, received training and carried out practice tasks, and then completed a set of transcription and composition tasks in a single session. In a parallel effort (Extended Use), four researchers used speech recognition to carry out real work tasks over 10 sessions with each of the three speech recognition software products. This paper presents results from the Initial Use phase of the study along with some preliminary results from the Extended Use phase. We present details of the kinds of usability and system design problems likely in current systems and several common patterns of error correction that we found.
Development of 2.4GHz one-sided directional planar antenna with quarter wavelength top metal
For mobile communication systems, small-size and low-profile antennas are necessary. Planar printed antennas such as slot antennas and microstrip antennas are attractive for use in mobile and wireless communication systems due to their low profile and compact size [1], [2]. However, because of electromagnetic interference, the radiation of an omnidirectional antenna such as the slot antenna deteriorates remarkably if the metal block of an RFIC (radio frequency integrated circuit) or the body of a car approaches the back of the antenna. A patch antenna is one solution to these problems [3]. However, the radiation efficiency and bandwidth of a patch antenna decrease rapidly as the thickness of the substrate decreases, and a one-sided directional patch antenna requires a large ground plane. Therefore, miniaturized antennas on thinner substrates are necessary for future 3-dimensional packaging techniques integrating with RF chips. In our previous work, we presented the design theory of a one-sided directional electrically small antenna (ESA) composed of an impedance matching circuit, a half-wavelength (λ/2) top metal and a bottom floating metal layer for ISM-band (2.4 GHz) applications [4].
Large-Scale Taxonomy Mapping for Restructuring and Integrating Wikipedia
We present a knowledge-rich methodology for disambiguating Wikipedia categories with WordNet synsets and using this semantic information to restructure a taxonomy automatically generated from the Wikipedia system of categories. We evaluate against a manual gold standard and show that both category disambiguation and taxonomy restructuring perform with high accuracy. Besides, we assess these methods on automatically generated datasets and show that we are able to effectively enrich WordNet with a large number of instances from Wikipedia. Our approach produces an integrated resource, thus bringing together the fine-grained classification of instances in Wikipedia and a well-structured top-level taxonomy from WordNet.
The Topsfield Foundation: Fostering Democratic Community Building through Face-to-Face Dialogue.
The authors interview Paul Aicher, founder of the Topsfield Foundation, Inc., and Martha McCoy, executive director of the Study Circles Resource Center, about the challenges of creating an effective, replicable model for citizen engagement through communitywide study circle programs.
Psychological factors and delayed healing in chronic wounds.
OBJECTIVE Studies have shown that stress can delay the healing of experimental punch biopsy wounds. This study examined the relationship between the healing of natural wounds and anxiety and depression. METHODS Fifty-three subjects (31 women and 22 men) were studied. Wound healing was rated using a five-point Likert scale. Anxiety and depression were measured using the Hospital Anxiety and Depression Scale (HAD), a well-validated psychometric questionnaire. Psychological and clinical wound assessments were each conducted with raters and subjects blinded to the results of the other assessment. RESULTS Delayed healing was associated with a higher mean HAD score (p = .0348). Higher HAD anxiety and depression scores (indicating "caseness") were also associated with delayed healing (p = .0476 and p = .0311, respectively). Patients scoring in the top 50% of total HAD scores were four times more likely to have delayed healing than those scoring in the bottom 50% (confidence interval = 1.06-15.08). CONCLUSIONS The relationship between healing of chronic wounds and anxiety and depression as measured by the HAD was statistically significant. Further research in the form of a longitudinal study and/or an interventional study is proposed.
Simply Typed Lambda-Calculus Modulo Type Isomorphisms
We define a simply typed, non-deterministic lambda-calculus where isomorphic types are equated. To this end, an equivalence relation is settled at the term level. We then provide a proof of strong normalisation modulo equivalence. Such a proof is a non-trivial adaptation of the reducibility method.
Development and Validation of the Standard Chinese Version of the CARE Item Set (CARE-C) for Stroke Patients
The Continuity Assessment Record and Evaluation (CARE) item set is a standardized, integrative scale for evaluation of functional status across acute and postacute care (PAC) providers. The aim of this study was to develop a Chinese version of the CARE (CARE-C) item set and to examine its reliability and validity for assessment of functional outcomes among stroke patients. The CARE-C was administered in two samples. Sample 1 included 30 stroke patients in the outpatient clinic setting for the purpose of examining interrater and test-retest reliabilities and internal consistency. Sample 2 included 138 stroke patients admitted to rehabilitation units for the purpose of investigating criterion-related validity with the Barthel index, Lawton Instrumental Activities of Daily Living (IADL) scale, EuroQOL five dimensions questionnaire (EQ-5D), and Mini-Mental State Examination (MMSE). The CARE-C was categorized into 11 subscales, 52 items of which were analyzed. At the subscale level, the interrater reliability and test-retest reliability expressed by intraclass correlation coefficient (ICC) ranged from 0.72 to 0.99 and 0.60 to 1.00, respectively. Six of the 11 subscales met acceptable levels of internal consistency (Cronbach alpha > 0.7). The criterion-related validity of the CARE-C showed moderate to high correlations of its subscales of cognition and basic and instrumental activities of daily living with the Barthel index, IADL scale, and MMSE. The CARE-C is a useful instrument for evaluating functional quality metrics in the Chinese stroke population. The development of the CARE-C also facilitates the assessment of the PAC program in Taiwan and future research is warranted for validating the capability of CARE-C to identify patients' functional change over time and its generalizability for nonstroke populations.
Determinants of Effective Information Technology Governance: A Study of IT Intensity
Recent increases in the importance of information technology (IT) as a strategic factor in organisations' achievement of their objectives have heightened organisations' concern with establishing and implementing effective IT governance. This study empirically examines the individual IT governance mechanisms that influence the overall effectiveness of IT governance, taking into account the level of IT intensity within organisations. Using sample data obtained through a web-based survey of 176 members of ISACA (Information Systems Audit and Control Association) Australia, the study examined the influence of six proposed IT governance mechanisms on the overall effectiveness of IT governance. Using factor analysis and multiple regression techniques, the study found significant positive relationships between the overall level of effective IT governance and the following four IT governance mechanisms: an IT strategy committee, the involvement of senior management in IT, the existence of an ethics/culture of compliance in IT, and corporate communication systems.
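A minimal sketch of the two-step analysis style described (factor analysis of survey items followed by regression of overall governance effectiveness on the resulting factors). The item counts, variable names, and synthetic data are assumptions, not the study's instrument.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# Illustrative stand-in for the survey: 176 respondents, 24 Likert items
# (sizes and names are invented for the example).
rng = np.random.default_rng(1)
items = rng.integers(1, 8, size=(176, 24)).astype(float)
overall_effectiveness = rng.normal(size=176)          # hypothetical outcome

# Step 1: reduce the items to six factors (one per proposed governance mechanism)
factors = FactorAnalysis(n_components=6, random_state=1).fit_transform(items)

# Step 2: regress overall IT governance effectiveness on the factor scores
model = LinearRegression().fit(factors, overall_effectiveness)
print("coefficients:", np.round(model.coef_, 3))
print("R^2:", round(model.score(factors, overall_effectiveness), 3))
```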
E-commerce Product Recommendation by Personalized Promotion and Total Surplus Maximization
Existing recommendation algorithms treat the recommendation problem as rating prediction, and recommendation quality is measured by RMSE or similar metrics. However, we argue that E-commerce product recommendation is more than rating prediction, because price plays a critical role in the recommendation result. In this work, we propose to build E-commerce product recommender systems on fundamental economic notions. We first propose an incentive-compatible method that effectively elicits a consumer's willingness-to-pay (WTP) in a typical E-commerce setting and, as a further step, formalize the recommendation problem as maximizing total surplus. We validated the proposed WTP elicitation algorithm through crowdsourcing, and the results demonstrated that the proposed approach can achieve higher seller profit by personalizing promotions. We also propose a total surplus maximization (TSM) based recommendation framework. We specify TSM for three representative settings: e-commerce, where the product quantity can be viewed as unlimited; P2P lending, where the resource is bounded; and freelancer marketing, where the resource (a job) can be assigned to only one freelancer. Experimental results on the corresponding datasets show that TSM outperforms existing approaches in terms of total surplus.
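To make the surplus notion concrete, below is a minimal sketch of surplus-based ranking in the unconstrained e-commerce setting: surplus is willingness-to-pay minus cost, and only positive-surplus items are recommended. The item names, WTP values, and costs are hypothetical, and this captures only the economic intuition, not the paper's TSM algorithm.

```python
# Assumed inputs: an elicited WTP estimate per item for one user, and the
# seller's cost per item (all values invented for illustration).
wtp = {"tablet": 230.0, "headphones": 80.0, "charger": 12.0}
cost = {"tablet": 180.0, "headphones": 95.0, "charger": 5.0}

surplus = {item: wtp[item] - cost[item] for item in wtp}

# Recommend only items with positive surplus, highest surplus first.
recommended = sorted((i for i in surplus if surplus[i] > 0),
                     key=surplus.get, reverse=True)
print(recommended)   # ['tablet', 'charger']
```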
Adolescent daily cigarette smoking: is rural residency a risk factor?
INTRODUCTION Daily cigarette smoking among US adolescents remains a significant public health problem. Understanding risk is important in order to develop strategies to reduce this type of tobacco use. PURPOSE The primary objective of this research was to examine whether rural residency is an independent risk factor for being a daily smoker among adolescents ages 12 to 18 years. METHODS This is a cross-sectional study in which univariate, bivariate, and multivariate analyses were performed on a merged 1997-2003 Youth Risk Behavior Surveillance System dataset to determine whether rural residence was a significant risk factor for daily cigarette smoking, after adjusting for demographic factors. RESULTS Using daily smoking as the dependent variable, initial multivariate analyses revealed that adolescents who lived in either suburban (OR=.34, CI=.32, .36) or urban (OR=.33, CI=.31, .35) locales were less likely to become daily smokers than adolescents living in rural locales. Subsequent logistic regression analysis showed that rural youths who became daily smokers were more likely to: have used smokeless tobacco products in the past 12 months (OR=1.25, CI=1.04, 1.51); be female (OR=1.42, CI=1.23, 1.64); be Caucasian (OR=1.53, CI=1.28, 1.84); have first smoked a whole cigarette at 12 years of age or younger (OR=2.08, CI=1.82, 2.38); and have smoked at school in the past 30 days (OR=14.52, CI=11.97, 17.60). CONCLUSIONS The results indicate that rural residency is a risk factor for tobacco use among US youth.
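For illustration, the sketch below reproduces the reporting style of such an analysis: a logistic regression whose coefficients are exponentiated into odds ratios with confidence intervals. The variables and effect sizes are synthetic, not drawn from the YRBSS dataset.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data mimicking the analysis style (not the study's data).
rng = np.random.default_rng(2)
n = 2000
rural = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)
early_onset = rng.integers(0, 2, n)        # first whole cigarette at <= 12 years
logit = -2.0 + 0.8 * rural + 0.3 * female + 0.7 * early_onset
daily_smoker = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([rural, female, early_onset]))
fit = sm.Logit(daily_smoker, X).fit(disp=0)

# Exponentiate coefficients and their confidence limits to get ORs and CIs.
odds_ratios = np.exp(fit.params)
or_ci = np.exp(fit.conf_int())
print(np.round(np.column_stack([odds_ratios, or_ci]), 2))
```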
A comparison of CNN-based face and head detectors for real-time video surveillance applications
Detecting faces and heads in video feeds is a challenging task in real-world video surveillance applications due to variations in appearance, occlusions, and complex backgrounds. Recently, several CNN architectures have been proposed to increase the accuracy of detectors, although their computational complexity can be an issue, especially for real-time applications where faces and heads must be detected live using high-resolution cameras. This paper compares the accuracy and complexity of state-of-the-art CNN architectures that are suitable for face and head detection. Single-pass and region-based architectures are reviewed and compared empirically to baseline techniques in terms of accuracy and of time and memory complexity on images from several challenging datasets. The viability of these architectures is analyzed with real-time video surveillance applications in mind. Results suggest that, although CNN architectures can achieve a very high level of accuracy compared to traditional detectors, their computational cost can represent a limitation for many practical real-time applications.
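One simple way to quantify the runtime side of such a comparison is a throughput measurement over a batch of frames. The sketch below is a generic timing harness with a stand-in detector, not any of the CNN architectures evaluated in the paper.

```python
import time

def measure_fps(detector, frames, warmup=5):
    """Rough throughput measurement for a detector callable applied to frames."""
    for frame in frames[:warmup]:        # warm-up runs are excluded from timing
        detector(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        detector(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Stand-in "detector" returning a fake bounding box; replace with a real
# CNN forward pass to benchmark an actual model.
dummy_detector = lambda frame: [(0, 0, 10, 10, 0.9)]
frames = [object()] * 105
print(f"{measure_fps(dummy_detector, frames):.0f} frames/s")
```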
Identification of patient-reported distress by clinical nurse specialists in routine oncology practice: a multicentre UK study.
BACKGROUND There is uncertainty regarding how well clinical nurse specialists are able to identify distress in cancer settings. METHODS We examined recognition of patient-reported distress by nurse specialists across three sites in the East Midlands (UK). Clinicians were asked to report their clinical opinion regarding the presence of distress or any mental health complication after routine assessment of 401 mixed cancer patients. Patient-reported distress was defined by the distress thermometer at a cut-off of 4 or higher. RESULTS The prevalence of patient-reported distress was 45.4%. The rates of mild, moderate, and severe distress were 23.4%, 13.7%, and 8.2%, respectively. When looking for distress (or any mental health complication), nurse specialists had a detection sensitivity of 50.5% and a specificity of 80.0%. Cohen's kappa suggested fair agreement between staff and patients. Examining predictors of distress, clinicians were better able to recognise more severe distress (adjusted R^2 = 0.87, P = 0.001). Sensitivity was lower in palliative stages, but there were no differences according to the type of cancer. Sensitivity was higher, and specificity lower, in clinicians with high self-rated confidence. CONCLUSIONS Nurses working in cancer settings have difficulty identifying distress using their routine clinical judgement and tend to make more false-negative than false-positive errors. Evidence-based strategies that improve detection of mild and moderate distress are required in routine cancer care.
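As a concrete illustration of the reported metrics, the sketch below computes sensitivity, specificity, and Cohen's kappa from paired binary judgements; the ratings are hypothetical, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical paired ratings (1 = distressed, 0 = not distressed).
patient_reported = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0]   # distress thermometer >= 4
clinician_opinion = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]  # nurse judged distressed

tn, fp, fn, tp = confusion_matrix(patient_reported, clinician_opinion).ravel()
sensitivity = tp / (tp + fn)     # distressed patients correctly recognised
specificity = tn / (tn + fp)     # non-distressed patients correctly ruled out
kappa = cohen_kappa_score(patient_reported, clinician_opinion)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} kappa={kappa:.2f}")
```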
Virtual Reality and Augmented Reality (VR/AR)
Prices, prospects, potentials. It is supposed to cost under 300 € and to be released as a product at the end of 2014: the Oculus Rift, a headset that can transport its wearer into a virtual reality (VR). Previous VR headsets often cost more than ten times as much and, because of a more restricted field of view, do not convey as convincing an impression of a virtual 3D world. Better hardware at a fraction of the price? Not an isolated case. A new field of application makes it possible: entertainment. Instead of addressing a small target group, e.g. for industrial applications, as previous VR hardware did, new devices target the computer games market, a mass market. According to a GfK study commissioned by the Bundesverband Interaktive Unterhaltungssoftware, the market volume in Germany alone is put at 1.82 billion euros. Against the backdrop of such market prospects, the company Oculus VR needed only four hours to raise US$250,000 in seed capital through crowdfunding on the online platform Kickstarter. The company has since been acquired for roughly two billion US dollars. Overall, such affordable hardware and such large investments open up new prospects for the applicability and spread of VR. So can VR become suitable for the mass market? VR pursues the goal of transporting users into an apparent world in which they feel present. To this end, technologies are used that are meant to ease immersion in this virtual world by generating artificial stimuli for visual and auditory perception, and sometimes for further senses such as the haptic sense or the sense of balance. Special VR headsets use a display to feed images of a virtual 3D world to the right and left eye. In addition, a sensor determines the current head position and viewing direction, so that one can look around the virtual world simply by turning one's head, just as one is used to doing in reality. In the extreme case of a perfect VR, the virtual world and reality could no longer be distinguished. This is depicted in some science fiction films, e.g. "Die Matrix" ("The Matrix"), in which the artificial stimuli are fed directly into the brain via a kind of socket. One does not have to go that far, however; even today convincing virtual environments can already be realized. People placed at the roof edge of a virtual skyscraper show an elevated pulse and sweaty hands, even though they know they are not standing at a dangerous precipice but in a safe VR environment. Here a human trait comes into play that the philosopher Samuel T. Coleridge called "willing