title | abstract
---|---
Background subtraction for static & moving camera | Background subtraction is one of the most commonly used components in machine vision systems. Despite the numerous algorithms proposed in the literature and used in practical applications, key challenges remain in designing a single system that can handle diverse environmental conditions. In this paper we present the Multiple Background Model based Background Subtraction Algorithm as such a candidate. The algorithm was originally designed for handling sudden illumination changes. The new version has been refined with changes at different steps of the process, specifically in selecting an optimal color space, clustering training images for the Background Model Bank, and setting a parameter for each channel of the color space. This extends the algorithm's applicability to a wide variety of challenges associated with change detection, including camera jitter, dynamic background, intermittent object motion, shadows, bad weather, thermal and night videos. Comprehensive evaluation demonstrates the superiority of the algorithm against the state of the art. |
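As a rough illustration of the "background model bank" idea, the sketch below clusters training frames into several mean-background models and marks a pixel as foreground only when no model explains it. This is a minimal numpy/scikit-learn toy under assumptions of my own (k-means clustering, a single deviation threshold `tau`), not the authors' implementation, which additionally selects a color space and tunes a parameter per channel.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_model_bank(frames, n_models=3):
    """Cluster training frames into a bank of mean-background models.

    frames: array of shape (N, H, W, C), float in [0, 1].
    Returns an array of shape (n_models, H, W, C).
    """
    flat = frames.reshape(len(frames), -1)
    labels = KMeans(n_clusters=n_models, n_init=10).fit_predict(flat)
    return np.stack([frames[labels == k].mean(axis=0)
                     for k in range(n_models)])

def subtract(frame, bank, tau=0.1):
    """Foreground mask: pixels far from *every* model in the bank.

    tau is a per-channel deviation threshold (a free parameter here;
    the paper tunes one per color channel).
    """
    diff = np.abs(bank - frame[None])            # (K, H, W, C)
    fg_per_model = (diff > tau).any(axis=-1)     # (K, H, W)
    # A pixel is foreground only if no background model explains it.
    return fg_per_model.all(axis=0)              # (H, W)
```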
Measuring abstract reasoning in neural networks | Whether neural networks can learn abstract reasoning or whether they merely rely on superficial statistics is a topic of recent debate. Here, we propose a dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test. To succeed at this challenge, models must cope with various generalisation ‘regimes’ in which the training and test data differ in clearly-defined ways. We show that popular models such as ResNets perform poorly, even when the training and test sets differ only minimally, and we present a novel architecture, with a structure designed to encourage reasoning, that does significantly better. When we vary the way in which the test questions and training data differ, we find that our model is notably proficient at certain forms of generalisation, but notably weak at others. We further show that the model’s ability to generalise improves markedly if it is trained to predict symbolic explanations for its answers. Altogether, we introduce and explore ways to both measure and induce stronger abstract reasoning in neural networks. Our freely-available dataset should motivate further progress in this direction. |
Meeting Report: Atmospheric Pollution and Human Reproduction | BACKGROUND
There is a growing body of epidemiologic literature reporting associations between atmospheric pollutants and reproductive outcomes, particularly birth weight and gestational duration.
OBJECTIVES
The objectives of our international workshop were to discuss the current evidence, to identify the strengths and weaknesses of published epidemiologic studies, and to suggest future directions for research.
DISCUSSION
Participants identified promising exposure assessment tools, including exposure models with fine spatial and temporal resolution that take into account time-activity patterns. More knowledge of factors correlated with exposure to air pollution, such as other environmental pollutants with similar temporal variations, and assessment of nutritional factors possibly influencing birth outcomes, would help evaluate the importance of residual confounding. Participants proposed a list of points to report in future publications on this topic to facilitate research syntheses. Nested case-control studies analyzed using two-phase statistical techniques, and the development of cohorts with extensive information on pregnancy behaviors and biological samples, are promising study designs. Issues related to the identification of critical exposure windows and potential biological mechanisms through which air pollutants may lead to intrauterine growth restriction and premature birth were reviewed.
CONCLUSIONS
To make progress, this research field needs input from toxicology, exposure assessment, and clinical research, especially to aid in the identification and exposure assessment of feto-toxic agents in ambient air, in the development of early markers of adverse reproductive outcomes, and in the identification of relevant biological pathways. In particular, additional research using animal models would help better delineate the biological mechanisms underpinning the associations reported in human studies. |
Hand-Eye Calibration | Whenever a sensor is mounted on a robot hand, it is important to know the relationship between the sensor and the hand. The problem of determining this relationship is referred to as the hand-eye calibration problem. Hand-eye calibration is important in at least two types of tasks: (i) to map sensor-centered measurements into the robot workspace frame, and (ii) to allow the robot to precisely move the sensor. In the past, some solutions were proposed in the particular case of the sensor being a TV camera. With almost no exception, all existing solutions attempt to solve a homogeneous matrix equation of the form AX = XB. This paper has the following main contributions. First, we show that there are two possible formulations of the hand-eye calibration problem. One formulation is the classical one that we just mentioned. A second formulation takes the form of the following homogeneous matrix equation: MY = M'YB. The advantage of the latter formulation is that the extrinsic and intrinsic parameters of the camera need not be made explicit. Indeed, this formulation directly uses the perspective matrices M and M' associated with positions of the camera with respect to the calibration frame. Moreover, this formulation, together with the classical one, covers a wider range of camera-based sensors to be calibrated with respect to the robot hand: single scan-line cameras, stereo heads, range finders, etc. Second, we develop a common mathematical framework to solve for the hand-eye calibration problem using either of the two formulations. We represent rotation by a unit quaternion. We present two methods: (i) a closed-form solution for solving for rotation using unit quaternions and then solving for translation, and (ii) a non-linear technique for simultaneously solving for rotation and translation. Third, we perform a stability analysis both for our two methods and for the classical linear method developed by Tsai and Lenz (TL). This analysis allows the comparison of the three methods. In the light of this comparison, the non-linear optimization method that solves for rotation and translation simultaneously seems to be the most robust one with respect to noise and to measurement errors. This work has been supported by the Esprit programme through the SECOND project, Esprit-BRA No. |
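For the classical AX = XB formulation, the closed-form rotation-then-translation recipe can be sketched as follows: stack one linear constraint per motion pair on the unknown rotation quaternion, take the SVD null vector, then solve for translation by least squares. This is a minimal numpy/scipy sketch in the spirit of the quaternion method, with my own conventions and helper names; it ignores the q/−q sign ambiguity that a robust solver must resolve.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def quat_wxyz(R):
    x, y, z, w = Rotation.from_matrix(R).as_quat()  # scipy returns (x, y, z, w)
    return np.array([w, x, y, z])

def left_mat(q):   # L(q) p = q * p  (Hamilton product)
    w, x, y, z = q
    return np.array([[w, -x, -y, -z], [x, w, -z, y],
                     [y, z, w, -x], [z, -y, x, w]])

def right_mat(q):  # R(q) p = p * q
    w, x, y, z = q
    return np.array([[w, -x, -y, -z], [x, w, z, -y],
                     [y, -z, w, x], [z, y, -x, w]])

def hand_eye(As, Bs):
    """As, Bs: lists of 4x4 relative motions (camera, hand) with A X = X B.
    Assumes quaternion signs are consistent across motion pairs."""
    # Rotation: q_A * q_X = q_X * q_B  =>  (L(q_A) - R(q_B)) q_X = 0.
    M = np.vstack([left_mat(quat_wxyz(A[:3, :3])) - right_mat(quat_wxyz(B[:3, :3]))
                   for A, B in zip(As, Bs)])
    qX = np.linalg.svd(M)[2][-1]                       # null vector of stacked system
    RX = Rotation.from_quat([qX[1], qX[2], qX[3], qX[0]]).as_matrix()
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all motions.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([RX @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tX = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4); X[:3, :3] = RX; X[:3, 3] = tX
    return X
```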
A Two-Dimensional Topic-Aspect Model for Discovering Multi-Faceted Topics | This paper presents the Topic-Aspect Model (TAM), a Bayesian mixture model which jointly discovers topics and aspects. We broadly define an aspect of a document as a characteristic that spans the document, such as an underlying theme or perspective. Unlike previous models which cluster words by topic or aspect, our model can generate token assignments in both of these dimensions, rather than assuming words come from only one of two orthogonal models. We present two applications of the model. First, we model a corpus of computational linguistics abstracts, and find that the scientific topics identified in the data tend to include both a computational aspect and a linguistic aspect. For example, the computational aspect of GRAMMAR emphasizes parsing, whereas the linguistic aspect focuses on formal languages. Secondly, we show that the model can capture different viewpoints on a variety of topics in a corpus of editorials about the Israeli-Palestinian conflict. We show both qualitative and quantitative improvements in TAM over two other state-of-the-art topic models. Probabilistic topic models such as LDA (Blei, Ng, and Jordan 2003) have emerged in recent years as a popular approach to uncovering hidden structures in text collections, and offer a powerful way to represent the content of documents. These models, however, typically learn distributions over words along only a single dimension of topicality, and ignore the fact that words may fall along other dimensions such as sentiment, perspective, or theme. Some work has been done to simultaneously model both topics and other types of groupings. For example, in the topic and perspective model (Lin, Xing, and Hauptmann 2008), each word is modeled as having some weight of topicality and perspective (e.g., liberal or conservative), however, this model assumes that all documents are about the same topic. The topic-sentiment mixture model (Mei et al. 2007) models each document as both a mixture of topics and a mixture of different sentiments (i.e. negative/positive), however, words come from either the topic model or the sentiment model rather than a combination of both. In these approaches, there is no inter-dependency of topics and perspectives, and they cannot capture how these perspectives appear in different topics. Recently we have presented a new model, cross-collection latent Dirichlet allocation (ccLDA) (Paul and Girju 2009a), which can both discover topics among multiple text collections as well as differences between them, and we used this to capture the perspectives of different cultures in blogs. Each topic is associated with a probability distribution over words that is shared across all collections, as well as a distribution that is unique to each collection. For example, the topic of FOOD found in documents from different countries might contain the words food and eat among all collections, but curry would be more likely to appear in the India collection. In many settings, however, it is more realistic to assume that a document is a mixture of such aspects, rather than belonging to exactly one aspect. For example, a recipe for curry pizza would contain elements from both Indian and American food, rather than strictly one or the other.
In this paper we introduce a novel topic model, TAM, which not only allows documents to contain multiple aspects, but it can learn these aspects automatically. Unlike ccLDA, the model can be applied to a single collection and can discover patterns without document labels. A common application of topic models is topic discovery in scientific literature (Griffiths and Steyvers 2004), which is useful for browsing large collections of literature. Topic models can also be used to assign research papers to reviewers (Mimno and McCallum 2007). In computational linguistics, (Hall, Jurafsky, and Manning 2008) and (Paul and Girju 2009b) model topics in this field and study their history. These studies, however, have ignored the multi-faceted and interdisciplinary nature of many scientific topics. The only work in this direction we are aware of is our recent work (Paul and Girju 2009b) where we model scientific literature from multiple disciplines such as computational linguistics and linguistics. However, in that approach the fields are modeled independently, whereas TAM incorporates this directly into the model. In this paper we show how TAM can be used for the discovery of multi-faceted scientific topics. Additionally, we model a corpus of editorials on the Israeli-Palestinian conflict. We improve upon studies of this corpus (Lin et al. 2006; Lin, Xing, and Hauptmann 2008) by modeling how different perspectives on this issue affect multiple topics within the data. |
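To make the "two-dimensional token assignment" concrete, here is a deliberately simplified toy generative process: each token independently receives a topic and an aspect, and the word is drawn from a distribution indexed by the pair. The real TAM additionally includes background distributions and switch variables; everything below (names, shapes) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_doc(n_tokens, theta, pi, phi):
    """Toy generative process in the spirit of a topic-aspect model.

    theta: per-document topic distribution, shape (T,)
    pi:    per-document aspect distribution, shape (A,)
    phi:   word distributions, shape (T, A, V), one per (topic, aspect) pair
    """
    doc = []
    for _ in range(n_tokens):
        z = rng.choice(len(theta), p=theta)   # topic assignment
        y = rng.choice(len(pi), p=pi)         # aspect assignment
        w = rng.choice(phi.shape[2], p=phi[z, y])
        doc.append((z, y, w))
    return doc

# Tiny demo with random parameters.
T, A, V = 2, 2, 5
theta = np.ones(T) / T; pi = np.ones(A) / A
phi = rng.dirichlet(np.ones(V), size=(T, A))
print(generate_doc(5, theta, pi, phi))
```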
LEARNING AND TEACHING LANGUAGES ONLINE: A CONSTRUCTIVIST APPROACH | Recent advances in technology have necessitated first new approaches and then new methodologies in the area of foreign language learning and teaching. The Internet and virtual learning environments have diversified the opportunities for school teachers, instructional designers and learners by varying and broadening the alternatives for the learning and teaching of languages. Employing tools and applications other than the classroom and course books in the learning of foreign languages requires reconsidering the pedagogy, methodology, applications, teacher roles, interaction types, and the teaching environment itself. Moreover, the multiple channels through which teaching materials can be delivered mandate a revision of the traditional one-way communication between teachers and learners. The constructivist approach acknowledges this, with its assumptions about learning and knowledge, multiple perspectives and modes of learning, and the complexity of learning environments. The constructivist approach is promising for promoting learners’ language and communicative skills, as well as for fostering their autonomy and their social and interactive skills, contributing to their development into more confident, pro-active and responsible individuals by supporting incentives on diverse media in language learning and teaching. |
TargetVue: Visual Analysis of Anomalous User Behaviors in Online Communication Systems | Users with anomalous behaviors in online communication systems (e.g., email and social media platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. Particularly, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors. |
The Neurological Examination in Aging, Dementia and Cerebrovascular Disease Part 4: Reflexes and Sensory Examination | This four-part series of articles provides an overview of the neurological examination of the elderly patient, particularly as it applies to patients with cognitive impairment, dementia or cerebrovascular disease. The focus is on the method and interpretation of the bedside physical examination; the mental state and cognitive examinations are not covered in this review. Part 1 (featured in the September issue) began with an approach to the neurological examination in normal aging and in disease, and reviewed components of the general physical, head and neck, neurovascular and cranial nerve examinations relevant to aging and dementia. Part 2 (featured in the October issue) covered the motor examination with an emphasis on upper motor neuron signs and movement disorders. Part 3 (featured in the November issue) reviewed the assessment of coordination, balance and gait, and Part 4, featured here, discusses the muscle stretch reflexes, pathological and primitive reflexes, and sensory examination, and offers concluding remarks. Throughout this series, special emphasis is placed on the evaluation and interpretation of neurological signs in light of findings considered normal in the elderly. |
Description and consequences of prescribing off-label antiretrovirals in the Madrid Cohort of HIV-infected children over a quarter of a century (1988-2012). | BACKGROUND
Licensing data for paediatric dosing is often sparse and subsequent studies may result in changes to recommended doses. We measured the extent and consequences of off-label antiretroviral (ARV) use in an HIV-infected paediatric cohort.
METHODS
In this multicentre cohort study involving 318 HIV-infected children and adolescents from the Madrid Cohort, all off-label prescriptions from March 1988 to March 2012 were recorded from the clinical records. The reasons for prescribing ARV off-label, the side effects and the consequences of incorrect dosing of ARVs are discussed.
RESULTS
Among the 318 patients of the cohort, 221 (69%) received off-label ARVs according to EMA licensing at the time of prescription, representing 23% (540) of the 2,353 prescribed ARVs. The main reason for starting an off-label drug was treatment failure. Adverse events led to treatment discontinuation in 12% of the prescriptions. Problems taking the drug led to withdrawal in 5% of cases, more likely when the formulation was not suitable for the child's age (P<0.05). Up to 10% of prescriptions were overdosed and 10% underdosed, defined as 25% above or below the current recommended dose, respectively. Treatment failure occurred significantly more frequently among underdosed compared to overdosed patients (50% versus 26%; P<0.05).
CONCLUSIONS
Off-label use of ARVs was common in our HIV-1 paediatric patients. Adverse events were common but rarely led to withdrawal. Suitable formulation is important in younger children. Pharmacokinetic studies are needed as frequent incorrect dosing may occur when prescribing off-label and underdosing may lead to treatment failure. |
Combining Syntactic and Sequential Patterns for Unsupervised Semantic Relation Extraction. | This work investigates the impact of syntactic features in a completely unsupervised semantic relation extraction experiment. Automated relation extraction deals with identifying semantic relation instances in a text and classifying them according to the type of relation. This task is essential in information and knowledge extraction and in knowledge base population. Supervised relation extraction systems rely on annotated examples [ , – , ] and extract different kinds of features from the training data, and eventually from external knowledge sources. The types of extracted relations are necessarily limited to a pre-defined list. In Open Information Extraction (OpenIE) [ , ] relation types are inferred directly from the data: concept pairs representing the same relation are grouped together and relation labels can be generated from context segments or through labeling by domain experts [ , , ]. A commonly used method [ , ] is to represent entity couples by a pair-pattern matrix, and cluster relation instances according to the similarity of their distribution over patterns. Pattern-based approaches [ , , , , ] typically use lexical context patterns, assuming that the semantic relation between two entities is explicitly mentioned in the text. Patterns can be defined manually [ ], obtained by Latent Relational Analysis [ ], or from a corpus by sequential pattern mining [ , , ]. Previous works, especially in the biomedical domain, have shown that not only lexical patterns, but also syntactic dependency trees can be beneficial in supervised and semi-supervised relation extraction [ , , – ]. Early experiments on combining lexical patterns with different types of distributional information in unsupervised relation clustering did not bring significant improvement [ ]. The underlying difficulty is that while supervised classifiers can learn to weight attributes from different sources, it is not trivial to combine different types of features in a single clustering feature space. In our experiments, we propose to combine syntactic features with sequential lexical patterns for unsupervised clustering of semantic relation instances in the context of (NLP-related) scientific texts. We replicate the experiments of [ ] and augment them with dependency-based syntactic features. We adopt a pair-pattern matrix for clustering relation instances. The task can be described as follows: if a1, a2, b1, b2 are pre-annotated domain concepts extracted from a corpus, we would like to classify concept pairs a = (a1, a2) and b = (b1, b2) in homogeneous groups according to their semantic relation. We need an efficient |
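A minimal version of the pair-pattern clustering pipeline the abstract builds on might look like this: count how often each concept pair occurs with each context pattern, normalize rows, and cluster pairs by the similarity of their pattern distributions. The function below is an assumed reconstruction (scikit-learn agglomerative clustering on L2-normalized rows, so Euclidean distance acts as a cosine proxy), not the authors' code.

```python
from collections import Counter, defaultdict
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_relation_instances(instances, n_clusters=10):
    """instances: list of ((e1, e2), pattern) occurrences, e.g.
    (('treebank', 'parser'), 'X is used to train Y')."""
    counts = defaultdict(Counter)
    for pair, pattern in instances:
        counts[pair][pattern] += 1
    pairs = sorted(counts)
    patterns = sorted({p for c in counts.values() for p in c})
    # Pair-pattern matrix: one row per concept pair, one column per pattern.
    M = np.array([[counts[pr][pt] for pt in patterns] for pr in pairs], float)
    # On unit-normalized rows, Euclidean distance is monotone in cosine distance.
    M /= np.linalg.norm(M, axis=1, keepdims=True)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(M)
    return dict(zip(pairs, labels))
```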
[Episodic memory: from mind to brain]. | Episodic memory is a neurocognitive (brain/mind) system, uniquely different from other memory systems, that enables human beings to remember past experiences. The notion of episodic memory was first proposed some 30 years ago. At that time it was defined in terms of materials and tasks. It was subsequently refined and elaborated in terms of ideas such as self, subjective time, and autonoetic consciousness. This chapter provides a brief history of the concept of episodic memory, describes how it has changed (indeed greatly changed) since its inception, considers criticisms of it, and then discusses supporting evidence provided by (a) neuropsychological studies of patterns of memory impairment caused by brain damage, and (b) functional neuroimaging studies of patterns of brain activity of normal subjects engaged in various memory tasks. I also suggest that episodic memory is a true, even if as yet generally unappreciated, marvel of nature. |
Survey on Vision-based Path Prediction | Path prediction is a fundamental task for estimating how pedestrians or vehicles are going to move in a scene. Because path prediction as a computer vision task uses video as input, various types of information used for prediction, such as the environment surrounding the target and the internal state of the target, need to be estimated from the video in addition to predicting paths. Many prediction approaches that include understanding the environment and the internal state have been proposed. In this survey, we systematically summarize methods of path prediction that take video as input and extract features from the video. Moreover, we introduce datasets used to evaluate path prediction methods quantitatively. |
Origin, Basic Design Philosophy and Evolution | Background to the Design Development of the Hawker Siddeley 748 leading to the Series 2 aircraft, with an Outline of its Principal Features. To build and produce an aircraft which is cheaper than its competitors, has higher performance and sufficient market appeal to enable it to sell in large quantities, was the aim stated in the directive which heralded A. V. Roe's re‐entry into the field of civil aviation. For many years the Company (now the Avro Whitworth Division of Hawker Siddeley Aviation) had concentrated on military types of aeroplanes, but when the Sandys White Paper on Defence appeared in 1957, with its forecast of no more manned military aircraft, the Avro design team began to examine the possibility of building a civil transport. |
Supervised Sequential Classification Under Budget Constraints | In this paper we develop a framework for sequential decision making under budget constraints for multi-class classification. In many classification systems, such as medical diagnosis and homeland security, sequential decisions are often warranted. For each instance, a sensor is first chosen for acquiring measurements, and then, based on the available information, one decides whether to seek more measurements from a new sensor/modality (a "reject" decision) or to terminate by classifying the example based on the available information. Different sensors have varying acquisition costs, and these costs account for delay, throughput or monetary value. Consequently, we seek methods for maximizing the performance of the system subject to budget constraints. We formulate a multi-stage multi-class empirical risk objective and learn sequential decision functions from training data. We show that the reject decision at each stage can be posed as supervised binary classification. We derive bounds for the VC dimension of the multi-stage system to quantify the generalization error. We compare our approach to alternative strategies on several multi-class real world datasets. |
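The stage-wise "classify or acquire more" loop can be illustrated with a toy: train one classifier per prefix of sensors, and at test time stop as soon as the prediction is confident enough. The paper learns the reject rule at each stage as a supervised binary classifier; the fixed confidence threshold below is my simplification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class SequentialClassifier:
    """Toy multi-stage classifier: at each stage, either classify with the
    features acquired so far or pay for the next sensor."""

    def __init__(self, stages, threshold=0.9):
        self.stages = stages          # list of feature-index lists, one per sensor
        self.threshold = threshold
        self.models = []

    def fit(self, X, y):
        cols = []
        for stage in self.stages:     # one model per cumulative sensor prefix
            cols += stage
            self.models.append(LogisticRegression(max_iter=1000).fit(X[:, cols], y))
        return self

    def predict(self, x):
        """x: 1-D numpy array of all features (only 'acquired' ones are read)."""
        cols = []
        for stage, model in zip(self.stages, self.models):
            cols += stage
            proba = model.predict_proba(x[cols].reshape(1, -1))[0]
            # Stop when confident, or when no further sensors remain.
            if proba.max() >= self.threshold or model is self.models[-1]:
                return model.classes_[proba.argmax()]

# Usage: two sensors contributing features [0, 1] and [2, 3].
X = np.random.rand(200, 4); y = (X[:, 0] + X[:, 3] > 1).astype(int)
clf = SequentialClassifier(stages=[[0, 1], [2, 3]]).fit(X, y)
print(clf.predict(X[0]))
```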
The neuroanatomical and functional organization of speech perception | A striking property of speech perception is its resilience in the face of acoustic variability (among speech sounds produced by different speakers at different times, for example). The robustness of speech perception might, in part, result from multiple, complementary representations of the input, which operate in both acoustic-phonetic feature-based and articulatory-gestural domains. Recent studies of the anatomical and functional organization of the non-human primate auditory cortical system point to multiple, parallel, hierarchically organized processing pathways that involve the temporal, parietal and frontal cortices. Functional neuroimaging evidence indicates that a similar organization might underlie speech perception in humans. These parallel, hierarchical processing 'streams', both within and across hemispheres, might operate on distinguishable, complementary types of representations and subserve complementary types of processing. Two long-opposing views of speech perception have posited a basis either in acoustic feature processing or in gestural motor processing; the view put forward here might help reconcile these positions. |
Robot Arm Platform for Additive Manufacturing: Multi-Plane Printing | A conventional 3D printer utilizes horizontal-plane layering to produce a 3D printed part. However, there are drawbacks associated with horizontal-plane layering motions, e.g., the support material needed to print an overhang structure. To enable multi-plane printing, an industrial robot arm platform is proposed for additive manufacturing. The concept being explored is the integration of existing additive manufacturing process technologies with an industrial robot arm to create a 3D printer with a multi-plane layering capability. The objective is to perform multi-plane toolpath motions that leverage the increased capability of the robot arm platform compared to conventional gantry-style 3D printers. This approach enables print layering in multiple planes, whereas existing conventional 3D printers are restricted to a single toolpath plane (e.g. the x-y plane). This integration combines fused deposition modeling techniques, using an extruder head of the type typically used in 3D printing, with a 6 degree-of-freedom robot arm. Here, a Motoman SV3X is used as the robot arm platform. A higher-level controller is used to coordinate the robot and extruder. For the higher-level controller to communicate with the robot arm controller, interface software based on the MotoCom SDK libraries was implemented. The integration of the robotic arm and extruder enables multi-plane toolpath motions to be utilized in the production of 3D printed parts. Using this integrated system, a test block with an overhang structure has been 3D printed without the use of support material. |
Monte-Carlo Tree Search: A New Framework for Game AI | Classic approaches to game AI require either a high quality of domain knowledge, or a long time to generate effective AI behaviour. These two characteristics hamper the goal of establishing challenging game AI. In this paper, we put forward Monte-Carlo Tree Search as a novel, unified framework for game AI. In the framework, randomized explorations of the search space are used to predict the most promising game actions. We demonstrate that Monte-Carlo Tree Search can be applied effectively to (1) classic board-games, (2) modern board-games, and (3) video games. |
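The framework's four canonical steps (selection, expansion, simulation, backpropagation) fit in a short sketch. The following is a generic single-player UCT implementation against an assumed game interface (`legal_moves`, `apply`, `is_terminal`, `reward`); adversarial games additionally need the usual sign alternation during backpropagation.

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.untried = [], state.legal_moves()
        self.visits, self.value = 0, 0.0

def uct_select(node, c=1.4):
    # UCB1: exploit high mean value, explore rarely visited children.
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, n_iter=1000):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one unexplored child.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.apply(move), parent=node)
            child.move = move
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to a terminal state.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_moves()))
        reward = state.reward()   # from the root player's perspective
        # 4. Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```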
Holoprosencephaly and midline facial anomalies: redefining classification and management. | Holoprosencephaly encompasses a series of midline defects of the brain and face. Most cases are associated with severe malformations of the brain which are incompatible with life. At the other end of the spectrum, however, are patients with midline facial defects and normal or near-normal brain development. Although some are mentally retarded, others have the potential for achieving near-normal mentality and a full life expectancy. The latter patients do not fit clearly into the previously defined classification system. Proposed is a new classification focusing on those patients with normal or lobar brain morphology but with a wide range of facial anomalies. The classification aids in planning treatment. Coupled with CT scan findings of the brain and a period of observation, patients unlikely to thrive can be distinguished from those who will benefit from surgical intervention. Repair of the false median cleft lip and palate may suffice in patients with moderate mental retardation. Patients exhibiting normal or near-normal mentality with hypotelorbitism and nasomaxillary hypoplasia can be treated with a simultaneous midface advancement, facial bipartition expansion, and nasal reconstruction. |
Positive externality, increasing returns, and the rise in cybercrimes | Introduction
The meteoric rise in cybercrime has been an issue of pressing concern to our society. According to the Federal Bureau of Investigation (FBI), nine out of 10 U.S. companies experienced computer security incidents in 2005, which led to a loss of $67.2 billion. A survey conducted by IBM found that U.S. businesses worry more about cybercrimes than about physical crimes. Internet-related frauds accounted for 46% of consumer complaints made to the Federal Trade Commission (FTC) in 2005. Total losses of Internet fraud victims reporting to the FTC increased from $205 million in 2003 to $336 million in 2005. In a July 2007 interview with USA Today, McAfee's CEO reported that his company received 3,000--5,000 threat submissions per day from customers and that 10% of them were new.
This paper offers an economic analysis to explain the escalation of cybercrimes. We define cybercrimes as criminal activities in which computers or computer networks are the principal means of committing an offense. Examples include cyber-theft, cyber-trespass, cyber-obscenity, critical infrastructure attacks and cyber-extortions. The most notable features of the cybercrime environment include newness, technology and skill-intensiveness, and a high degree of globalization. Factors such as the wide online availability of hacking tools, information sharing in the cyber-criminal community, the availability of experienced hackers' help to less skillful criminals, and congestion in law enforcement systems produce externality effects within the cyber-criminal community as well as across society and businesses.
We focus on three positive or self-reinforcing feedback systems to examine increasing returns in cybercrime related activities. In this article, we first provide an overview of the positive feedback loops that reinforce cyber-criminals' behavior. Then, we describe mechanisms associated with externality in cybercrime related activities. |
The developing role of prosody in novel word interpretation. | This study examined whether children use prosodic correlates to word meaning when interpreting novel words. For example, do children infer that a word spoken in a deep, slow, loud voice refers to something larger than a word spoken in a high, fast, quiet voice? Participants were 4- and 5-year-olds who viewed picture pairs that varied along a single dimension (e.g., big vs. small flower) and heard a recorded voice asking them, for example, "Can you get the blicket one?" spoken with either meaningful or neutral prosody. The 4-year-olds failed to map prosodic cues to their corresponding meaning, whereas the 5-year-olds succeeded (Experiment 1). However, 4-year-olds successfully mapped prosodic cues to word meaning following a training phase that reinforced children's attention to prosodic information (Experiment 2). These studies constitute the first empirical demonstration that young children are able to use prosody-to-meaning correlates as a cue to novel word interpretation. |
Evaluation of the Spiritual Well-Being Scale in a Sample of Korean Adults | This study explored the psychometric qualities and construct validity of the Spiritual Well-Being Scale (SWBS; Ellison in J Psychol Theol 11:330–340, 1983) using a sample of 470 Korean adults. Two factor analyses, exploratory factor analysis and confirmatory factor analysis, were conducted in order to test the validity of the SWBS. The results of the factor analyses supported the original two-dimensional structure of the SWBS—religious well-being (RWB) and existential well-being (EWB), with method effects associated with negatively worded items. By controlling for method effects, the two-factor structure of the SWBS was confirmed with greater clarity. Further, the differential pattern and magnitude of correlations between the SWB subscales and the religious and psychological variables suggested that the two factors of the SWBS were valid for the Protestant, Catholic, and religiously unaffiliated groups, but not for Buddhists. The Protestant group scored higher in RWB compared to the Buddhist, Catholic, and unaffiliated groups. The Protestant group scored higher in EWB compared to the unaffiliated groups. Future studies may need to include more Buddhist samples to gain solid evidence for the validity of the SWBS in a non-Western religious tradition. |
Crystalline Graphene Nanoribbons with Atomically Smooth Edges via a Novel Physico-Chemical Route | A novel physico-chemical route to produce few-layer graphene nanoribbons with atomically smooth edges is reported, via acid treatment (H2SO4:HNO3) followed by characteristic thermal shock processes involving extremely cold substances. Samples were studied by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Raman spectroscopy and X-ray photoelectron spectroscopy. This method demonstrates the importance of having the nanotubes open ended for an efficient uniform unzipping along the nanotube axis. The average dimensions of these nanoribbons are ca. 210 nm wide and consist of few layers, as observed by transmission electron microscopy. The produced nanoribbons exhibit different chiralities, as observed by high resolution transmission electron microscopy. This method is able to provide graphene nanoribbons with atomically smooth edges which could be used in various applications including sensors, gas adsorption materials, composite fillers, among others. |
Autonomous Automobile Trajectory Tracking for Off-Road Driving: Controller Design, Experimental Validation and Racing | This paper presents a nonlinear control law for an automobile to autonomously track a trajectory, provided in real-time, on rapidly varying, off-road terrain. Existing methods can suffer from a lack of global stability, a lack of tracking accuracy, or a dependence on smooth road surfaces, any one of which could lead to the loss of the vehicle in autonomous off-road driving. This work treats automobile trajectory tracking in a new manner, by considering the orientation of the front wheels - not the vehicle's body - with respect to the desired trajectory, enabling collocated control of the system. A steering control law is designed using the kinematic equations of motion, for which global asymptotic stability is proven. This control law is then augmented to handle the dynamics of pneumatic tires and of the servo-actuated steering wheel. To control vehicle speed, the brake and throttle are actuated by a switching proportional integral (PI) controller. The complete control system consumes a negligible fraction of a computer's resources. It was implemented on a Volkswagen Touareg, "Stanley", the Stanford Racing Team's entry in the DARPA Grand Challenge 2005, a 132 mi autonomous off-road race. Experimental results from Stanley demonstrate the ability of the controller to track trajectories between obstacles, over steep and wavy terrain, through deep mud puddles, and along cliff edges, with a typical root mean square (RMS) crosstrack error of under 0.1 m. In the DARPA National Qualification Event 2005, Stanley was the only vehicle out of 40 competitors to not hit an obstacle or miss a gate, and in the DARPA Grand Challenge 2005 Stanley had the fastest course completion time. |
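The kinematic heart of such a controller can be written in a few lines: steer to cancel the heading error, plus a term that nulls the cross-track error at a rate scaled by speed. In the sketch below, the gain k and the saturation limit are arbitrary placeholders, and the dynamic corrections (tire and servo effects) that the full controller adds are omitted.

```python
import math

def steering_command(psi, e, v, k=1.0, max_steer=math.radians(30)):
    """Kinematic trajectory-tracking steering law.

    psi: heading error of the front wheels w.r.t. the trajectory [rad]
    e:   cross-track error of the front axle [m]
    v:   vehicle speed [m/s]
    """
    delta = psi + math.atan2(k * e, v)   # converge heading and null the offset
    return max(-max_steer, min(max_steer, delta))
```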
Decapitation in suicidal hanging--vital reaction patterns. | Complete or incomplete decapitation as a consequence of suicidal hanging is very rare, with few cases reported in the worldwide literature. Post-hanging decapitation is typically related to a drop of several meters. Three cases of complete decapitation and one case of incomplete decapitation by suicidal hanging are reported, with particular emphasis on internal findings and vital reaction patterns. Personal, circumstantial, autopsy, and toxicological data were analyzed to define the basic characteristics of such extreme injuries. The crucial factors for decapitation itself are the kinetic energy of the falling body, the strength of the human neck tissue, and the diameter and elasticity of the ligature used. The results of our case study suggest Simon's hemorrhage and air embolism as useful autopsy findings in post-hanging beheading cases. Simon's hemorrhage was demonstrated in three of the four cases. The test for air embolism was positive in all four cases. |
Effectiveness of a multi-level asthma intervention in increasing controller medication use: a randomized control trial. | INTRODUCTION
Poor self-management by families is an important factor in explaining high rates of asthma morbidity in Puerto Rico, and for this reason we previously tested a family intervention called CALMA that was found effective in improving most asthma outcomes, but not effective in increasing the use of controller medications. CALMA-plus was developed to address this issue by adding to CALMA, components of provider training and screening for asthma in clinics.
METHODS
Study participants were selected from Medicaid claims data in San Juan, Puerto Rico. After screening, 404 children in eight clinics were selected; clinics were paired and then randomized to CALMA-only or CALMA-plus.
RESULTS
For all three primary outcomes at 12 months, the mean differences between treatment arms were small but in the predicted direction. However, after adjusting for clinic variation, the study failed to demonstrate that the CALMA-plus intervention was more efficacious than the CALMA-only intervention for increasing controller medication use, or decreasing asthma symptoms. Both groups had lower rates of asthma symptoms and service utilization, consistent with previous results of the CALMA-only intervention.
CONCLUSIONS
Low provider compliance with the intervention and training, the small number of clinics available, and the multiple barriers providers face in prescribing medication may have been related to the lack of difference observed between the groups. Future interventions should respond to the limitations of the present study design and provide more resources to providers to increase their participation in training and in the implementation of the intervention. |
A fingerprint pattern classification approach based on the coordinate geometry of singularities | The problem of Automatic Fingerprint Pattern Classification (AFPC) has been studied by many fingerprint biometric practitioners. It is an important concept because, in instances where a relatively large database is being queried for the purposes of fingerprint matching, it serves to reduce the duration of the query. The fingerprint classes discussed in this document are the Central Twins (CT), Tented Arch (TA), Left Loop (LL), Right Loop (RL) and the Plain Arch (PA). The classification rules employed in this problem involve the use of the coordinate geometry of the detected singular points. Using a confusion matrix to evaluate the performance of the fingerprint classifier, a classification accuracy of 83.5% is obtained on the five-class problem. This performance evaluation is done by making use of fingerprint images from one of the databases of the year 2002 version of the Fingerprint Verification Competition (FVC2002). |
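A toy version of a coordinate-geometry rule base over detected singular points might look like the function below. The five class labels follow the abstract, but the specific thresholds and the left/right conventions are invented for illustration and are not the paper's rules.

```python
def classify(cores, deltas, slope_tol=0.15):
    """Toy five-class rule set over singular-point coordinates.

    cores, deltas: lists of (x, y) positions detected in the print.
    Returns one of: 'CT', 'TA', 'LL', 'RL', 'PA'.
    """
    if not cores and not deltas:
        return 'PA'                      # plain arch: no singularities
    if len(cores) >= 2:
        return 'CT'                      # central twins: two core points
    if cores and deltas:
        (cx, cy), (dx, dy) = cores[0], deltas[0]
        # Core nearly vertically above the delta: tented arch.
        if abs(cx - dx) <= slope_tol * (abs(cy - dy) + 1e-9):
            return 'TA'
        # Loop opens away from the delta (convention assumed here).
        return 'LL' if dx > cx else 'RL'
    return 'TA'

print(classify(cores=[(100, 80)], deltas=[(140, 160)]))  # -> 'LL'
```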
Qualitative Research & Evaluation Methods: Integrating Theory and Practice | In what case do you like reading so much? What about the type of the Qualitative Research & Evaluation Methods: Integrating Theory and Practice book? Why do we need to read? Well, everybody has their own reason why they should read some books. Mostly, it will relate to their necessity to get knowledge from the book and the wish to read just to get entertainment. Novels, story books, and other entertaining books have become very popular these days. Besides, scientific books will also be the best reason to choose, especially for students, teachers, doctors, businessmen, and other professionals who are fond of reading. |
The influence of communication goals and physical demands on different dimensions of pain behavior | The purpose of the present research was to examine the influence of communication goals and physical demands on the expression of communicative (e.g., facial grimaces) and protective (e.g., guarding) pain behaviors. Participants with musculoskeletal conditions (N=50) were asked to lift a series of weights under two communication goal conditions. In one condition, participants were asked to estimate the weight of the object they lifted. In a second condition, participants were asked to rate their pain while lifting the same objects. The display of communicative pain behaviors varied as a function of the communication goal manipulation; participants displayed more communicative pain behavior when asked to rate their pain while lifting objects than when they estimated the weight of the object. Protective pain behaviors varied with the physical demands of the task, but not as a function of the communication goals manipulation. Pain ratings and self-reported disability were significantly correlated with protective pain behaviors but not with communicative pain behaviors. The results of this study support the functional distinctiveness of different forms of pain behavior. Findings are discussed in terms of evolutionary and learning theory models of pain behavior. Clinical implications of the findings are addressed. |
Oral Administration of OKT3 MAb to Patients with NASH, Promotes Regulatory T-cell Induction, and Alleviates Insulin Resistance: Results of a Phase IIa Blinded Placebo-Controlled Trial | Oral administration of anti-CD3 antibodies induced regulatory T cells (Tregs), alleviating insulin resistance and liver damage in animal models. Our aim was to determine the safety and biological effects of oral OKT3 monoclonal antibody (Balashov et al. Neurology 55:192–8, 2000) in patients with NASH. In this Phase IIa trial, four groups of patients with biopsy-proven NASH (n = 9/group) received placebo (group A) or oral OKT3 (group B: 0.2; C: 1.0; D: 5.0 mg/day) for 30 days. Patients were followed for safety, liver enzymes, glucose, lipid profile, oral glucose tolerance test (OGTT), serum cytokines and Tregs. Oral OKT3 was well tolerated without treatment-related adverse events. OKT3 induced Tregs, with significant increases of CD4+LAP+ (latency-associated peptide) and CD4+CD25+LAP+ cells in Group D, and a significant increase in TGF-β in Groups C and D. AST decreased significantly in Group D, with a similar trend in Groups B and C. Fasting plasma glucose decreased significantly in all treatment groups compared with placebo. OGTT values decreased significantly in Group D. Correlations were observed between the changes in several immune-modulatory effects and clinical biomarkers. While serum anti-CD3 levels were undetectable, increases in human anti-mouse antibody levels were observed in Groups C and D. Oral administration of anti-CD3 MAb to patients with NASH was safe and well tolerated. Positive biological effects were noted in several hepatic, metabolic and immunologic parameters. These findings provide the basis for future trials to investigate the effect of oral anti-CD3 MAb immunotherapy in patients with NASH. |
A high voltage half bridge gate driver with mismatch-insensitive dead-time generator | A high voltage half-bridge gate driver with a mismatch-insensitive dead-time generator is proposed. The high voltage high-side level shifter with a common-mode noise canceller technique guarantees stable operation at negative output voltage levels. Unlike a conventional dead-time generator, the proposed dead-time generator uses one delay cell to generate the dead-time and shares it between the high-side and low-side paths. The high-side gate driver allows stable negative operation down to a −10.2 V DC level and a −40.5 V peak level at a 15 V power supply. Measurement results of the proposed dead-time generator show 1.7-fold and 16.7-fold improvements in dead-time mismatch when the dead-time is 350 ns (the minimum set value) and 5 μs, respectively, compared to the conventional dead-time generator. |
Don't hide in the crowd!: increasing social transparency between peer workers improves crowdsourcing outcomes | This paper studied how social transparency and different peer-dependent reward schemes (i.e., individual, teamwork, and competition) affect the outcomes of crowdsourcing. The results showed that when social transparency was increased by asking otherwise anonymous workers to share their demographic information (e.g., name, nationality) to the paired worker, they performed significantly better. A more detailed analysis showed that in a teamwork reward scheme, in which the reward of the paired workers depended only on the collective outcomes, increasing social transparency could offset effects of social loafing by making them more accountable to their teammates. In a competition reward scheme, in which workers competed against each other and the reward depended on how much they outperformed their opponent, increasing social transparency could augment effects of social facilitation by providing more incentives for them to outperform their opponent. The results suggested that a careful combination of methods that increase social transparency and different reward schemes can significantly improve crowdsourcing outcomes. |
Memory and Information Processing in Neuromorphic Systems | A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing is organized. As information and communication technologies continue to address the need for increased computational power through the increase of cores within a digital processor, neuromorphic engineers and scientists can complement this need by building processor architectures where memory is distributed with the processing. In this paper, we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multineuron systems to massively parallel asynchronous ones and from purely digital systems to mixed analog/digital systems which implement more biological-like models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. |
INNOVATIVE BEHAVIOR IN THE WORKPLACE : THE ROLE OF PERFORMANCE AND IMAGE OUTCOME EXPECTATIONS | Why do employees engage in innovative behavior at their workplaces? We examine how employees’ innovative behavior is explained by expectations for such behavior to affect job performance (expected positive performance outcomes) and image inside their organizations (expected image risks and expected image gains). We found significant effects of all three outcome expectations on innovative behavior. These outcome expectations, as intermediate psychological processes, were shaped by contextual and individual difference factors, including perceived organization support for innovation, supervisor relationship quality, job requirement for innovativeness, employee reputation as innovative, and individual dissatisfaction with the status quo. |
A Semantic Approach to Discovering Schema Mapping Expressions | In many applications it is important to find a meaningful relationship between the schemas of a source and target database. This relationship is expressed in terms of declarative logical expressions called schema mappings. The more successful previous solutions have relied on inputs such as simple element correspondences between schemas in addition to local schema constraints such as keys and referential integrity. In this paper, we investigate the use of an alternate source of information about schemas, namely the presumed presence of semantics for each table, expressed in terms of a conceptual model (CM) associated with it. Our approach first compiles each CM into a graph and represents each table's semantics as a subtree in it. We then develop algorithms for discovering subgraphs that are plausible connections between those concepts/nodes in the CM graph that have attributes participating in element correspondences. A conceptual mapping candidate is now a pair of source and target subgraphs which are semantically similar. At the end, these are converted to expressions at the database level. We offer experimental results demonstrating that, for test cases of non-trivial mapping expressions involving schemas from a number of domains, the "semantic" approach outperforms the traditional technique in terms of recall and especially precision. |
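The "discover plausible connecting subgraphs" step can be approximated crudely with simple-path enumeration between anchor concepts in the CM graph. The networkx sketch below is one such reading, not the paper's algorithm (which scores semantic trees rather than raw paths); the cutoff and graph are illustrative.

```python
import itertools
import networkx as nx

def candidate_subgraphs(cm_graph, anchors, cutoff=4):
    """Enumerate plausible connections between concept nodes that carry
    attribute correspondences: simple paths up to a length cutoff between
    every anchor pair, each turned into a candidate subgraph."""
    candidates = []
    for a, b in itertools.combinations(anchors, 2):
        for path in nx.all_simple_paths(cm_graph, a, b, cutoff=cutoff):
            candidates.append(cm_graph.subgraph(path))
    return candidates

G = nx.Graph([('Person', 'Employee'), ('Employee', 'Dept'), ('Person', 'Dept')])
for sg in candidate_subgraphs(G, ['Person', 'Dept'], cutoff=2):
    print(list(sg.edges))
```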
The myth of the visual word form area | Recent functional imaging studies have referred to a posterior region of the left midfusiform gyrus as the "visual word form area" (VWFA). We review the evidence for this claim and argue that neither the neuropsychological nor neuroimaging data are consistent with a cortical region specialized for visual word form representations. Specifically, there are no reported cases of pure alexia who have deficits limited to visual word form processing and damage limited to the left midfusiform. In addition, we present functional imaging data to demonstrate that the so-called VWFA is activated by normal subjects during tasks that do not engage visual word form processing such as naming colors, naming pictures, reading Braille, repeating auditory words, and making manual action responses to pictures of meaningless objects. If the midfusiform region has a single function that underlies all these tasks, then it does not correspond to visual word form processing. On the other hand, if the region participates in several functions as defined by its interactions with other cortical areas, then identifying the neural system sustaining visual word form representations requires identification of the set of regions involved. We conclude that there is no evidence that visual word form representations are subtended by a single patch of neuronal cortex and it is misleading to label the left midfusiform region as the visual word form area. |
A Reinforcement Learning Theory for Homeostatic Regulation | Reinforcement learning models address an animal's behavioral adaptation to its changing "external" environment, and are based on the assumption that Pavlovian, habitual and goal-directed responses seek to maximize reward acquisition. Negative-feedback models of homeostatic regulation, on the other hand, are concerned with behavioral adaptation in response to the "internal" state of the animal, and assume that animals' behavioral objective is to minimize deviations of some key physiological variables from their hypothetical setpoints. Building upon the drive-reduction theory of reward, we propose a new analytical framework that integrates learning and regulatory systems, such that the two seemingly unrelated objectives of reward maximization and physiological stability prove to be identical. The proposed theory accounts for behavioral adaptation to both internal and external states in a disciplined way. We further show that the proposed framework allows for a unified explanation of behavioral patterns such as the motivational sensitivity of different associative learning mechanisms, anticipatory responses, interactions among competing motivational systems, and risk aversion. |
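In a drive-reduction reading of this framework, the drive is the distance of the internal state H_t from its setpoint H*, and reward is defined as the reduction of that drive. A standard way to write this is shown below; the exponents m and n are free parameters, and this is a paraphrase consistent with the abstract rather than a quotation of the paper's exact equations.

```latex
D(H_t) = \left( \sum_{i=1}^{N} \left| h_i^{*} - h_{i,t} \right|^{m} \right)^{1/n},
\qquad
r_t = D(H_t) - D(H_{t+1})
```

Maximizing the cumulative sum of r_t then coincides with minimizing deviations from the setpoint, which is exactly the claimed equivalence of reward maximization and physiological stability.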
A new a-Si:H TFT pixel circuit compensating the threshold voltage shift of a-Si:H TFT and OLED for active matrix OLED | We propose a new hydrogenated amorphous silicon thin-film transistor (a-Si:H TFT) pixel circuit for an active matrix organic light-emitting diode (AMOLED) display employing voltage programming. The proposed a-Si:H TFT pixel circuit, which consists of five switching TFTs, one driving TFT, and one capacitor, successfully minimizes the decrease of OLED current caused by the threshold voltage degradation of the a-Si:H TFT and OLED. Our experimental results, based on bias-temperature stress, show that the output current for the OLED decreases by 7% in the proposed pixel, while it decreases by 28% in the conventional 2-TFT pixel. |
Augmented Reality : An Overview and Five Directions for AR in Education | Augmented Reality (AR) is an emerging form of experience in which the Real World (RW) is enhanced by computer-generated content tied to specific locations and/or activities. Over the last several years, AR applications have become portable and widely available on mobile devices. AR is becoming visible in our audio-visual media (e.g., news, entertainment, sports) and is beginning to enter other aspects of our lives (e.g., e-commerce, travel, marketing) in tangible and exciting ways. Facilitating ubiquitous learning, AR will give learners instant access to location-specific information compiled and provided by numerous sources (2009). Both the 2010 and 2011 Horizon Reports predict that AR will soon see widespread use on US college campuses. In preparation, this paper offers an overview of AR, examines recent AR developments, explores the impact of AR on society, and evaluates the implications of AR for learning and education. |
Deep Neural Networks in Machine Translation: An Overview | Deep neural networks (DNNs) are widely used in machine translation (MT). This article gives an overview of DNN applications in various aspects of MT. |
Digital Watermarking and Steganography | Sharing, disseminating, and presenting data in digital format is not just a fad, but it is becoming part of our life. Without careful planning, digitized resources could easily be misused, especially those that are shared across the Internet. Examples of such misuse include use without the owner’s permission, and modification of a digitized resource to fake ownership. One way to prevent such behaviors is to employ some form of copyright protection technique, such as digital watermarks. Digital watermarks refer to the data embedded into a digital source (e.g., images, text, audio, or video recording). They are similar to watermarks in printed materials as a message inserted into the host media typically becomes an integral part of the media. Apart from traditional watermarks in printed forms, digital watermarks may also be invisible, may be in the forms other than graphics, and may be digitally removed. |
ParlAI: A Dialog Research Software Platform | We introduce ParlAI (pronounced “parlay”), an open-source software platform for dialog research implemented in Python, available at http://parl.ai. Its goal is to provide a unified framework for sharing, training and testing dialog models; integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others’ models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs. |
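ParlAI's unifying idea is that datasets ("teachers") and models are all agents exchanging message dictionaries through observe/act calls. The sketch below is a deliberately plain-Python rendering of that abstraction to show the shape of the loop; it does not use the actual parlai package API, and the message fields shown are illustrative:

```python
# conceptual sketch of the teacher/student observe-act loop, not parlai code
class EchoAgent:
    """A trivial 'model' agent: observes a message, acts with a reply."""
    def observe(self, msg):
        self.last = msg

    def act(self):
        return {"text": f"echo: {self.last['text']}"}

# a 'teacher' is just another agent emitting examples as message dicts
teacher_msgs = [{"text": "What is 2+2?", "labels": ["4"]}]

student = EchoAgent()
for msg in teacher_msgs:
    student.observe(msg)       # teacher's output becomes student's input
    print(student.act()["text"])
```

The appeal of the design is that swapping SQuAD for bAbI, or a seq2seq model for a memory network, changes which agents are plugged into the loop but not the loop itself.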
The relationship between diabetes and infectious hospitalizations in renal transplant recipients. | The implications of diabetes developing after kidney transplantation (PostTDM group) need further study. Worse graft survival is seen in PostTDM compared with nondiabetic (NonDM group) patients (1,2). Yet there is a paucity of data on the risk of infection, an issue that becomes even more important since posttransplant patients are already immunocompromised. Thus, we aimed to compare the risk of infection in the posttransplant period among PostTDM patients, patients who were diagnosed with diabetes before transplant (PreTDM group), and those without diabetes.
Data were obtained from the U.S. Renal Data System (3). The study population was limited to adult renal transplant patients whose primary payer was Medicare, to enable analysis of hospital outcomes. Primary (i.e., first-time) kidney transplant recipients from the years 1995 to 2000 were included. Pancreas recipients were excluded.
The diagnosis of diabetes was obtained from Medicare claims using a previously validated method and from U.S. Renal Data System transplant files (2,4). Patients were classified as PreTDM if diabetes was detected before, or at the time of, hospitalization for transplantation. The remaining diabetic patients were classified as PostTDM.
The outcome of interest was infection requiring hospitalization (HI) occurring in the posttransplant period. Survival time until development of HI was analyzed using multivariate Cox proportional hazards … |
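For readers unfamiliar with the method, a Cox proportional hazards fit of time-to-first-infection on diabetes status might look like the sketch below (using the lifelines package). The column names and toy values are hypothetical stand-ins, not U.S. Renal Data System variables:

```python
import pandas as pd
from lifelines import CoxPHFitter

# hypothetical data: follow-up time (years), infection event indicator, and
# diabetes-status dummies relative to the NonDM reference group
df = pd.DataFrame({
    "time_to_infection": [0.5, 2.1, 3.0, 1.2, 4.5, 0.8, 2.7, 3.9, 1.6, 5.0],
    "infected":          [1,   0,   1,   1,   0,   1,   0,   1,   0,   1],
    "pre_tdm":           [1,   0,   0,   1,   0,   0,   1,   0,   0,   0],
    "post_tdm":          [0,   0,   1,   0,   0,   1,   0,   0,   1,   0],
    "age":               [54,  61,  47,  66,  39,  58,  63,  51,  70,  44],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_infection", event_col="infected")
cph.print_summary()  # hazard ratios for PreTDM and PostTDM vs. NonDM
```

Exponentiated coefficients from such a fit are the hazard ratios one would report for each diabetes group relative to nondiabetic recipients.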
Bilingual Word Embeddings from Non-Parallel Document-Aligned Data Applied to Bilingual Lexicon Induction | We propose a simple yet effective approach to learning bilingual word embeddings (BWEs) from non-parallel document-aligned data (based on the omnipresent skip-gram model), and its application to bilingual lexicon induction (BLI). We demonstrate the utility of the induced BWEs in the BLI task by reporting on benchmarking BLI datasets for three language pairs: (1) We show that our BWE-based BLI models significantly outperform the MuPTM-based and context-counting models in this setting, and obtain the best reported BLI results for all three tested language pairs; (2) We also show that our BWE-based BLI models outperform other BLI models based on recently proposed BWEs that require parallel data for bilingual training. |
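As we understand the setup, the core trick is to turn each aligned document pair into a single pseudo-bilingual document and then train an ordinary monolingual skip-gram model on the merged corpus. A toy sketch with gensim; the naive shuffle, the language-prefix convention, and the miniature corpus are illustrative simplifications of the paper's merging strategies:

```python
import random
from gensim.models import Word2Vec

def merge_and_shuffle(doc_l1, doc_l2, seed=0):
    """Build one pseudo-bilingual document from an aligned document pair
    (a simple merging strategy; the paper compares several)."""
    merged = doc_l1 + doc_l2
    random.Random(seed).shuffle(merged)
    return merged

# toy aligned documents; prefixes keep the two vocabularies distinct
pairs = [(["en:dog", "en:barks"], ["it:cane", "it:abbaia"]),
         (["en:cat", "en:sleeps"], ["it:gatto", "it:dorme"])]
corpus = [merge_and_shuffle(a, b, seed=i) for i, (a, b) in enumerate(pairs)]

# plain monolingual skip-gram on the merged corpus yields a shared space
model = Word2Vec(corpus, vector_size=32, window=4, min_count=1, sg=1, epochs=50)

# bilingual lexicon induction = nearest cross-lingual neighbour lookup
print(model.wv.most_similar("en:dog", topn=3))
```

Because source and target words co-occur inside the same pseudo-document, the skip-gram objective places translation pairs near each other without ever seeing parallel sentences.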
Meibomian gland diagnostic expressibility: correlation with dry eye symptoms and gland location. | PURPOSE
To determine (1) if the number of meibomian glands yielding liquid secretion (MGYLS) is correlated with dry eye symptoms and (2) the mean number of MGYLS in the nasal, central, and temporal regions of the lower eyelid in a random clinical sample.
METHODS
Subjects presenting for routine eye examinations were recruited (n = 133; 90 females, 43 males; mean age = 50.3 ± 14.7 years). The sample included symptomatic and asymptomatic individuals, as classified using the Standard Patient Evaluation of Eye Dryness questionnaire. Meibomian gland evaluations were performed using a standardized technique and diagnostic instrument.
RESULTS
The mean number of MGYLS in the lower eyelid correlated with dry eye symptoms, p = 0.0002. The mean numbers of MGYLS in each third of the lower eyelid were significantly different, p ≤ 0.0001: temporal = 0.27 ± 0.06, central = 2.14 ± 0.13, and nasal = 3.10 ± 0.15. The temporal third of the lower lid was 14 times as likely as the nasal third to have zero MGYLS; 86% of temporal versus 6% of nasal thirds had zero MGYLS.
CONCLUSIONS
This is the first report to document the following: (1) a correlation between the number of MGYLS in the lower eyelid and dry eye symptoms; (2) the number of MGYLS varies significantly across the lower eyelid, with the highest number of MGYLS in the nasal third and the lowest number of MGYLS in the temporal third of the lower eyelid; and (3) instrumentation to standardize diagnostic meibomian gland expression is desirable if not mandatory for the evaluation of meibomian gland function. |
Effect of device-guided breathing exercises on blood pressure in patients with hypertension: a randomized controlled trial. | OBJECTIVE
Hypertension is a chronic disorder with a high prevalence worldwide. Despite considerable efforts, it is sometimes hard to reach treatment goals for blood pressure (BP) with classical treatment options. Reducing breathing frequency has been advocated as a method to reduce BP.
METHODS
A randomized, single-blind, controlled trial was conducted in 30 non-diabetic patients with hypertension over a period of 9 weeks to evaluate the effect of a device that helps to slow breathing (Resperate) on BP and quality of life (QoL). The control group listened to music and used no other therapeutic device.
RESULTS
There was no significant difference in change in BP between intervention and control; BP -4.2 mmHg (95% CI -12.4 to 3.9)/-2.6 mmHg (95% CI -8.4 to 3.3). This result did not alter in post hoc analyses, when patients not achieving target breathing frequency (<10 breaths/min) or non-compliant patients were excluded. QoL did not change over time.
CONCLUSIONS
We found no effect of the Resperate on BP or QoL compared with the control group. We conclude that, at this moment, this device has no added value in the treatment of hypertension. |
Exploring consumers' acceptance of mobile payments - an empirical study | With the growing impetus of the wireless revolution and the rapid increase of mobile devices, it is evident that mobile commerce and payment are becoming a critical component of the new digital economy. Mobile devices are being transformed from simple communication devices into payment platforms. This paper looks into the status of mobile payment in Saudi Arabia in terms of consumers' acceptance of, and concerns about, mobile payments. This study showed that there is a bright future for m-payment in Saudi Arabia, as the majority of respondents showed their willingness to participate in such an activity. The security of mobile payment transactions and the unauthorised use of mobile phones to make a payment are major concerns for mobile phone users. Drawing on the survey results, a list of recommendations is provided that would be of importance to mobile industry stakeholders in Saudi Arabia and other countries with similar market and consumer characteristics. |
A Wideband Frequency Tunable Optoelectronic Oscillator Incorporating a Tunable Microwave Photonic Filter Based on Phase-Modulation to Intensity-Modulation Conversion Using a Phase-Shifted Fiber Bragg Grating | An optically tunable optoelectronic oscillator (OEO) with a wide frequency tunable range incorporating a tunable microwave photonic filter implemented based on phase-modulation to intensity-modulation conversion using a phase-shifted fiber Bragg grating (PS-FBG) is proposed and experimentally demonstrated. The PS-FBG in conjunction with two optical phase modulators in the OEO loop form a high-Q, wideband and frequency-tunable microwave photonic bandpass filter, to achieve simultaneously single-frequency selection and frequency tuning. Since the tuning of the microwave filter is achieved by tuning the wavelength of the incident light wave, the tunability can be easily realized at a high speed. A theoretical analysis is performed, which is verified by an experiment. A microwave signal with a frequency tunable from 3 GHz to 28 GHz is generated. To the best of our knowledge, this is the widest frequency tunable range ever achieved by an OEO. The phase noise performance of the OEO is also investigated. |
DARTS: Deceiving Autonomous Cars with Toxic Signs | Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses. |
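The sketch below is not the paper's attack pipeline (which adds physical-realizability constraints and, for the Lenticular Printing attack, an optical effect with no digital analogue); it shows only the generic targeted gradient step underlying adversarial examples, and how the Out-of-Distribution variant differs simply in starting the perturbation from an arbitrary point in image space rather than from a dataset image. Model, sizes, and target are illustrative:

```python
import torch
import torch.nn.functional as F

def fgsm_step(model, x, target, eps=0.03):
    """One targeted gradient step: nudge x so the model prefers `target`
    (a generic recipe, not the paper's specific attack)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    # descend the loss toward the target class, keep pixels in [0, 1]
    return (x - eps * x.grad.sign()).clamp(0, 1).detach()

# stand-in classifier; a real sign recognizer would be a trained CNN
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
target = torch.tensor([3])              # desired misclassification

x_in  = torch.rand(1, 3, 32, 32)        # In-Distribution: start from a sign image
x_out = torch.rand(1, 3, 32, 32)        # Out-of-Distribution: arbitrary start point
adv = fgsm_step(model, x_out, target)
print(model(adv).argmax(dim=1))         # class the model now predicts
```

The In/Out-of-Distribution distinction is entirely in the starting point; the optimization machinery is the same, which is why OOD starting points enlarge the attack surface.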
De-identification of patient notes with recurrent neural networks | Objective
Patient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information that need to be removed to de-identify patient notes. Manual de-identification is impractical given the size of electronic health record databases, the limited number of researchers with access to non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value.
Materials and Methods
We introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset.
Results
Our ANN model outperforms the state-of-the-art systems. It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 98.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.21.
Conclusion
Our findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no manual feature engineering. |
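As a quick sanity check, the reported F1 scores follow from the stated precision and recall via the harmonic mean F1 = 2PR/(P + R):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(98.32, 97.38), 2))  # 97.85 -- the i2b2 2014 figure
print(round(f1(99.21, 99.25), 2))  # 99.23 -- the MIMIC figure
```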
Validation, Verification, and Testing of Computer Software | Software quality is achieved through the application of development techniques and the use of verification procedures throughout the development process. Careful consideration of specific quality attributes and validation requirements leads to the selection of a balanced collection of review, analysis, and testing techniques for use throughout the life cycle. This paper surveys current verification, validation, and testing approaches and discusses their strengths, weaknesses, and life-cycle usage. In conjunction with these, the paper describes automated tools used to implement validation, verification, and testing. In the discussion of new research thrusts, emphasis is given to the continued need to develop a stronger theoretical basis for testing and the need to employ combinations of tools and techniques that may vary over each application. |
Mindfulness, self-compassion, and happiness in non-meditators: A theoretical and empirical examination | This study examined relationships between mindfulness and indices of happiness and explored a five-factor model of mindfulness. Previous research using this mindfulness model has shown that several facets predicted psychological well-being (PWB) in meditating and non-meditating individuals. The current study tested the hypothesis that the prediction of PWB by mindfulness would be augmented and partially mediated by self-compassion. Participants were 27 men and 96 women (mean age = 20.9 years). All completed self-report measures of mindfulness, PWB, personality traits (NEO-PI-R), and self-compassion. Results show that mindfulness is related to psychologically adaptive variables and that self-compassion is a crucial attitudinal factor in the mindfulness–happiness relationship. Findings are interpreted from the humanistic perspective of a healthy personality. |
SCNN: An accelerator for compressed-sparse convolutional neural networks | Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs, especially in mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and zero-valued activations that arise from the common ReLU operator. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to a multiplier array, where they are extensively reused; product accumulation is performed in a novel accumulator array. On contemporary neural networks, SCNN can improve both performance and energy by a factor of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator. |
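The gist of the dataflow can be sketched in a few lines: keep only the nonzero weights and nonzero activations, form their Cartesian product, and scatter each product into an accumulator addressed by the output coordinate. A 1-D toy version of that idea (the accelerator does this per tile, in hardware, with compressed encodings rather than Python lists):

```python
import numpy as np

def sparse_conv1d(act, wgt, out_len):
    """Multiply only nonzero activations by nonzero weights and scatter the
    products into output-indexed accumulators -- the compressed-sparse idea."""
    acc = np.zeros(out_len)
    nz_a = [(i, a) for i, a in enumerate(act) if a != 0]  # compressed activations
    nz_w = [(j, w) for j, w in enumerate(wgt) if w != 0]  # compressed weights
    for i, a in nz_a:
        for j, w in nz_w:
            out = i - j                  # output coordinate of this product
            if 0 <= out < out_len:
                acc[out] += a * w
    return acc

act = np.array([0, 3, 0, 0, 2, 0])   # post-ReLU activations (mostly zero)
wgt = np.array([1, 0, -1])           # pruned weights (mostly zero)
print(sparse_conv1d(act, wgt, len(act) - len(wgt) + 1))  # [0. 3. -2. 0.]
```

The multiply count is (#nonzero activations) × (#nonzero weights) instead of the dense product of lengths, which is where the performance and energy gains come from.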
Orthognathic surgery in the presence of temporomandibular dysfunction: what happens next? | Temporomandibular joint internal derangement is one of the best-known yet perhaps most controversial conditions affecting temporomandibular dysfunction (TMD), the signs and symptoms of facial pain, and clinical outcomes after orthognathic surgery procedures. This article provides an overview of the mutual relationship between orthognathic surgery and TMD, with special consideration of internal derangement. The existing literature is reviewed and analyzed, and the pertinent findings are summarized. The objective is to guide oral and maxillofacial surgeons in their clinical decision making when contemplating orthognathic surgery in patients with preexisting TMD. |
Emotional memories are not all created equal: evidence for selective memory enhancement. | Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events. |
SciPass : a 100 Gbps capable secure Science DMZ using OpenFlow and Bro | In this paper, we describe a 100 Gbps-capable, OpenFlow-based Science DMZ approach which combines adaptive IDS load balancing, dynamic traffic filtering, and a novel IDS-based technique to detect "good" traffic flows and forward them around performance-challenged institutional firewalls. Evaluation of this approach was conducted using GridFTP and Iperf3. Results indicate this is a viable approach to enhance science data transfer performance and reduce security hardware costs. |
Second Use of Transportation Batteries: Maximizing the Value of Batteries for Transportation and Grid Services | Plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) are expected to gain significant market share in the next few decades. The economic viability of such vehicles is contingent upon the availability of cost-effective batteries with high power and energy density. For initial commercial success, government subsidies will be instrumental in allowing PHEVs and EVs to gain a foothold. However, in the long term, for electric vehicles to be commercially viable, the economics have to be self-sustaining. Toward the end of the battery's life in the vehicle, the energy capacity left in the battery is not sufficient to provide the designed range for the vehicle. Typically, automotive manufacturers recommend battery replacement when the remaining energy capacity reaches 70%-80%. There is still sufficient power (kilowatts) and energy capacity (kilowatt-hours) left in the battery to support various grid ancillary services such as balancing, spinning reserve, and load following. As renewable energy penetration increases, the need for such balancing services is expected to increase. This work explores the optimal timing of replacement of transportation batteries so that they can subsequently be used for grid services. This analysis maximizes the value of an electric vehicle battery used first as a transportation battery (in its first life) and then as a resource for providing grid services (in its second life). The results are presented across a range of key parameters, such as depth of discharge (DOD), number of batteries used over the life of the vehicle, battery life in the vehicle, battery state of health (SOH) at the end of life in the vehicle, and ancillary services rate. The results provide valuable insights for the automotive industry into maximizing the utility and the value of vehicle batteries in an effort to either reduce the selling price of EVs and PHEVs or maximize the profitability of the emerging electrification of transportation. |
Deep Learning for Encrypted Traffic Classification: An Overview | Traffic classification has been studied for two decades and applied to a wide range of applications, from QoS provisioning and billing in ISPs to security-related applications in firewalls and intrusion detection systems. Port-based methods, deep packet inspection, and classical machine learning have been used extensively in the past, but their accuracy has declined due to dramatic changes in Internet traffic, particularly the increase in encrypted traffic. With the proliferation of deep learning methods, researchers have recently investigated these methods for the traffic classification task and reported high accuracy. In this article, we introduce a general framework for deep-learning-based traffic classification. We present commonly used deep learning methods and their application in traffic classification tasks. Then, we discuss open problems and their challenges, as well as opportunities for traffic classification. |
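As an illustration of the kind of model such a framework covers, one common design in this literature feeds the first N raw bytes of a flow to a small 1-D CNN, sidestepping hand-crafted features that encryption defeats. The sketch below (PyTorch, with hypothetical sizes and class count) is a generic example, not any single surveyed paper's architecture:

```python
import torch
import torch.nn as nn

N_BYTES, N_CLASSES = 784, 12   # illustrative: bytes per flow, traffic classes

# minimal 1-D CNN over normalized raw payload bytes
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=25, padding=12), nn.ReLU(),
    nn.MaxPool1d(3),
    nn.Conv1d(32, 64, kernel_size=25, padding=12), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, N_CLASSES),
)

# a batch of 8 flows, each represented by its first N_BYTES bytes in [0, 1]
x = torch.randint(0, 256, (8, 1, N_BYTES)).float() / 255.0
logits = model(x)
print(logits.shape)  # torch.Size([8, 12]): one score per traffic class
```

Because the model consumes bytes directly, it can still learn discriminative patterns (handshake structure, packet-size rhythm) even when payload content is encrypted.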
Practice Developments in Budgeting : An Overview and Research Perspective | Practitioners in Europe and the U.S. recently have proposed two distinct approaches to address what they believe are shortcomings of traditional budgeting practices. One approach advocates improving the budgeting process and primarily focuses on the planning problems with budgeting. The other advocates abandoning the budget and primarily focuses on the performance evaluation problems with budgeting. This paper provides an overview and research perspective on these two recent developments. We discuss why practitioners have become dissatisfied with budgets, describe the two distinct approaches, place them in a research context, suggest insights that may aid the practitioners, and use the practitioner perspectives to identify fruitful areas for research. INTRODUCTION Budgeting is the cornerstone of the management control process in nearly all organizations, but despite its widespread use, it is far from perfect. Practitioners express concerns about using budgets for planning and performance evaluation. The practitioners argue that budgets impede the allocation of organizational resources to their best uses and encourage myopic decision making and other dysfunctional budget games. They attribute these problems, in part, to traditional budgeting's financial, top-down, command-and-control orientation as embedded in annual budget planning and performance evaluation processes (e.g., Schmidt 1992; Bunce et al. 1995; Hope and Fraser 1997, 2000, 2003; Wallander 1999; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001). We demonstrate practitioners' concerns with budgets by describing two practice-led developments: one advocating improving the budgeting process, the other abandoning it. These developments illustrate two points. First, they show practitioners' concerns with budgeting problems that the scholarly literature has largely ignored while focusing instead on more traditional issues like participative budgeting. (For example, Comshare (2000) surveyed financial executives about their current experience with their organizations' budgeting processes. One hundred thirty of the 154 participants (84 percent) identified 332 frustrations with their organizations' budgeting processes, an average of 2.6 frustrations per person.) Second, the two conflicting developments illustrate that firms face a critical decision regarding budgeting: maintain it, improve it, or abandon it? Our discussion has two objectives. First, we demonstrate the level of concern with budgeting in practice, suggesting its potential for continued scholarly research. Second, we wish to raise academics' awareness of apparent disconnects between budgeting practice and research. We identify areas where prior research may aid the practitioners and, conversely, use the practitioners' insights to suggest areas for research. In the second section, we review some of the most common criticisms of budgets in practice. The third section describes and analyzes the main thrust of two recent practice-led developments in budgeting. In the fourth section, we place these two practice developments in a research context and suggest research that may be relevant to the practitioners.
The fifth section turns the tables by using the practitioner insights to offer new perspectives for research. In the sixth section, we conclude. PROBLEMS WITH BUDGETING IN PRACTICE The ubiquitous use of budgetary control is largely due to its ability to weave together all the disparate threads of an organization into a comprehensive plan that serves many different purposes, particularly performance planning and ex post evaluation of actual performance vis-à-vis the plan. Despite performing this integrative function and laying the basis for performance evaluation, budgetary control has many limitations, such as its long-established and oft-researched susceptibility to induce budget games or dysfunctional behaviors (Hofstede 1967; Onsi 1973; Merchant 1985b; Lukka 1988). A recent report by Neely et al. (2001), drawn primarily from the practitioner literature, lists the 12 most cited weaknesses of budgetary control as: 1. Budgets are time-consuming to put together; 2. Budgets constrain responsiveness and are often a barrier to change; 3. Budgets are rarely strategically focused and often contradictory; 4. Budgets add little value, especially given the time required to prepare them; 5. Budgets concentrate on cost reduction and not value creation; 6. Budgets strengthen vertical command-and-control; 7. Budgets do not reflect the emerging network structures that organizations are adopting; 8. Budgets encourage gaming and perverse behaviors; 9. Budgets are developed and updated too infrequently, usually annually; 10. Budgets are based on unsupported assumptions and guesswork; 11. Budgets reinforce departmental barriers rather than encourage knowledge sharing; and 12. Budgets make people feel undervalued. (For example, in their review of nearly 2,000 research and professional articles in management accounting in the 1996–2000 period, Selto and Widener (2001) document several areas of "fit" and "misfit" between practice and research. They document that more research than practice exists in the area of participative budgeting and state that "[this] topic appears to be of little current, practical interest, but continues to attract research efforts, perhaps because of the interesting theoretical issues it presents." Selto and Widener (2001) also document virtually no research on activity-based budgeting (one of the practice-led developments we discuss in this paper) and planning and forecasting, although these areas have grown in practice coverage each year during the 1996–2000 period.) While not all would agree with these criticisms, other recent critiques (e.g., Schmidt 1992; Hope and Fraser 1997, 2000, 2003; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001) also support the perception of widespread dissatisfaction with budgeting in practice. We synthesize the sources of dissatisfaction as follows. Claims 1, 4, 9, and 10 relate to the recurring criticism that by the time budgets are used, their assumptions are typically outdated, reducing the value of the budgeting process. A more radical version of this criticism is that conventional budgets can never be valid because they cannot capture the uncertainty involved in rapidly changing environments (Wallander 1999). In more conceptual terms, the operation of a useful budgetary control system requires two related elements.
First, there must be a high degree of operational stability so that the budget provides a valid plan for a reasonable period of time (typically the next year). Second, managers must have good predictive models so that the budget provides a reasonable performance standard against which to hold managers accountable (Berry and Otley 1980). Where these criteria hold, budgetary control is a useful control mechanism, but for organizations that operate in more turbulent environments, it becomes less useful (Samuelson 2000). Claims 2, 3, 5, 6, and 8 relate to another common criticism that budgetary controls impose a vertical command-and-control structure, centralize decision making, stifle initiative, and focus on cost reductions rather than value creation. As such, budgetary controls often impede the pursuit of strategic goals by supporting such mechanical practices as last-year-plus budget setting and across-the-board cuts. Moreover, the budget's exclusive focus on annual financial performance causes a mismatch with operational and strategic decisions that emphasize nonfinancial goals and cut across the annual planning cycle, leading to budget games involving skillful timing of revenues, expenditures, and investments (Merchant 1985a). Finally, claims 7, 11, and 12 reflect organizational and people-related budgeting issues. The critics argue that vertical, command-and-control, responsibility center-focused budgetary controls are incompatible with flat, network, or value chain-based organizational designs and impede empowered employees from making the best decisions (Hope and Fraser 2003). Given such a long list of problems and many calls for improvement, it seems odd that the vast majority of U.S. firms retain a formal budgeting process (97 percent of the respondents in Umapathy [1987]). One reason that budgets may be retained in most firms is that they are so deeply ingrained in an organization's fabric (Scapens and Roberts 1993). "They remain a centrally coordinated activity (often the only one) within the business" (Neely et al. 2001, 9) and constitute "the only process that covers all areas of organizational activity" (Otley 1999). However, a more recent survey of Finnish firms found that although 25 percent are retaining their traditional budgeting system, 61 percent are actively upgrading their system, and 14 percent are either abandoning budgets or at least considering it (Ekholm and Wallin 2000). We discuss two practice-led developments that illustrate proposals to improve budgeting or to abandon it. Although the two developments reach different conclusions, both originated in the same organization, the Consortium for Advanced Manufacturing-International (CAM-I); one in the U.S. and the other in Europe. (We note that there are several factors that inevitably contribute to the seemingly negative evaluation of budgetary controls. First, given information asymmetries, budgets operate under second-best conditions in most organizations. Second, information is costly. Finally, unlike the costs, the benefits of budgeting are indirect, and thus, less salient.) The U |
Efficacy and tolerability of Boswellia serrata extract in treatment of osteoarthritis of knee--a randomized double blind placebo controlled trial. | Osteoarthritis is a common, chronic, progressive, skeletal, degenerative disorder, which commonly affects the knee joint. The Boswellia serrata tree is commonly found in India, and the therapeutic value of its gum (guggulu) has long been known: it possesses good anti-inflammatory, anti-arthritic and analgesic activity. A randomized double blind placebo controlled crossover study was conducted to assess the efficacy, safety and tolerability of Boswellia serrata Extract (BSE) in 30 patients with osteoarthritis of the knee, 15 each receiving active drug or placebo for eight weeks. After the first intervention, a washout period was given and then the groups were crossed over to receive the opposite intervention for eight weeks. All patients receiving drug treatment reported a decrease in knee pain, increased knee flexion and increased walking distance. The frequency of swelling in the knee joint was decreased. Radiologically there was no change. The observed differences between the drug-treated and placebo groups were statistically significant and clinically relevant. BSE was well tolerated by the subjects except for minor gastrointestinal ADRs. BSE is recommended for patients with osteoarthritis of the knee, with possible therapeutic use in other forms of arthritis. |
A Survey on Internet Traffic Archival Systems | With the popularity of Internet applications and the widespread use of mobile Internet, Internet traffic has maintained rapid growth over the past decades. Internet traffic archival systems (ITAS) for packets or flow records are more and more widely used in network monitoring, network troubleshooting, and user behavior and experience analysis. In this paper, we survey the design and implementation of several typical traffic archival systems. We analyze and compare the architectures and key technologies underlying Internet traffic archival systems, summarize those key technologies, which include packet/flow capture, packet/flow storage, and bitmap index encoding algorithms, and dive into packet/flow capture technologies in particular. Then, we propose the design and implementation of the TiFaflow traffic archival system. Finally, we summarize and discuss the future direction of Internet traffic archival systems. |
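For readers new to the indexing side, an equality-encoded bitmap index keeps one bit vector per distinct key value, so multi-predicate queries reduce to bitwise operations over those vectors. A toy sketch of the idea (real archival systems compress the vectors with WAH-style run-length encodings, omitted here):

```python
from collections import defaultdict

def build_bitmap_index(values):
    """Equality-encoded bitmap index: one bit vector (as a Python int)
    per distinct key value; bit r is set if row r holds that value."""
    index = defaultdict(int)
    for row, v in enumerate(values):
        index[v] |= 1 << row
    return dict(index)

ports = [80, 443, 80, 22, 443, 80]          # one column of flow records
idx = build_bitmap_index(ports)

hits = idx[80] | idx[443]                   # query: port == 80 OR port == 443
print([r for r in range(len(ports)) if hits >> r & 1])  # matching row ids
```

Because AND/OR over packed bits touches many rows per machine word, such indexes answer selective queries over billions of flow records far faster than scanning.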
A machine-learning classifier implemented in a standard 6T SRAM array | This paper presents a machine-learning classifier where the computation is performed within a standard 6T SRAM array. This eliminates explicit memory operations, which otherwise pose energy/performance bottlenecks, especially for emerging algorithms (e.g., from machine learning) that exhibit a high ratio of memory accesses. We present an algorithm and prototype IC (in 130nm CMOS), where a 128×128 SRAM array performs storage of classifier models and complete classifier computations. We demonstrate a real application, namely digit recognition from MNIST-database images. The accuracy is equal to that of a conventional (ideal) digital/SRAM system, yet with 113× lower energy. The approach achieves accuracy >95% with a full feature set (i.e., 28×28=784 image pixels), and 90% when reduced to 82 features (as demonstrated on the IC due to area limitations). The energy per 10-way digit classification is 633pJ at a speed of 50MHz. |
Cheat-Proofing Dead Reckoned Multiplayer Games (Extended Abstract) | The multiplayer game (MPG) market is segmented into a handful of readily identifiable genres, the most popular being first-person shooters, realtime strategy games, and role-playing games. First-person shooters (FPS) such as Quake [11], Half-Life [17], and Unreal Tournament [9] are fast-paced conflicts between up to thirty heavily armed players. Players in realtime strategy (RTS) games like Command & Conquer [19], StarCraft [8], and Age of Empires [18] or role-playing games (RPGs) such as Diablo II [7] command tens or hundreds of units in battle against up to seven other players. Persistent virtual worlds such as Ultima Online [2], Everquest [12], and Lineage [14] encompass hundreds of thousands of players at a time (typically served by multiple servers). Cheating has always been a problem in computer games, and when prizes are involved it can become a contractual issue for the game service provider. Here we examine a cheat where players lie about their network latency (and therefore the amount of time they have to react to their opponents) to see into the future and stay
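Dead reckoning itself is simple extrapolation from the last broadcast state; the latency cheat exploits the timestamps that extrapolation relies on. A minimal sketch (field and function names illustrative):

```python
from dataclasses import dataclass

@dataclass
class Update:
    x: float; y: float; vx: float; vy: float; t: float  # last broadcast state

def dead_reckon(u: Update, now: float) -> tuple[float, float]:
    """Extrapolate a remote player's position between network updates."""
    dt = now - u.t
    return u.x + u.vx * dt, u.y + u.vy * dt

# the lookahead cheat in one line: by overstating its latency, a cheater can
# observe opponents' true positions at time `now` before committing its own
# move stamped with an earlier, supposedly in-flight, timestamp
print(dead_reckon(Update(0.0, 0.0, 2.0, 0.0, t=1.0), now=1.5))  # (1.0, 0.0)
```

Since peers cannot directly verify when a remote move was actually decided, honest timestamping is exactly the property a cheat-proofing protocol has to enforce.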
New reverse-conducting IGBT (1200V) with revolutionary compact package | Fuji Electric developed a 1200V class RC-IGBT based on our latest thin-wafer process. The performance of this RC-IGBT shows the same relationship between conduction loss and switching loss as our 6th generation conventional IGBT and FWD. In addition, its trade-off can be optimized for hard switching by means of lifetime killers. Calculations of the hard-switching inverter loss and chip junction temperature (Tj) show that the optimized RC-IGBT can handle 35% larger current density per chip area. In order to utilize the high-performance characteristics of the RC-IGBT, we assembled the devices in our newly developed compact package. This module can handle 58% higher current than conventional 100A modules with a 51% smaller footprint. |
Design and measurement of array antennas for 77GHz automotive radar application | Array antennas for 77 GHz automotive radar application are designed and measured. A linear series-fed patch array (SFPA) antenna is designed for the transmitters of the middle-range radar (MRR) and for all the receivers. A planar SFPA, based on the linear one and a substrate integrated waveguide (SIW) feeding network, is proposed for the transmitter of the long-range radar (LRR); it can reduce the radiation from the feeding network itself. The array antennas were fabricated, and their performance was measured both with and without a radome. Good agreement between simulation and measurement has been achieved. They can be good candidates for 77 GHz automotive applications. |
Modeling of an electric vehicle charging station for fast DC charging | The proposed model of an electric vehicle charging station is suitable for the fast DC charging of multiple electric vehicles. The station consists of a single grid-connected inverter with a DC bus where the electric vehicles are connected. The control of the individual electric vehicle charging processes is decentralized, while a separate central control deals with the power transfer from the AC grid to the DC bus. The electric power exchange does not rely on communication links between the station and vehicles, and a smooth transition to vehicle-to-grid mode is also possible. Design guidelines and modeling are explained in an educational way to support implementation in Matlab/Simulink. Simulations are performed in Matlab/Simulink to illustrate the behavior of the station. The results show the feasibility of the model proposed and the capability of the control system for fast DC charging and also vehicle-to-grid. |
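A back-of-envelope version of the central power balance: the DC-bus capacitor integrates the mismatch between the power the grid-connected inverter injects and the total power the vehicles draw, via C·V·dV/dt = P_net. The toy numbers and Euler step below are illustrative, not the paper's Simulink parameters:

```python
# minimal DC-bus energy balance, Euler-integrated (illustrative toy model)
C = 0.1                 # bus capacitance [F], assumed value
V = 750.0               # initial bus voltage [V]
dt = 1e-3               # time step [s]
P_grid = 50e3           # inverter power into the bus [W]
P_evs = [20e3, 25e3]    # per-vehicle charging power [W]

for _ in range(1000):   # simulate 1 s
    # from d/dt(0.5*C*V^2) = P_net  =>  dV/dt = P_net / (C*V)
    V += (P_grid - sum(P_evs)) / (C * V) * dt
print(round(V, 1))      # bus voltage drifts up: injection exceeds demand
```

The decentralized control described above amounts to each charger reacting to this shared bus voltage (and the central control regulating it), which is why no communication link between station and vehicles is needed; reversing the sign of a vehicle's power is the vehicle-to-grid case.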
IAP Guidelines on Rickettsial Diseases in Children. | OBJECTIVE
To formulate practice guidelines on rickettsial diseases in children for pediatricians across India.
JUSTIFICATION
Rickettsial diseases are increasingly being reported from various parts of India. Due to a low index of suspicion, nonspecific clinical features in the early course of disease, and the absence of easily available, sensitive and specific diagnostic tests, these infections are difficult to diagnose. With timely diagnosis, therapy is easy, affordable and often successful. On the other hand, in endemic areas where healthcare workers have a high index of suspicion for these infections, there is rampant and irrational use of doxycycline as a therapeutic trial in patients with undifferentiated fever. Thus, there is a need to formulate practice guidelines regarding rickettsial diseases in children in the Indian context.
PROCESS
A committee was formed for preparing guidelines on rickettsial diseases in children in June 2016. A meeting of the consultative committee was held in the IAP office, Mumbai, and the scientific content was discussed. The methodology and results were scrutinized by all members and consensus was reached. Textbook references and published guidelines were also used in a few instances to make recommendations. Various Indian and international publications pertinent to the present study were collated, and the guidelines were approved by all committee members. Future updates to these guidelines will be dictated by new scientific data in the field of rickettsial diseases in children.
RECOMMENDATIONS
Indian tick typhus and scrub typhus are commonly seen rickettsial diseases in India. It is recommended that practicing pediatricians be well conversant with the compatible clinical scenario, suggestive epidemiological features, differential diagnoses and suggestive laboratory features, in order to make the diagnosis and avoid overdiagnosis of these infections, as suggested in these guidelines. Doxycycline is the drug of choice, and treatment should begin promptly without waiting for confirmatory laboratory results. |
Five-Point Fundamental Matrix Estimation for Uncalibrated Cameras | We aim at estimating the fundamental matrix in two views from five correspondences of rotation-invariant features obtained by, e.g., the SIFT detector. The proposed minimal solver first estimates a homography from three correspondences, assuming that they are co-planar and exploiting their rotational components. The fundamental matrix is then obtained from the homography and two additional point pairs in general position. The proposed approach, combined with robust estimators like Graph-Cut RANSAC, is superior to other state-of-the-art algorithms both in terms of accuracy and the number of iterations required. This is validated on synthesized data and 561 real image pairs. Moreover, the tests show that requiring three points on a plane is not too restrictive in urban environments, and locally optimized robust estimators lead to accurate estimates even if the points are not entirely co-planar. As a potential application, we show that using the proposed method makes two-view multi-motion estimation more accurate. |
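The second step rests on a standard two-view relation: a homography H induced by a scene plane, together with the epipole e' in the second image, determines the fundamental matrix (see Hartley and Zisserman); the two off-plane correspondences serve to locate e'. In our notation:

```latex
\[
  \mathbf{x}'^{\top} F \, \mathbf{x} = 0 \quad \text{(epipolar constraint)},
  \qquad
  F = [\mathbf{e}']_{\times} H,
\]
\[
  [\mathbf{e}']_{\times} =
  \begin{pmatrix}
    0 & -e'_3 & e'_2 \\
    e'_3 & 0 & -e'_1 \\
    -e'_2 & e'_1 & 0
  \end{pmatrix}.
\]
```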
Discrimination Power of Short-Term Heart Rate Variability Measures for CHF Assessment | In this study, we investigated the discrimination power of short-term heart rate variability (HRV) for discriminating normal subjects versus chronic heart failure (CHF) patients. We analyzed 1914.40 h of ECG from 83 subjects, of which 54 were normal and 29 were suffering from CHF with New York Heart Association (NYHA) classification I, II, and III, extracted from public databases. Following guidelines, we performed time- and frequency-domain analyses in order to measure HRV features. To assess the discrimination power of HRV features, we designed a classifier based on the classification and regression tree (CART) method, a nonparametric statistical technique that is highly effective for mining nonnormal medical data. The best subset of features for subject classification includes the square root of the mean of the sum of the squares of differences between adjacent NN intervals (RMSSD), total power, high-frequency power, and the ratio between low- and high-frequency power (LF/HF). The classifier we developed achieved sensitivity and specificity values of 79.3% and 100%, respectively. Moreover, we demonstrated that it is possible to achieve sensitivity and specificity of 89.7% and 100%, respectively, by introducing two nonstandard features ΔAVNN and ΔLF/HF, which account, respectively, for the variation over the 24 h of the average of consecutive normal intervals (AVNN) and of LF/HF. Our results are comparable with other similar studies, but the method we used is particularly valuable because it allows a fully human-understandable description of the classification procedures, in terms of intelligible "if ... then ..." rules. |
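Of the selected features, RMSSD is the easiest to state exactly: it is the root mean square of successive differences between adjacent NN intervals. A minimal computation (toy interval values):

```python
import numpy as np

def rmssd(nn_ms):
    """RMSSD: root mean square of successive NN-interval differences (ms)."""
    d = np.diff(np.asarray(nn_ms, dtype=float))
    return np.sqrt(np.mean(d ** 2))

print(rmssd([812, 790, 805, 821, 798]))  # NN intervals in milliseconds
```

The spectral features (total, low- and high-frequency power, and LF/HF) come from a power spectral estimate of the same NN series rather than from its successive differences.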
3D Deep Learning for Efficient and Robust Landmark Detection in Volumetric Data | Recently, deep learning has demonstrated great success in computer vision with the capability to learn powerful image features from a large training set. However, most of the published work has been confined to solving 2D problems, with a few limited exceptions that treated the 3D space as a composition of 2D orthogonal planes. The challenge of 3D deep learning is due to a much larger input vector, compared to 2D, which dramatically increases the computation time and the chance of over-fitting, especially when combined with limited training samples (hundreds to thousands), typical for medical imaging applications. To address this challenge, we propose an efficient and robust deep learning algorithm capable of full 3D detection in volumetric data. A two-step approach is exploited for efficient detection. A shallow network (with one hidden layer) is used for the initial testing of all voxels to obtain a small number of promising candidates, followed by more accurate classification with a deep network. In addition, we propose two approaches, i.e., separable filter decomposition and network sparsification, to speed up the evaluation of a network. To mitigate the over-fitting issue, thereby increasing detection robustness, we extract small 3D patches from a multi-resolution image pyramid. The deeply learned image features are further combined with Haar wavelet features to increase the detection accuracy. The proposed method has been quantitatively evaluated for carotid artery bifurcation detection on a head-neck CT dataset from 455 patients. Compared to the state-of-the-art, the mean error is reduced by more than half, from 5.97 mm to 2.64 mm, with a detection speed of less than 1 s/volume. |
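One of the two speed-ups, separable filter decomposition, approximates a 3-D kernel by (sums of) outer products of three 1-D filters, replacing one k×k×k convolution with three length-k passes. A sketch of fitting a single rank-1 component by alternating least squares; this illustrates the principle, not the paper's exact decomposition procedure:

```python
import numpy as np

def rank1_3d(kernel, iters=50):
    """Fit one separable (rank-1) component a ⊗ b ⊗ c to a 3-D kernel
    by alternating least squares."""
    a = np.random.rand(kernel.shape[0])
    b = np.random.rand(kernel.shape[1])
    c = np.random.rand(kernel.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', kernel, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', kernel, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', kernel, a, b)  # carries the magnitude
    return a, b, c

k = np.random.rand(5, 5, 5)
a, b, c = rank1_3d(k)
approx = np.einsum('i,j,k->ijk', a, b, c)
print(np.linalg.norm(k - approx) / np.linalg.norm(k))  # relative error
```

A random kernel is nearly full-rank, so the residual here is large; the premise of the technique is that learned convolution kernels have much stronger low-rank structure, so few components suffice.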
Patterns of psychosocial adjustment following cardiac surgery. | PURPOSE
The purpose of this research was to document the postoperative experiences of a group of cardiac surgery patients with a view to identifying factors relevant to postsurgical mood and adjustment.
METHODS
Forty-six cardiac surgery patients (mean age = 63.6 years, SD = 11.0) were recruited through a cardiac rehabilitation program at a large teaching hospital in Melbourne, Victoria, Australia. A semistructured interview (Austin CEP Interview) was used to canvass a broad range of postsurgical issues, and 3 mood questionnaires were administered to provide a quantitative assessment of mood at the time of interview.
RESULTS
Three distinct patterns of adjustment and outcome following cardiac surgery were identified and described: "new well me," "new sick me," and "me as always." Undergoing major cardiac surgery per se did not predict mood and adjustment difficulties, whereas the presence of chronic and disabling cardiac symptoms prior to surgery did. Adjustment issues primarily manifested as the extent of change in a patient's identity relating to health and illness perceptions, with age acting as a predictor of the type of adjustment difficulties experienced.
CONCLUSION
This study highlights the significance of psychosocial factors for assessing surgical outcomes and the importance of tailoring rehabilitation programs to the specific needs of individual patients. |
A New Anaesthetic Protocol for Adult Zebrafish (Danio rerio): Propofol Combined with Lidocaine | BACKGROUND
The increasing use of the zebrafish model has not been accompanied by the evolution of proper anaesthesia for this species in research. The most commonly used anaesthetic in fish, MS222, may induce aversion, reduction of heart rate, and consequently high mortality, especially during long exposures. Therefore, we aimed to explore new anaesthetic protocols for zebrafish by studying the quality of anaesthesia and recovery induced by different concentrations of propofol alone and in combination with different concentrations of lidocaine.
MATERIAL AND METHODS
In experiment A, eighty-three AB zebrafish were randomly assigned to 7 different groups: control; 2.5 (2.5P), 5 (5P) or 7.5 μg/ml (7.5P) of propofol; and 2.5 μg/ml of propofol combined with 50 (P/50L), 100 (P/100L) or 150 μg/ml (P/150L) of lidocaine. Zebrafish were placed in an anaesthetic water bath, and the time to loss of equilibrium, reflex response to touch, reflex response to a tail pinch, and respiratory rate were measured. Time to regain equilibrium was also assessed in a clean tank. Five and 24 hours after anaesthesia recovery, zebrafish were evaluated for activity and reactivity. Afterwards, in a second phase of experiments (experiment B), the best protocol from experiment A was compared with a new group of 8 fish treated with 100 mg/L of MS222 (100M).
RESULTS
In experiment A, only the different concentrations of the propofol/lidocaine combination induced full anaesthesia in all animals; thus, only these groups were compared with a standard dose of MS222 in experiment B. Propofol/lidocaine induced a quicker loss of equilibrium and loss of response to light and painful stimuli compared with MS222. However, zebrafish treated with MS222 recovered more quickly than those treated with propofol/lidocaine.
CONCLUSION
In conclusion, the propofol/lidocaine combination and MS222 each have advantages in different situations. MS222 is ideal for minor procedures when a quick recovery is important, while propofol/lidocaine is best for inducing quick and complete anaesthesia. |
Assessing the value of risk predictions by using risk stratification tables. | Key Summary Points Risk prediction models are statistical models used to predict the probability of an outcome on the basis of the values of 1 or more risk factors (markers). The accuracy of the model's predictions is typically summarized with statistics that describe the model's discrimination and calibration. Risk stratification tables are a more informative way to assess and compare the models. The tables illustrate the distribution of predictions across risk categories. That illustration allows users to assess 3 key measures of the models' value for guiding medical decisions: the models' calibration, ability to stratify people into clinically relevant risk categories, and accuracy at classifying patients into higher- and lower-risk categories. This information is contained in the margins of the risk stratification table rather than in its cells. The tables should only be used to compare risk prediction models when one of the models contains all of the markers that are contained in the other (nested models); they should not be used to compare models with different sets of markers (nonnested models). The table predictions require corrections when case-control data are used. The recent epidemiologic and clinical literature is filled with studies evaluating statistical models that predict risk for disease or some other adverse event (1–5). Because risk prediction models are intended to help patients and clinicians make decisions, evaluation of these models requires methods that differ from those used to assess models describing disease etiology. This is because the characteristics of the models are less important than their value for guiding decisions. Cook and colleagues (1, 6) recently proposed a new approach to evaluate risk prediction models: a risk stratification table. This methodology appropriately focuses on the key purpose of a risk prediction model, which is to classify individuals into clinically relevant risk categories, and it has therefore been widely adopted in the literature (2–4). We examine the risk stratification approach in detail in this article, identifying the relevant information that can be abstracted from a risk stratification table and cautioning against misuses of the method that frequently occur in practice. We use a recently published study of a breast cancer risk prediction model by Tice and colleagues (2) to illustrate the concepts. Background A risk prediction marker is any measure that is used to predict a person's risk for an event. It may be a quantitative measure, such as high-density lipoprotein cholesterol level, or a qualitative measure, such as family history of disease. Risk predictors are also risk factors, in the sense that they will necessarily be strongly associated with the risk for disease. But a large, significant association does not assure that the marker has value in predicting risk for many people. A risk prediction model is a statistical model that combines information from several markers. Common types include logistic regression models, Cox proportional hazard models, and classification trees. Each type of model produces a predicted risk for each person by using information in the model. Consider, for example, a model predicting breast cancer risk that includes age as the only predictor. The resulting risk prediction for a woman of a given age is simply the proportion of women her age who develop breast cancer.
The woman's predicted risk will change if more information is included in the model. For instance, if family history information is added, her predicted risk will be the proportion of women her age and with her family history who develop breast cancer. The purpose of a risk prediction model is to accurately stratify individuals into clinically relevant risk categories. This risk information can be used to guide clinical or policy decisions, for example, about preventive interventions for persons or disease screening for subpopulations identified as high risk, or to select persons for inclusion in clinical trials. The value of a risk prediction model for guiding these kinds of decisions can be judged by the extent to which the risk calculated from the model reflects the fraction of persons in the population with actual events (its calibration); the proportions in which the population is stratified into clinically relevant risk categories (its stratification capacity); and the extent to which participants with events are assigned to high-risk categories and those without events are assigned to low-risk categories (its classification accuracy). Risk prediction models are commonly evaluated by using the receiver-operating characteristic (ROC) curve (4, 7), which is a standard tool for evaluating the discriminatory accuracy of diagnostic or screening markers. This curve shows the true-positive rate plotted against the false-positive rate for rules that classify persons by using risk thresholds that vary over all possible values. Receiver-operating characteristic curves are generally not helpful for evaluating risk prediction models because they do not provide information about the actual risks that the models predict or about the proportion of participants who have high or low risk values. Moreover, when comparing ROC curves for 2 risk prediction models, the models are aligned according to their false-positive rates (that is, different risk thresholds are applied to the 2 models to achieve the same false-positive rate). This is clearly inappropriate. In addition, the area under the ROC curve or c-statistic, a commonly reported summary measure that can be interpreted as the probability that the predicted risk for a participant with an event is higher than that for a participant without an event, has little direct clinical relevance. Clinicians are never asked to compare risks for a pair of patients: one who will eventually have the event and one who will not. Neither the ROC curve nor the c-statistic relates to the practical task of predicting risks for clinical decision making. Cook and colleagues (1, 6) propose using risk stratification tables to evaluate the incremental value of a new marker, or the benefit of adding a new marker (for example, C-reactive protein), to an established set of risk predictors (for example, Framingham risk predictors, such as age, diabetes, cholesterol level, smoking, and low-density lipoprotein cholesterol levels). In these stratification tables, risks calculated from models with and without the new marker are cross-tabulated. This approach represents a substantial improvement over the use of ROC methodology because it displays the risks calculated by use of the model and the proportions of individuals in the population who are stratified into the risk groups.
We will provide an example of this approach and show how information about model calibration, stratification capacity, and classification accuracy can be derived from a risk stratification table and used to assess the added value of a marker for clinical and health care policy decisions. Example Tice and colleagues (2) published a study that builds and evaluates a model for predicting breast cancer risk by using data from 1,095,484 women in a prospective cohort and incidence data from the Surveillance, Epidemiology, and End Results database. Age, race or ethnicity, family history, and history of breast biopsy were used to model risk with a Cox proportional hazard model. The study focused on the benefit of adding breast density information to the model. The hazard ratio for breast density in the multivariate model (extremely dense vs. almost entirely fat) was estimated as 4.2 for women younger than age 65 years and 2.2 for women age 65 years or older. This suggests that breast density is strongly associated with disease risk; that is, that breast cancer rates are higher among women with higher breast density. However, it does not describe the value of breast density for helping women make informed clinical decisions, which requires knowledge of the frequency distribution of breast density in the population. To evaluate the added value of breast density, Tice and colleagues defined 5-year breast cancer risk categories as low (<1%), low to intermediate (1% to 1.66%), intermediate to high (1.67% to 2.5%), and high (>2.5%). The 1.67% cutoff for intermediate risk was presumably chosen on the basis of recommendations by the American Society of Clinical Oncology (8) and the Canadian Task Force on Preventive Health Care (9) to counsel women with 5-year risks greater than this threshold about considering tamoxifen for breast cancer prevention. Tice and colleagues used a risk stratification table (Table 1) to compare risk prediction models with and without breast density. [Table 1. Five-Year Risks for Breast Cancer as Predicted by Models That Do and Do Not Include Breast Density] Calibration: Assessing model calibration is an important first step in evaluating any risk prediction model. Good calibration is essential; it means that the model-predicted probability of an event for a person with specified predictor values is the same as or very close to the proportion of all persons in the population with those same predictor values who experience the event (10). With many predictors, and especially with continuous predictors, we cannot evaluate calibration at each possible predictor value because there are too few participants with exactly those values. Instead, the standard approach is to place persons within categories of predicted risk and to compare the category values with the observed event rates for participants in each category. The calibration of the risk prediction models for breast cancer can be assessed by comparing the proportions of events in the margins of Table 1 with the corresponding row and column labels. For the model without breast density, the proportions of observed events within each risk category are in the far-right Total column and they generally agree wit |
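Mechanically, a risk stratification table is just a cross-tabulation of the two models' predicted-risk categories, with the marginals carrying the calibration and stratification information. A sketch with synthetic risks (the cutoffs are the clinical ones quoted above; the distributions are invented for illustration):

```python
import numpy as np
import pandas as pd

bins = [0, 0.01, 0.0166, 0.025, 1.0]            # 5-year risk cutoffs
labels = ["<1%", "1-1.66%", "1.67-2.5%", ">2.5%"]

rng = np.random.default_rng(0)
risk_old = rng.beta(2, 120, 10_000)              # model without the new marker
risk_new = np.clip(risk_old * rng.lognormal(0, 0.3, 10_000), 0, 1)  # with marker

table = pd.crosstab(pd.cut(risk_old, bins, labels=labels),
                    pd.cut(risk_new, bins, labels=labels),
                    rownames=["without marker"], colnames=["with marker"],
                    margins=True)
print(table)
```

Off-diagonal cells count people whom the new marker reclassifies across clinically relevant categories; comparing each margin's observed event rate with its category label is the calibration check described above.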
Dynamic User Task Scheduling for Mobile Robots | We present our efforts to deploy mobile robots in office environments, focusing in particular on the challenge of planning a schedule for a robot to accomplish user-requested actions. We concretely aim to make our CoBot mobile robots available to execute navigational tasks requested by users, such as telepresence, and picking up and delivering messages or objects at different locations. We contribute an efficient web-based approach in which users can request and schedule the execution of specific tasks. The scheduling problem is converted to a mixed integer programming problem. The robot executes the scheduled tasks using a synthetic speech and touch-screen interface to interact with users, while allowing users to follow the task execution online. Our robot uses a robust Kinect-based safe navigation algorithm, moves fully autonomously without the need to be chaperoned by anyone, and is robust to the presence of moving humans, as well as non-trivial obstacles, such as legged chairs and tables. Our robots have already performed 15 km of autonomous service tasks. Introduction and Related Work. We envision a system in which autonomous mobile robots robustly perform service tasks in indoor environments. The robots perform tasks which are requested by building residents over the web, such as delivering mail, fetching coffee, or guiding visitors. To fulfill the users' requests, we must plan a schedule of when the robot will execute each task in accordance with the constraints specified by the users. Many efforts have used the web to access robots, including the early examples of the teleoperation of a robotic arm (Goldberg et al. 1995; Taylor and Trevelyan 1995) and interfacing with a mobile robot (e.g., Simmons et al. 1997; Siegwart and Saucy 1999; Saucy and Mondada 2000; Schulz et al. 2000), among others. The robot Xavier (Simmons et al. 1997; 2000) allowed users to make requests over the web for the robot to go to specific places, and other mobile robots soon followed (Siegwart and Saucy 1999; Grange, Fong, and Baur 2000; Saucy and Mondada 2000; Schulz et al. 2000). The RoboCup@Home initiative (Visser and Burkhard 2007) provides competition setups for indoor service autonomous robots, with an increasingly wide scope of challenges focusing on robot autonomy and verbal interaction with users. In this work, we present our architecture to effectively make a fully autonomous indoor service robot available to general users. We focus on the problem of planning a schedule for the robot, and present a mixed integer linear programming solution for planning a schedule. We ground our work on the CoBot-2 platform, shown in Figure 1. (Figure 1: CoBot-2, an omnidirectional mobile robot for indoor users.) CoBot-2 autonomously localizes and navigates in a multi-floor office environment while effectively avoiding obstacles (Biswas and Veloso 2010). The robot carries a variety of sensing and computing devices, including a camera, a Kinect depth camera, a Hokuyo LIDAR, a touch-screen tablet, microphones, speakers, and wireless communication. CoBot-2 executes tasks sent by users over the web, and we have devised a user-friendly web interface that allows users to specify tasks.
Currently, the robot executes three types of tasks: a GoToRoom task where the robot visits a location, and a Telepresence task where the robot goes to a location… (Footnote: CoBot-2 was designed and built by Michael Licitra, [email protected], as a scaled-up version of the CMDragons small-size soccer robots, also designed and built by him.)
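To make the "scheduling as a mixed integer program" step concrete, here is a minimal sketch under our own assumptions: the task names, durations, and deviation-minimizing objective are invented, and the paper's formulation, which also handles travel times and user-specified constraints, is richer. It uses the PuLP modeling library with big-M disjunctive constraints so that tasks on the single robot do not overlap.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

requests = {                      # task: (requested start [min], duration [min])
    "mail_to_7401": (10, 5),
    "telepresence_8002": (12, 15),
    "guide_visitor": (30, 10),
}
names = list(requests)
M = 10_000                        # big-M for the disjunctive no-overlap constraints

prob = LpProblem("cobot_schedule", LpMinimize)
start = {n: LpVariable(f"start_{n}", lowBound=0) for n in names}
dev = {n: LpVariable(f"dev_{n}", lowBound=0) for n in names}     # |start - requested|
before = {(a, b): LpVariable(f"{a}_before_{b}", cat=LpBinary)
          for i, a in enumerate(names) for b in names[i + 1:]}

prob += lpSum(dev.values())       # objective: total deviation from requested starts
for n, (req, _dur) in requests.items():
    prob += start[n] - req <= dev[n]
    prob += req - start[n] <= dev[n]
for (a, b), y in before.items():  # one robot: either a finishes before b, or vice versa
    prob += start[a] + requests[a][1] <= start[b] + M * (1 - y)
    prob += start[b] + requests[b][1] <= start[a] + M * y

prob.solve()
for n in names:
    print(n, "starts at", start[n].value(), "min")
```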
Long-term efficacy of transcatheter closure of ventricular septal defect in combination with percutaneous coronary intervention in patients with ventricular septal defect complicating acute myocardial infarction: a multicentre study. | AIMS
To assess the immediate and long-term outcomes of transcatheter closure of ventricular septal defect (VSD) in combination with percutaneous coronary intervention (PCI) in patients with VSD complicating acute myocardial infarction (AMI).
METHODS AND RESULTS
Data were prospectively collected from 35 AMI patients who underwent attempted transcatheter VSD closure and PCI therapy in five high-volume heart centres. All the patients who survived the procedures were followed up by chest x-ray, electrocardiogram and echocardiography. Thirteen patients underwent urgent VSD closure in the acute phase (within two weeks of VSD occurrence), while the others underwent elective closure at a median of 23 days after VSD occurrence. The device success rate for VSD closure was 92.3% (36/39) and the procedure success rate was 91.4% (32/35). The incidence of in-hospital mortality was 14.3% (5/35). At a median of 53 months of follow-up, only two patients had died, at 38 and 41 months respectively, and the other patients' cardiac function, assessed by echocardiography, had improved significantly compared with the evaluation before discharge.
CONCLUSION
The combination of transcatheter VSD closure and PCI for treating VSD complicating AMI is safe and feasible and is a promising alternative to surgery in patients with anatomically suitable VSD and coronary lesion. |
Dexterous anthropomorphic robot hand with distributed tactile sensor: Gifu hand II | This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all of whose joints are driven by servomotors built into the fingers and the palm. The thumb has four joints with four degrees of freedom (DOF); each of the other fingers has four joints with three DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with a six-axis force sensor at each fingertip and a newly developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.
Analyzing noise in autoencoders and deep networks | Autoencoders have emerged as a useful framework for unsupervised learning of internal representations, and a wide variety of apparently conceptually disparate regularization techniques have been proposed to generate useful features. Here we extend existing denoising autoencoders to additionally inject noise before the nonlinearity and at the hidden unit activations. We show that a wide variety of previous methods, including denoising, contractive, and sparse autoencoders, as well as dropout, can be interpreted using this framework. This noise injection framework reaps practical benefits by providing a unified strategy to develop new internal representations by designing the nature of the injected noise. We show that noisy autoencoders outperform denoising autoencoders at the very task of denoising, and are competitive with other single-layer techniques on MNIST and CIFAR-10. We also show that types of noise other than dropout improve performance in a deep network through sparsifying, decorrelating, and spreading information across representations.
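The three injection sites the abstract names, at the input, before the nonlinearity, and at the hidden activations, can be sketched in a few lines. The module below is our own illustrative PyTorch rendering (layer sizes and noise scales are arbitrary assumptions), not the authors' code.

```python
import torch
import torch.nn as nn

class NoisyAutoencoder(nn.Module):
    def __init__(self, d_in=784, d_hid=256,
                 sigma_in=0.3, sigma_pre=0.1, p_hid=0.2):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hid)
        self.dec = nn.Linear(d_hid, d_in)
        self.sigma_in, self.sigma_pre = sigma_in, sigma_pre
        self.drop = nn.Dropout(p_hid)          # noise at the hidden activations

    def forward(self, x):
        if self.training:
            x = x + self.sigma_in * torch.randn_like(x)          # input noise
        pre = self.enc(x)
        if self.training:
            pre = pre + self.sigma_pre * torch.randn_like(pre)   # pre-nonlinearity
        h = self.drop(torch.sigmoid(pre))
        return self.dec(h)

model = NoisyAutoencoder()
x = torch.rand(32, 784)                        # e.g., a batch of MNIST-sized inputs
loss = nn.functional.mse_loss(model(x), x)     # reconstruct the *clean* input
loss.backward()
```

Choosing the distribution and the site of the injected noise then becomes a single design knob that recovers denoising-, contractive-, sparse-, or dropout-like behaviour.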
Extent, patterns, and burden of uncontrolled disease in severe or difficult-to-treat asthma. | BACKGROUND
Characterization of uncontrolled asthma burden in a natural treatment setting can influence treatment recommendations and clinical practice. The objective was to characterize and compare the economic burden of severe or difficult-to-treat asthma in uncontrolled and controlled patients.
METHODS
Baseline patient data (age ≥13 years; n = 3916) were obtained from The Epidemiology and Natural History of Asthma: Outcomes and Treatment Regimens study. Disease control was assessed using two approaches: (i) applying criteria for control based on the Gaining Optimal Asthma Control study, and (ii) using the Asthma Therapy Assessment Questionnaire (ATAQ) to identify the number of asthma control problems. Assessments were performed at baseline, and at months 12 and 24. Monetary values were assigned to productivity loss and medical resource use. Direct and indirect costs were aggregated over 24 months and compared using Student's t-test for continuous measures and chi-squared tests for categorical variables (a toy sketch of these comparisons follows this abstract).
RESULTS
Throughout the study, most patients had uncontrolled asthma (83% uncontrolled; 16% inconsistent control; 1.3% controlled). Controlled patients experienced fewer work or school absences and less healthcare resource use than uncontrolled patients at all study time points. Using the multilevel ATAQ control score, asthma costs increased directly with the number of asthma control problems. Costs for uncontrolled patients were more than double those of controlled patients throughout the study (14,212 vs 6,452 US Dollars; adjusted to 2002 Dollars; P < 0.0001).
CONCLUSIONS
This study demonstrated that few severe or difficult-to-treat asthma patients achieved control over a 2-year period and the economic consequence of uncontrolled disease is substantial. |
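As promised in the methods above, here is a toy sketch of the two statistical comparisons. All numbers are fabricated for illustration; these are not TENOR data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
costs_uncontrolled = rng.lognormal(mean=9.4, sigma=0.6, size=3000)  # ~ $14k mean
costs_controlled = rng.lognormal(mean=8.6, sigma=0.6, size=60)      # ~ $6k mean

t, p = stats.ttest_ind(costs_uncontrolled, costs_controlled)  # Student's t-test
print(f"t = {t:.2f}, P = {p:.2g}")

# Chi-squared on a 2x2 table, e.g. control status vs any work/school absence.
table = np.array([[1800, 1200],    # uncontrolled: absence yes / no
                  [15, 45]])       # controlled:   absence yes / no
chi2, p, dof, _expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2g}")
```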
Auxiliary Guided Autoregressive Variational Autoencoders | Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results. |
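Sketching the training objective the abstract describes helps fix the idea: the auxiliary loss makes the latent code useful on its own, so the autoregressive decoder cannot simply ignore it. The toy model below is our own much-simplified reading; the linear layers, MSE losses, and the crude shifted-input stand-in for autoregressive conditioning are all assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ToyHybrid(nn.Module):
    def __init__(self, d=64, dz=8):
        super().__init__()
        self.enc_mu = nn.Linear(d, dz)
        self.enc_logvar = nn.Linear(d, dz)
        self.aux_dec = nn.Linear(dz, d)        # must reconstruct from z alone
        self.ar_dec = nn.Linear(d + dz, d)     # sees "past pixels" and z

    def loss(self, x):
        mu, logvar = self.enc_mu(x), self.enc_logvar(x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # Auxiliary loss: forces global structure into z, so a powerful
        # autoregressive decoder cannot make the latent code degenerate.
        aux = nn.functional.mse_loss(self.aux_dec(z), x)
        shifted = torch.roll(x, 1, dims=-1)    # crude "previous pixels" stand-in
        ar = nn.functional.mse_loss(self.ar_dec(torch.cat([shifted, z], -1)), x)
        return ar + kl + aux

print(ToyHybrid().loss(torch.rand(16, 64)))
```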
Adaptive Haptic Feedback Steering Wheel for Driving Simulators | Controlling a virtual vehicle is a sensory-motor activity with a specific rendering methodology that depends on the hardware technology and the software in use. We propose a method that computes haptic feedback for the steering wheel. It is best suited for low-cost, fixed-base driving simulators but can be ported to any driving simulator platform. The goal of our method is twofold. 1) It provides an efficient yet simple algorithm to model the steering mechanism using a quadri-polar representation. 2) This model is used to compute the haptic feedback on top of which a tunable haptic augmentation is adjusted to overcome the lack of presence and the unavoidable simulation loop latencies. This algorithm helps the driver to laterally control the virtual vehicle. We also discuss the experimental results that demonstrate the usefulness of our haptic feedback method. |
Implementing New Practices: An Empirical Study of Organizational Learning in Hospital Intensive Care Units | This paper contributes to research on organizational learning by investigating specific learning activities undertaken by improvement project teams in hospital intensive care units and proposing an integrative model to explain implementation success. Organizational learning is important in this context because medical knowledge changes constantly, and hospital care units must learn if they are to provide high quality care. To develop a model of how improvement project teams promote essential organizational learning in health care, we draw from three streams of related research – best practice transfer (BPT), team learning (TL), and process change (PC). To test the model’s hypotheses, we collected data from 23 neonatal intensive care units seeking to implement new or improved practices. We first analyzed the frequency of specific learning activities reported by improvement project participants and discovered two distinct factors: learn-what (activities that identify current best practices) and learn-how (activities that operationalize practices in a given setting). We then conducted general linear model analyses and found support for three of our four hypotheses. Specifically, a high level of supporting evidence for a unit’s portfolio of improvement projects was associated with implementation success. Learn-how was positively associated with implementation success, but learn-what was not. Psychological safety was associated with learn-how, which was found to mediate between psychological safety and implementation success.
From content delivery today to information centric networking | Today, content delivery is a heterogeneous ecosystem composed of various independent infrastructures. The ever-increasing growth of Internet traffic has encouraged the proliferation of different architectures to serve content provider needs and user demand. Despite the differences among the technologies, their low-level implementations can be characterized by a few fundamental building blocks: network storage, request routing, and data transfer. Existing solutions are inefficient because they try to build an information centric service model over a network infrastructure which was designed to support host-to-host communications. The Information-Centric Networking (ICN) paradigm has been proposed as a possible solution to this mismatch. ICN integrates content delivery as a native network feature. The rationale is to architect a network that automatically interprets, processes, and delivers content (information) independently of its location. This paper makes the following contributions: 1) it identifies a set of building blocks for content delivery, 2) it surveys the most popular approaches to realize the above building blocks, and 3) it compares content delivery solutions relying on the current Internet infrastructure with novel ICN approaches.
Tocilizumab in early progressive rheumatoid arthritis: FUNCTION, a randomised controlled trial | OBJECTIVES
The efficacy of tocilizumab (TCZ), an anti-interleukin-6 receptor antibody, has not previously been evaluated in a population consisting exclusively of patients with early rheumatoid arthritis (RA).
METHODS
In a double-blind randomised controlled trial (FUNCTION), 1162 methotrexate (MTX)-naive patients with early progressive RA were randomly assigned (1:1:1:1) to one of four treatment groups: 4 mg/kg TCZ+MTX, 8 mg/kg TCZ+MTX, 8 mg/kg TCZ+placebo and placebo+MTX (comparator group). The primary outcome was remission according to Disease Activity Score using 28 joints (DAS28-erythrocyte sedimentation rate (ESR) <2.6) at week 24. Radiographic and physical function outcomes were also evaluated. We report results through week 52.
RESULTS
The intent-to-treat population included 1157 patients. Significantly more patients receiving 8 mg/kg TCZ+MTX and 8 mg/kg TCZ+placebo than receiving placebo+MTX achieved DAS28-ESR remission at week 24 (45% and 39% vs 15%; p<0.0001). The 8 mg/kg TCZ+MTX group also achieved significantly greater improvement in radiographic disease progression and physical function at week 52 than did patients treated with placebo+MTX (mean change from baseline in van der Heijde-modified total Sharp score, 0.08 vs 1.14 (p=0.0001); mean reduction in Health Assessment Disability Index, -0.81 vs -0.64 (p=0.0024)). In addition, the 8 mg/kg TCZ+placebo and 4 mg/kg TCZ+MTX groups demonstrated efficacy at least as great as that of MTX for these key secondary endpoints. Serious adverse events were similar among treatment groups. Adverse events resulting in premature withdrawal occurred in 20% of patients in the 8 mg/kg TCZ+MTX group.
CONCLUSIONS
TCZ is effective in combination with MTX and as monotherapy for the treatment of patients with early RA.
TRIAL REGISTRATION NUMBER
ClinicalTrials.gov, number NCT01007435. |
Effects of Ammonium Perchlorate on Thyroid Function in Developing Fathead Minnows, Pimephales promelas | Perchlorate is a known environmental contaminant, largely owing to its widespread military use as a propellant. Perchlorate acts pharmacologically as a competitive inhibitor of thyroidal iodide uptake in mammals, but the impacts of perchlorate contamination in aquatic ecosystems and, in particular, the effects on fish are unclear. Our studies aimed to investigate the effects of concentrations of ammonium perchlorate that can occur in the environment (1, 10, and 100 mg/L) on the development of fathead minnows, Pimephales promelas. For these studies, exposures started with embryos of < 24-hr postfertilization and were terminated after 28 days. Serial sectioning of thyroid follicles showed thyroid hyperplasia with increased follicular epithelial cell height and reduced colloid in all groups of fish that had been exposed to perchlorate for 28 days, compared with control fish. Whole-body thyroxine (T4) content (a measure of total circulating T4) in fish exposed to 100 mg/L perchlorate was elevated compared with the T4 content of control fish, but 3,5,3′-triiodothyronine (T3) content was not significantly affected in any exposure group. Despite the apparent regulation of T3, after 28 days of exposure to ammonium perchlorate, fish exposed to the two higher levels (10 and 100 mg/L) were developmentally retarded, with a lack of scales and poor pigmentation, and significantly lower wet weight and standard length than control fish. Our study indicates that environmental levels of ammonium perchlorate affect thyroid function in fish and that in the early life stages these effects may be associated with developmental retardation.
Large Scaled Relation Extraction With Reinforcement Learning | Sentence relation extraction aims to extract relational facts from sentences, an important task in the field of natural language processing. Previous models rely on manually labeled supervised datasets. However, human annotation is costly and limits the number of relations and the data size, which makes it difficult to scale to large domains. To conduct large-scale relation extraction, we heuristically align an existing knowledge base with texts, which does not rely on human annotation and is easy to scale. However, using distantly supervised data for relation extraction faces a new challenge: sentences in the distantly supervised dataset are not directly labeled, and not all sentences that mention an entity pair express the relation between them. To solve this problem, we propose a novel model with reinforcement learning. The relation of the entity pair is used as distant supervision and guides the training of the relation extractor with the help of a reinforcement learning method. We conduct two types of experiments on a publicly released dataset. Experimental results demonstrate the effectiveness of the proposed method compared with baseline models, achieving a 13.36% improvement.
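The pipeline the abstract describes, knowledge-base alignment as distant supervision plus an RL component that learns which sentences in a bag actually express the relation, can be sketched schematically. The code below is our own simplified rendering; the encoders, dimensions, and bag-level REINFORCE objective are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

d, n_rel = 32, 5
policy = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())   # P(keep each sentence)
classifier = nn.Linear(d, n_rel)                        # bag-level relation clf
opt = torch.optim.Adam([*policy.parameters(), *classifier.parameters()], lr=1e-3)

bag = torch.randn(7, d)       # encoded sentences mentioning one entity pair
relation = torch.tensor([2])  # distant label for the pair, from the KB

keep_prob = policy(bag).squeeze(-1)
keep = torch.bernoulli(keep_prob.detach())              # sample keep/drop actions
selected = bag[keep.bool()] if keep.sum() > 0 else bag  # fall back to full bag
logits = classifier(selected.mean(0, keepdim=True))
reward = -nn.functional.cross_entropy(logits, relation) # clf log-likelihood

log_pi = (keep * keep_prob.clamp_min(1e-8).log()
          + (1 - keep) * (1 - keep_prob).clamp_min(1e-8).log()).sum()
loss = -(reward.detach() * log_pi) - reward             # REINFORCE + clf loss
opt.zero_grad(); loss.backward(); opt.step()
```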
Intuitive Theories as Grammars for Causal Inference | This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing our most basic knowledge of the world, particularly for causal structures in physical, biological, psychological or social domains (Atran, 1995; Carey, 1985a; Kelley, 1973; McCloskey, 1983; Murphy & Medin, 1985; Nichols & Stich, 2003). A principal function of intuitive theories in these domains is to support the learning of new causal knowledge: generating and constraining people’s hypotheses about possible causal relations, highlighting variables, actions and observations likely to be informative about those hypotheses, and guiding people’s interpretation of the data they observe (Ahn & Kalish, 2000; Pazzani, 1987; Pazzani, Dyer & Flowers, 1986; Waldmann, 1996). Leading accounts of cognitive development argue for the importance of intuitive theories in children’s mental lives and frame the major transitions of cognitive development as instances of theory change (Carey, 1985a; Gopnik & Meltzoff, 1997; Inagaki & Hatano 2002; Wellman & Gelman, 1992). Here we attempt to lay out some prospects for understanding the structure, function, and acquisition of intuitive theories from a rational computational perspective. From this viewpoint, theory-like representations are not just a convenient way of summarizing certain aspects of human knowledge. They provide crucial foundations for successful learning and reasoning, and we want to understand how they do so. With this goal in mind, we focus on |
Programmable Networks—From Software-Defined Radio to Software-Defined Networking | Current implementations of Internet systems are very hard to upgrade. The ossification of existing standards restricts the development of more advanced communication systems. New research initiatives, such as virtualization, software-defined radios, and software-defined networks, allow more flexibility for networks. However, until now, those initiatives have been developed individually. We advocate that the convergence of these overlying and complementary technologies can expand the amount of programmability in the network and support different innovative applications. Hence, this paper surveys the most recent research initiatives on programmable networks. We characterize programmable networks, where programmable devices execute specific code, and the network is separated into three planes: data, control, and management planes. We discuss modern programmable network architectures, emphasizing their research issues, and, when possible, highlight their practical implementations. We survey the wireless and wired elements of the programmable data plane. Next, on the programmable control plane, we survey the divisor and controller elements. We conclude with final considerations, open issues, and future challenges.
Optimization of the Connection Topology of an Offshore Wind Farm Network | In this paper, an approach based on a genetic algorithm is presented in order to optimize the connection topology of an offshore wind farm network. The main objective is to introduce a technique for coding a network topology as a binary string. The first advantage is that the optimal connections of both middle- and high-voltage alternating-current grids are considered, i.e., the radial clustering of wind turbines, the number and locations of the offshore electrical substations, and the number of high-voltage cables. The second improvement consists of removing infeasible network configurations, such as designs with crossing cables, and thereby reduces the search space of solutions.
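The encode-filter-evolve loop the abstract outlines can be sketched generically. The skeleton below is our own illustration: the bit-string decoding, feasibility test, and cost function are placeholders, while the paper's encoding of turbine clustering, substations, and cables is domain-specific.

```python
import random

N_BITS, POP, GENS = 40, 30, 50

def feasible(bits):
    return True                  # placeholder: reject e.g. crossing-cable layouts

def cost(bits):
    return sum(bits)             # placeholder: cable length, substation cost, ...

def crossover(a, b):
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(bits, p=0.02):
    return [b ^ (random.random() < p) for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)
    parents = pop[:POP // 2]     # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    # Filtering infeasible children shrinks the search space, as the abstract notes.
    pop = parents + [c for c in children if feasible(c)]
print("best cost:", cost(min(pop, key=cost)))
```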
Password-free authentication for social networks | The last decade has witnessed a rapid increase in the adoption and popularity of social networks. As the number of social networks increases, simpler and easier sign-up and login processes become a key element in promoting new social networks. In many cases, the simple requirement of creating a new user account hinders users from creating new accounts and prompts them to decline using a new network or service. A number of applications allow users to use their existing social network credentials to sign up; however, some users are reluctant to take advantage of such approaches because of privacy concerns or negative perceptions, including the worry of spamming. As a result of those concerns and limitations, alternative user authentication approaches that do not require traditional user registration or log-in steps are being explored by the industry. This paper discusses various approaches that can be applied to create password-free user authentication systems that enable social networks to authenticate individual users and provide them access to user-specific data. The paper discusses the prominent techniques that enable web sites to identify individual users based on the characteristics of their device and browser. The practice of sensing and identifying such attributes is referred to as web-based device fingerprinting. The paper discusses how a number of browser detection methods can be combined to create a system that facilitates password-free user authentication.
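As a toy illustration of combining fingerprinting signals into a single identifier, one might canonicalize the collected attributes and hash them. The attribute names below are invented examples; production systems collect far more signals, such as canvas and font probes, and weigh their stability and entropy.

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    canonical = json.dumps(attrs, sort_keys=True)   # order-independent encoding
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

client = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
    "language": "en-US",
    "plugins": ["pdf", "webgl"],
}
print(fingerprint(client))      # same attributes -> same identifier across visits
```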
Topology comparison for 6.6kW On board charger: Performance, efficiency, and selection guideline | This paper compares various topologies for a 6.6kW on-board charger (OBC) to find the most suitable topology. In general, an OBC consists of two stages: a power factor correction (PFC) stage and a DC-DC converter stage. Conventional boost PFC, interleaved boost PFC, and semi-bridgeless PFC are considered as PFC circuits, and the full-bridge converter, phase-shift full-bridge converter, and series resonant converter are taken into account for the DC-DC converter circuit. The design process of each topology is presented. Then, loss analysis is performed in order to calculate the efficiency of each topology for the PFC circuit and the DC-DC converter circuit. In addition, the volume of magnetic components and the number of semiconductor devices are considered. Based on these results, a topology selection guideline according to the system specification of the 6.6kW OBC is proposed.
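Because the two stages are in series, their efficiencies multiply, which is what makes the per-stage loss analysis decisive. A back-of-the-envelope sketch follows; the efficiency figures are our illustrative assumptions, not the paper's results.

```python
P_OUT = 6_600                        # W, charger rating
eff_pfc, eff_dcdc = 0.975, 0.965     # assumed PFC and DC-DC stage efficiencies
eff_total = eff_pfc * eff_dcdc       # stages in series: efficiencies multiply
p_loss = P_OUT / eff_total - P_OUT   # heat to remove at full load
print(f"overall efficiency {eff_total:.1%}, losses {p_loss:.0f} W")
```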
Anti-MOG antibodies are present in a subgroup of patients with a neuromyelitis optica phenotype | Antibodies against myelin oligodendrocyte glycoprotein (MOG) have been identified in a subgroup of pediatric patients with inflammatory demyelinating disease of the central nervous system (CNS) and in some patients with neuromyelitis optica spectrum disorder (NMOSD). The aim of this study was to examine the frequency, clinical features, and long-term disease course of patients with anti-MOG antibodies in a European cohort of NMO/NMOSD. Sera from 48 patients with NMO/NMOSD and 48 patients with relapsing-remitting multiple sclerosis (RR-MS) were tested for anti-aquaporin-4 (AQP4) and anti-MOG antibodies with a cell-based assay. Anti-MOG antibodies were found in 4/17 patients with AQP4-seronegative NMO/NMOSD, but in none of the AQP4-seropositive NMO/NMOSD (n = 31) or RR-MS patients (n = 48). MOG-seropositive patients tended towards younger disease onset with a higher percentage of patients with pediatric (<18 years) disease onset (MOG+, AQP4+, MOG−/AQP4−: 2/4, 3/31, 0/13). MOG-seropositive patients presented more often with positive oligoclonal bands (OCBs) (3/3, 5/29, 1/13) and brain magnetic resonance imaging (MRI) lesions during disease course (2/4, 5/31, 1/13). Notably, the mean time to the second attack affecting a different CNS region was longer in the anti-MOG antibody-positive group (11.3, 3.2, 3.4 years). MOG-seropositive patients show a diverse clinical phenotype with clinical features resembling both NMO (attacks mainly confined to the spinal cord and optic nerves) and MS with an opticospinal presentation (positive OCBs, brain lesions). Anti-MOG antibodies can serve as a diagnostic and maybe prognostic tool in patients with an AQP4-seronegative NMO phenotype and should be tested in those patients. |
Semantics-aware Graph-based Recommender Systems Exploiting Linked Open Data | The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state-of-the-art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to overcome several state-of-the-art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.
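One way to picture "LOD features in a graph-based recommender" is our own minimal construction below (not the paper's algorithm or datasets): put users, items, and RDF-derived feature nodes in one graph and rank unseen items with personalized PageRank restarted at the target user.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("u1", "TheMatrix"), ("u1", "Inception"), ("u2", "TheMatrix")])
# Edges from LOD triples, e.g. <Inception> dbo:director <Nolan>.
G.add_edges_from([("TheMatrix", "SciFi"), ("Inception", "SciFi"),
                  ("Interstellar", "SciFi"), ("Interstellar", "Nolan"),
                  ("Inception", "Nolan")])

# Personalization dict restarts the walk at u1 (unlisted nodes get weight 0).
scores = nx.pagerank(G, alpha=0.85, personalization={"u1": 1.0})
candidates = {"Interstellar"}        # items u1 has not seen yet
recs = sorted(candidates, key=lambda n: scores[n], reverse=True)
print([(n, round(scores[n], 4)) for n in recs])
```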
The Legacies of Yukawa and Tomonaga | For this symposium commemorating the accomplishments of Tomonaga and Yukawa, I will talk about their lives and their place in the history of physics. I will then go on to offer some observations and speculations of my own on what physics is doing at this moment. I discuss my view of physics and the kinds of attitudes I held in doing physics, as well as looking forward to the future of physics with some speculations.
Efficient processing of data warehousing queries in a split execution environment | Hadapt is a start-up company currently commercializing the Yale University research project called HadoopDB. The company focuses on building a platform for Big Data analytics in the cloud by introducing a storage layer optimized for structured data and by providing a framework for executing SQL queries efficiently. This work considers processing data warehousing queries over very large datasets. Our goal is to maximize performance while, at the same time, not giving up fault tolerance and scalability. We analyze the complexity of this problem in the split execution environment of HadoopDB. Here, incoming queries are examined; parts of the query are pushed down and executed inside the higher-performing database layer; and the rest of the query is processed in a more generic MapReduce framework.
In this paper, we discuss in detail performance-oriented query execution strategies for data warehouse queries in split execution environments, with particular focus on join and aggregation operations. The efficiency of our techniques is demonstrated by running experiments using the TPC-H benchmark with 3TB of data. In these experiments we compare our results with a standard commercial parallel database and an open-source MapReduce implementation featuring a SQL interface (Hive). We show that HadoopDB successfully competes with other systems. |
Evaluating semantic metamemory: Retrospective confidence judgements on the information subtest. | The current research explored the potential value of adding a supplementary measure of metamemory to the Information subtest of the Wechsler Adult Intelligence Scale - Third Edition (WAIS-III in Study 1) or Fourth Edition (WAIS-IV in Study 2) in order to assess its relationship to other neuropsychological measures and to brain injury. After completing the Information subtest, neuropsychological examinees were asked to make retrospective confidence judgements (RCJ) by rating their answer certainty in the original order of item administration. In Study 1 (N = 52) and Study 2 (N = 30), correct answers were rated with significantly more certainty than wrong answers (termed a "confidence gap"), and in both studies, higher confidence for wrong answers was significantly correlated with poorer performance on the Wisconsin Card Sorting Test (for categories completed, r = -.58 in Study 1 and r = -.47 in Study 2; for perseverative errors, r = .44 in Study 1 and r = .45 in Study 2). In both studies, a number of examinees with positive CT findings had a very small or reversed confidence gap. These findings suggest that semantic metamemory is sensitive to executive functioning and brain injury and should be assessed in the neuropsychological examination.
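For readers who want the two statistics made concrete, here is a toy computation with fabricated scores (purely illustrative; these are not the study's data): the within-examinee confidence gap, and a between-examinee correlation of wrong-answer confidence with an executive measure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
correct = rng.random(25) < 0.6                 # item-level accuracy, one examinee
rcj = np.where(correct, rng.normal(85, 8, 25), rng.normal(55, 15, 25))
gap = rcj[correct].mean() - rcj[~correct].mean()
print(f"confidence gap = {gap:.1f} rating points")

# Across examinees: wrong-answer confidence vs WCST categories completed.
wrong_conf = rng.normal(55, 12, 52)
wcst_categories = 6 - 0.05 * wrong_conf + rng.normal(0, 1, 52)
r, p = stats.pearsonr(wrong_conf, wcst_categories)
print(f"r = {r:.2f}, P = {p:.3f}")
```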
Water-only fasting and an exclusively plant foods diet in the management of stage IIIa, low-grade follicular lymphoma. | Follicular lymphoma (FL), the second most common non-Hodgkin's lymphoma (NHL), is well characterised by a classic histological appearance and an indolent course. Current treatment protocols for FL range from close observation to immunotherapy, chemotherapy and/or radiotherapies. We report the case of a 42-year-old woman diagnosed by excisional biopsy with stage IIIa, grade 1 FL. In addition to close observation, the patient underwent a medically supervised, 21-day water-only fast after which enlarged lymph nodes were substantially reduced in size. The patient then consumed a diet of minimally processed plant foods free of added sugar, oil and salt (SOS), and has remained on the diet since leaving the residential facility. At 6 and 9-month follow-up visits, the patient's lymph nodes were non-palpable and she remained asymptomatic. This case establishes a basis for further studies evaluating water-only fasting and a plant foods, SOS-free diet as a treatment protocol for FL. |
On Physical-Layer Security in Multiuser Visible Light Communication Systems With Non-Orthogonal Multiple Access | In order to improve the security performance of multiuser visible light communication (VLC) and facilitate the secure application of optical wireless communication technology in the Internet-of-Things, we investigate the physical-layer security in a multiuser VLC system with non-orthogonal multiple access (NOMA). When the light-emitting diode (LED) transmitter communicates with multiple legitimate users by downlink NOMA, both single-eavesdropper and multi-eavesdropper scenarios are considered. In the presence of a single eavesdropper, based on the transmission characteristics of the optical wireless channel, with known instantaneous channel state information (CSI) of the NOMA legitimate channels and statistical CSI of the eavesdropper channel, an exact expression for the secrecy outage probability (SOP) is derived, which acts as a benchmark of the security performance to guide the selection or optimization of parameters of the LED transmitter and the photodiode (PD) receivers of NOMA legitimate users. In the multi-eavesdropper case, based on the spatial distribution of legitimate users and eavesdroppers, the SOP is obtained via stochastic geometry, so as to guide the NOMA legitimate users to keep away from areas with high eavesdropper density. For typical parameters of the indoor LED transmitter and the PD receiver, simulation results show that the SOP performance improves with increasing LED transmission power or transmission signal-to-noise ratio (SNR) in both scenarios. Specifically, in the single-eavesdropper case, enlarging the channel condition difference between user groups or deviating the eavesdropper from the given user group can improve the SOP performance, and for a given NOMA legitimate user, the SOP eventually settles around 0.2 while the semi-angle at half illuminance of the LED varies between 15° and 60°. In the multi-eavesdropper case, we can get a better SOP performance when reducing the eavesdropper density or the semi-angle at half illuminance of the LED for a given eavesdropper density.
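The SOP metric itself is easy to illustrate by Monte Carlo. The generic stand-in below uses exponential fading and arbitrary SNRs purely to show the definition; the paper's VLC channels are governed by LED/PD geometry, not this fading model.

```python
import numpy as np

rng = np.random.default_rng(7)
n, Rs = 200_000, 0.5                 # trials; target secrecy rate (bit/s/Hz)
snr_user = 10 ** (15 / 10) * rng.exponential(size=n)   # 15 dB mean SNR
snr_eve = 10 ** (5 / 10) * rng.exponential(size=n)     # 5 dB mean SNR

C_user = np.log2(1 + snr_user)
C_eve = np.log2(1 + snr_eve)
secrecy_capacity = np.maximum(C_user - C_eve, 0)
sop = np.mean(secrecy_capacity < Rs)                   # P(secrecy capacity < Rs)
print(f"estimated SOP ~ {sop:.3f}")
```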
Prolonged red cell storage before transfusion increases extravascular hemolysis. | BACKGROUND
Some countries have limited the maximum allowable storage duration for red cells to 5 weeks before transfusion. In the US, red blood cells can be stored for up to 6 weeks, but randomized trials have not assessed the effects of this final week of storage on clinical outcomes.
METHODS
Sixty healthy adult volunteers were randomized to a single standard, autologous, leukoreduced, packed red cell transfusion after 1, 2, 3, 4, 5, or 6 weeks of storage (n = 10 per group). Chromium-51 posttransfusion red cell recovery studies were performed, and laboratory parameters were measured before and at defined times after transfusion.
RESULTS
Extravascular hemolysis after transfusion progressively increased with increasing storage time (P < 0.001 for linear trend in the AUC of serum indirect bilirubin and iron levels). Longer storage duration was associated with decreasing posttransfusion red cell recovery (P = 0.002), decreasing elevations in hematocrit (P = 0.02), and increasing serum ferritin (P < 0.0001). After 6 weeks of refrigerated storage, transfusion was followed by increases in AUC for serum iron (P < 0.01), transferrin saturation (P < 0.001), and nontransferrin-bound iron (P < 0.001) as compared with transfusion after 1 to 5 weeks of storage.
CONCLUSIONS
After 6 weeks of refrigerated storage, transfusion of autologous red cells to healthy human volunteers increased extravascular hemolysis, saturated serum transferrin, and produced circulating nontransferrin-bound iron. These outcomes, associated with increased risks of harm, provide evidence that the maximal allowable red cell storage duration should be reduced to the minimum sustainable by the blood supply, with 35 days as an attainable goal.
REGISTRATION
ClinicalTrials.gov NCT02087514.
FUNDING
NIH grant HL115557 and UL1 TR000040. |
Localization algorithms for multilateration (MLAT) systems in airport surface surveillance | We present a general scheme for analyzing the performance of a generic localization algorithm for multilateration (MLAT) systems (or for other distributed sensor, passive localization technologies). MLAT systems are used for airport surface surveillance and are based on time difference of arrival measurements of Mode S signals (replies and 1,090 MHz extended squitter, or 1090ES). In the paper, we propose to consider a localization algorithm as composed of two components: a data model and a numerical method, both being properly defined and described. In this way, the performance of the localization algorithm can be related to the proper combination of statistical and numerical performances. We present and review a set of data models and numerical methods that can describe most localization algorithms. We also select a set of existing localization algorithms that can be considered the most relevant, and we describe them under the proposed classification. We show that the performance of any localization algorithm has two components, i.e., a statistical one and a numerical one. The statistical performance is related to providing unbiased and minimum variance solutions, while the numerical one is related to ensuring the convergence of the solution. Furthermore, we show that a robust localization (i.e., statistically and numerically efficient) strategy for airport surface surveillance has to be composed of two specific kinds of algorithms. Finally, an accuracy analysis, using real data, is performed for the analyzed algorithms; some general guidelines are drawn and conclusions are provided.
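The paper's data-model/numerical-method split can be made concrete with a bare-bones example. The sensor layout, initial guess, and plain Gauss-Newton solver below are our illustrative choices; operational MLAT adds measurement weighting, robust initialization, and convergence safeguards.

```python
import numpy as np

C = 299_792_458.0                                # m/s
sensors = np.array([[0, 0], [1200, 0], [0, 900], [1500, 1100]], float)
target = np.array([400.0, 300.0])                # unknown, used to simulate data

d = np.linalg.norm(sensors - target, axis=1)
tdoa = (d[1:] - d[0]) / C                        # measured differences vs sensor 0

x = np.array([600.0, 500.0])                     # initial guess
for _ in range(20):                              # Gauss-Newton iterations
    r = np.linalg.norm(sensors - x, axis=1)
    residual = (r[1:] - r[0]) - tdoa * C         # model minus measurement [m]
    u = (x - sensors) / r[:, None]               # gradient of each range
    J = u[1:] - u[0]                             # Jacobian of range differences
    dx, *_ = np.linalg.lstsq(J, -residual, rcond=None)
    x = x + dx
print("estimate:", x)                            # converges to ~ [400, 300]
```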
Clinical practice guidelines for the surgical management of rotator cuff tears in adults. | CONTEXT
Rotator cuff tears are very common. In 2005, about 45 000 patients in France underwent surgery. Surgical techniques and indications have evolved over recent years with the development of arthroscopic procedures. The lack of visibility on current practice and a request by the French Ministry of Health to assess the fixation devices used in arthroscopic surgery prompted the drafting of these guidelines.
OBJECTIVES
To produce guidelines on the indications and limitations of open surgery and arthroscopic surgery.
METHODS
A systematic review of the literature (2000-2007) was performed. It was submitted to a multidisciplinary working group of experts in the field (n = 12) who drafted an evidence report and clinical practice guidelines, which were amended in the light of comments from 36 peer reviewers.
MAIN RECOMMENDATIONS
(i) Medical treatment (oral medication, injections, physiotherapy) is always the first option in the management of degenerative tears of rotator cuff tendons. Surgery is a later option that depends on clinical and morphological factors, and patient characteristics. (ii) Surgery can be considered for the purpose of functional recovery in cases of a painful, weak or disabling shoulder refractory to medical treatment. (iii) Arthroscopy is indicated for nonreconstructive surgery or debridement, and for partial tear debridement or repair. (iv) Open surgery, mini-open surgery or arthroscopy can be used for a full-thickness tear accessible to direct repair by suture. (v) A humeral prosthesis or total reversed prosthesis is indicated for cuff tear arthropathy. (vi) The fixation devices used for bone reinsertion (anchors, screws, staples, and buttons) are indispensable for fully arthroscopic repair. No studies have determined the number of fixation devices to be used according to tear size.
Renal Phosphate Reabsorption is Correlated with the Increase in Lumbar Bone Mineral Density in Patients Receiving Once-Weekly Teriparatide | In order to assess the changes in serum calcium and phosphate and the changes in renal tubular phosphate reabsorption (TmP/GFR) and to evaluate the association between these indices and the increase in bone mineral density (BMD) with once-weekly intermittent administration of teriparatide (TPTD), the results from the teriparatide once-weekly efficacy research (TOWER) trial were re-analyzed. The TOWER trial studied postmenopausal women and older men with osteoporosis. Patients were randomly assigned to receive TPTD 56.5 μg or placebo for 72 weeks. Of these patients, the present study investigated those whose calcium and phosphate levels and lumbar BMD (L-BMD) were measured (TPTD group, n = 153 and Placebo group, n = 137). The TPTD group had significantly lower serum phosphate, calcium-phosphate product, and TmP/GFR at weeks 4, 24, 48, and 72 and urinary fractional calcium excretion (FECa) at weeks 12, 48, and 72 (p < 0.05). In the TPTD group, the serum phosphate and TmP/GFR during early treatment (4, and 12 weeks) showed a significant positive correlation with the percent change in L-BMD at weeks 48 and 72. Based on multivariate analysis corrected for age, BMI, and L-BMD at the start of treatment, serum phosphate and TmP/GFR at week 4 showed a significant correlation with the percent change in L-BMD. This study suggests that the L-BMD response to once-weekly long-term TPTD treatment is associated with circulating phosphate or with the status of its renal reabsorption. Preventing decrease in serum phosphate levels may be important in acquiring greater L-BMD with once-weekly TPTD. |