Growth factor-induced phosphoinositide 3-OH kinase/Akt phosphorylation in smooth muscle cells: induction of cell proliferation and inhibition of cell death.
OBJECTIVE The signaling pathways mediating proliferation and apoptosis in vascular smooth muscle cells (VSMC) are not well established. It has previously been shown that activation of the phosphoinositide 3-OH kinase (PI3K)/Akt pathway or the ERK 1/2 pathway can mediate anti-apoptotic function in different cell types. This study determined the specific contribution of the PI3K/Akt and ERK pathways to the regulation of apoptosis and proliferation of VSMC. METHODS AND RESULTS Incubation of rat VSMC with FCS, insulin or IGF-1 time-dependently stimulated the phosphorylation of Akt; however, FCS, but not insulin or IGF-1, activated the MAP-kinase ERK 1/2. Moreover, insulin inhibited H(2)O(2)-induced apoptosis via the Akt pathway, as demonstrated by pharmacological inhibition of PI3K or overexpression of a dominant negative Akt mutant. In contrast, FCS inhibited H(2)O(2)-induced apoptosis via the Akt and also the ERK pathway. FCS, but not insulin or IGF-1, induced VSMC proliferation, suggesting that Akt activation is necessary but not sufficient for VSMC proliferation. FCS-induced proliferation of VSMC was mediated only via the Akt pathway and not the ERK pathway. CONCLUSIONS These results define a link between cell proliferation and programmed cell death in VSMC via the same signal transduction pathway, namely activation of the serine/threonine kinase Akt, which may have significant implications for the development of vascular diseases or remodeling.
Hybrid contract checking via symbolic simplification
Program errors are hard to detect or prove absent. Allowing programmers to write formal and precise specifications, especially in the form of contracts, is a popular approach to program verification and error discovery. We formalize and implement a hybrid (static and dynamic) contract checker for a subset of OCaml. The key technique is symbolic simplification, which makes integrating static and dynamic contract checking easy and effective. Our technique statically checks contract satisfaction or blames the function violating the contract. When contract satisfaction is undecidable, it leaves residual code for dynamic contract checking.
Access Control of Door and Home Security by Raspberry Pi Through Internet
In the present age the Internet of Things (IoT) has entered a golden era of rapid growth. The Internet of Things is a concept that aims to extend the benefits of the regular Internet—constant connectivity, remote control ability, data sharing, and so on—to goods in the physical world. Everyday things are getting connected with the internet. This concept can be used to manage security-related issues in a cost-effective way. In this paper a system is developed to connect any door with the internet, so that the access control system can be controlled from anywhere in the world. If the owner is not at home and a visitor is at the doorstep, the authorized person is notified about the visitor via Twitter, can see the visitor through the web camera from anywhere, and the system takes a picture of the visitor and keeps a record by sending it as an attachment through e-mail or as a tweet. If the authorized person wants to send a message to the visitor, it can be sent easily through the internet and will appear on a screen on the front face of the door. The door lock can be controlled through the internet. With the help of this system, evidence about a visitor can be kept on record in case any emergency situation occurs.
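As a rough sketch of how such a notification flow could be wired together on a Raspberry Pi, the snippet below captures a snapshot when a doorbell input fires and e-mails it to the owner. The GPIO pin, SMTP account, and addresses are hypothetical placeholders, and the Twitter notification, door-lock control, and door-screen message parts of the described system are omitted.

```python
# Hypothetical sketch, assuming a Raspberry Pi with the picamera and RPi.GPIO
# libraries, an SMTP account, and a doorbell switch wired to GPIO pin 18.
import smtplib
import time
from email.message import EmailMessage

import RPi.GPIO as GPIO
from picamera import PiCamera

DOORBELL_PIN = 18                                   # assumed wiring
OWNER_EMAIL = "owner@example.com"                   # placeholder addresses
SMTP_HOST, SMTP_USER, SMTP_PASS = "smtp.example.com", "door@example.com", "secret"

def capture_visitor(path="visitor.jpg"):
    """Take a snapshot of the visitor at the door."""
    with PiCamera() as camera:
        camera.capture(path)
    return path

def notify_owner(image_path):
    """E-mail the snapshot to the authorized person as evidence."""
    msg = EmailMessage()
    msg["Subject"] = "Visitor at the front door"
    msg["From"], msg["To"] = SMTP_USER, OWNER_EMAIL
    msg.set_content("A visitor rang the doorbell; picture attached.")
    with open(image_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename="visitor.jpg")
    with smtplib.SMTP_SSL(SMTP_HOST) as server:
        server.login(SMTP_USER, SMTP_PASS)
        server.send_message(msg)

def main():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DOORBELL_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    try:
        while True:
            if GPIO.input(DOORBELL_PIN) == GPIO.LOW:    # doorbell pressed
                notify_owner(capture_visitor())
                time.sleep(10)                          # simple debounce / rate limit
            time.sleep(0.1)
    finally:
        GPIO.cleanup()

if __name__ == "__main__":
    main()
```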
Clinical Trial of Thermal Pulsation (LipiFlow) in Meibomian Gland Dysfunction With Pretreatment Meibography
OBJECTIVES Thermal pulsation (LipiFlow) has been advocated for meibomian gland dysfunction (MGD) treatment and was found useful. We aimed to evaluate the efficacy and safety of thermal pulsation in Asian patients with different grades of meibomian gland loss. METHODS A hospital-based interventional study comparing thermal pulsation to warm compresses for MGD treatment. Fifty patients were recruited from the dry eye clinic of a Singapore tertiary eye hospital. The ocular surface and symptom were evaluated before treatment, and one and three months after treatment. Twenty-five patients underwent thermal pulsation (single session), whereas 25 patients underwent warm compresses (twice daily) for 3 months. Meibomian gland loss was graded using infrared meibography, whereas function was graded using the number of glands with liquid secretion. RESULTS The mean age (SD) of participants was 56.4 (11.4) years in the warm compress group and 55.6 (12.7) years in the thermal pulsation group. Seventy-six percent of the participants were female. Irritation symptom significantly improved over 3 months in both groups (P<0.01), whereas tear breakup time (TBUT) was modestly improved at 1 month in only the thermal pulsation group (P=0.048), without significant difference between both groups over the 3 months (P=0.88). There was also no significant difference in irritation symptom, TBUT, Schirmer test, and gland secretion variables between patients with different grades of gland loss or function at follow-ups. CONCLUSIONS A single session of thermal pulsation was similar in its efficacy and safety profile to 3 months of twice daily warm compresses in Asians. Treatment efficacy was not affected by pretreatment gland loss.
Catfish Binary Particle Swarm Optimization for Feature Selection
The feature selection process constitutes a commonly encountered problem of global combinatorial optimization. This process reduces the number of features by removing irrelevant, noisy, and redundant data, thus resulting in acceptable classification accuracy. Feature selection is a preprocessing technique with great importance in the fields of data analysis, information retrieval processing, pattern classification, and data mining applications. This paper presents a novel optimization algorithm called catfish binary particle swarm optimization (CatfishBPSO), in which the so-called catfish effect is applied to improve the performance of binary particle swarm optimization (BPSO). This effect is the result of the introduction of new particles into the search space (“catfish particles”), which replace the particles with the worst fitness and are initialized at extreme points of the search space when the fitness of the global best particle has not improved for a number of consecutive iterations. In this study, the K-nearest neighbor (K-NN) method with leave-one-out cross-validation (LOOCV) was used to evaluate the quality of the solutions. CatfishBPSO was applied to six classification problems taken from the literature and compared with other methods. Experimental results show that CatfishBPSO simplifies the feature selection process effectively, and either obtains higher classification accuracy or uses fewer features than other feature selection methods.
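A minimal sketch of the catfish mechanism on top of a standard binary PSO is given below. The toy fitness function, swarm size, stall limit, and the 10% replacement fraction are illustrative assumptions, not the paper's settings.

```python
# Binary PSO with a "catfish effect": when the global best stagnates, the
# worst particles are replaced by new ones at the extremes of the search space.
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits):
    """Toy objective: reward selecting the first half of the features."""
    target = np.zeros_like(bits)
    target[: len(bits) // 2] = 1
    return -np.abs(bits - target).sum()

def catfish_bpso(dim=20, n_particles=30, iters=200, stall_limit=10, w=0.8, c1=2.0, c2=2.0):
    x = rng.integers(0, 2, size=(n_particles, dim))           # binary positions
    v = rng.normal(0, 1, size=(n_particles, dim))             # velocities
    pbest, pbest_fit = x.copy(), np.array([fitness(p) for p in x])
    g = pbest_fit.argmax()
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    stall = 0
    for _ in range(iters):
        # Standard BPSO update with a sigmoid transfer function.
        v = w * v + c1 * rng.random((n_particles, dim)) * (pbest - x) \
                  + c2 * rng.random((n_particles, dim)) * (gbest - x)
        x = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-v))).astype(int)
        fit = np.array([fitness(p) for p in x])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        if pbest_fit.max() > gbest_fit:
            g = pbest_fit.argmax()
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
            stall = 0
        else:
            stall += 1
        # Catfish effect: replace the worst ~10% of particles with new ones
        # at the extreme corners of the search space when gbest has stalled.
        if stall >= stall_limit:
            worst = np.argsort(pbest_fit)[: max(1, n_particles // 10)]
            half = len(worst) // 2
            x[worst[:half]] = 1                               # all-ones corner
            x[worst[half:]] = 0                               # all-zeros corner
            v[worst] = 0
            stall = 0
    return gbest, gbest_fit

best, best_fit = catfish_bpso()
print("best subset:", best, "fitness:", best_fit)
```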
Yeast interactions and wine flavour.
Wine is the product of complex interactions between fungi, yeasts and bacteria that commence in the vineyard and continue throughout the fermentation process until packaging. Although grape cultivar and cultivation provide the foundations of wine flavour, microorganisms, especially yeasts, impact on the subtlety and individuality of the flavour response. Consequently, it is important to identify and understand the ecological interactions that occur between the different microbial groups, species and strains. These interactions encompass yeast-yeast, yeast-filamentous fungi and yeast-bacteria responses. The surface of healthy grapes has a predominance of Aureobasidium pullulans, Metschnikowia, Hanseniaspora (Kloeckera), Cryptococcus and Rhodotorula species depending on stage of maturity. This microflora moderates the growth of spoilage and mycotoxigenic fungi on grapes, the species and strains of yeasts that contribute to alcoholic fermentation, and the bacteria that contribute to malolactic fermentation. Damaged grapes have increased populations of lactic and acetic acid bacteria that impact on yeasts during alcoholic fermentation. Alcoholic fermentation is characterised by the successional growth of various yeast species and strains, where yeast-yeast interactions determine the ecology. Through yeast-bacterial interactions, this ecology can determine progression of the malolactic fermentation, and potential growth of spoilage bacteria in the final product. The mechanisms by which one species/strain impacts on another in grape-wine ecosystems include: production of lytic enzymes, ethanol, sulphur dioxide and killer toxin/bacteriocin like peptides; nutrient depletion including removal of oxygen, and production of carbon dioxide; and release of cell autolytic components. Cell-cell communication through quorum sensing molecules needs investigation.
Viscosity Solutions of Fully Nonlinear Parabolic Path Dependent PDEs: Part II
In our previous paper [8], we introduced a notion of viscosity solutions for fully nonlinear path-dependent PDEs, extending the semilinear case of [6], which satisfies a partial comparison result under standard Lipschitz-type assumptions. The main result of this paper provides a full well-posedness result under an additional assumption formulated on some partial differential equation defined locally by freezing the path. Namely, assuming further that such path-frozen standard PDEs satisfy the comparison principle and Perron's approach for existence, we prove that the nonlinear path-dependent PDE has a unique viscosity solution. Uniqueness is implied by a comparison result.
Exploring sentiment analysis on twitter data
The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boast about, it is mostly confined to the exploration of formal text patterns such as online reviews, news articles, etc. Exploration of the challenges posed by informal and crisp microblogging text has taken root, but there is still a long way to go. The proposed work aims at developing a hybrid model for sentiment classification that explores tweet-specific features and uses domain-independent and domain-specific lexicons to offer a domain-oriented approach, and hence analyzes and extracts consumer sentiment towards popular smartphone brands over the past few years. The experiments show that the results improve by around 2 points on average over the unigram baseline.
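The following toy sketch illustrates the general flavour of combining a domain-independent lexicon, a domain-specific (smartphone) lexicon, and tweet-specific cues such as hashtags and emoticons. The lexicons, weights, and negation rule are invented for illustration and are not the resources used in the work.

```python
# Tiny lexicon-plus-tweet-feature sentiment scorer (illustrative only).
import re

GENERIC_LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2, "love": 2, "hate": -2}
PHONE_LEXICON = {"lag": -2, "laggy": -2, "crisp": 1, "bloatware": -2}   # domain-specific terms
EMOTICONS = {":)": 1, ":(": -1, ":D": 2}

def tweet_sentiment(tweet):
    tokens = re.findall(r"[:;][()D]|#?\w+", tweet)
    score = 0
    for i, tok in enumerate(tokens):
        word = tok.lower().lstrip("#")
        w = GENERIC_LEXICON.get(word, 0) + PHONE_LEXICON.get(word, 0) + EMOTICONS.get(tok, 0)
        if tok.startswith("#"):
            w *= 2                                    # hashtags carry extra emphasis
        if i > 0 and tokens[i - 1].lower() in {"not", "no", "never"}:
            w = -w                                    # crude negation handling
        score += w
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tweet_sentiment("Love the crisp screen :)"))    # -> positive
```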
A patient adaptable ECG beat classifier based on neural networks
A novel supervised neural network-based algorithm is designed to reliably distinguish in electrocardiographic (ECG) records between normal and ischemic beats of the same patient. The basic idea behind this paper is to consider an ECG digital recording of two consecutive R-wave segments (RRR interval) as a noisy sample of an underlying function to be approximated by a fixed number of Radial Basis Functions (RBF). The linear expansion coefficients of the RRR interval represent the input signal of a feed-forward neural network which classifies a single beat as normal or ischemic. The system has been evaluated using several patient records taken from the European ST-T database. Experimental results show that the proposed beat classifier is very reliable, and that it may be a useful practical tool for the automatic detection of ischemic episodes. The electrocardiogram (ECG) is a graphic recording of the electrical activities in the human heart and provides diagnostically significant information. Its shape, size and duration reflect the heart rhythm over time. The waves related to the electrical impulses occurring at each beat of the heart are shown in Fig. 1. The P-wave represents the beginning of the cardiac cycle and is followed by the QRS complex, which is generally the most recognizable feature of an ECG waveform. At the end of the cardiac cycle is the T-wave. The varied sources of heart disease produce a wide range of alterations in the shape of the electrocardiogram. For instance, inverted T waves (Fig. 2) are seen during the evolution of myocardial infarction, while ST-segment depression (Fig. 3) can be caused by ischemia. In recent years, much research concerning automated processing of ECG signals has been conducted. A difficult problem in computer-aided ECG analysis is related to the large variation in the morphology of ECG waveforms, not only among different patients, but even within the same patient. This makes the detection of ECG features (ST-segment, T-wave, QRS-area) a tough task. The aim of the present work is to design an ECG beat recognition method to distinguish normal and ischemic patterns of the same patient without requiring the extraction of ECG features. The method is a supervised neural network-based algorithm, and hence its applicability requires the availability of recordings of both normal beats and annotated ischemic episodes. Once trained, the system should be capable of detecting new ischemic beats similar to those previously observed in the patient. This objective is relevant because of …
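A small, self-contained sketch of the two-stage idea (RBF expansion of the RRR interval, then a feed-forward classifier on the expansion coefficients) is shown below, using synthetic beats rather than the European ST-T records. The basis count, network size, and signal model are illustrative assumptions.

```python
# RBF expansion of a noisy "RRR interval" followed by an MLP beat classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
N_RBF, N_SAMPLES = 12, 200                       # basis functions / samples per interval
centers = np.linspace(0, 1, N_RBF)
width = 1.0 / N_RBF

def rbf_coefficients(signal):
    """Least-squares fit of the noisy interval onto Gaussian RBFs."""
    t = np.linspace(0, 1, len(signal))
    design = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    coeffs, *_ = np.linalg.lstsq(design, signal, rcond=None)
    return coeffs

def synthetic_beat(ischemic):
    """Toy beat: a bump whose late segment is depressed for 'ischemic' beats."""
    t = np.linspace(0, 1, N_SAMPLES)
    beat = np.exp(-((t - 0.3) ** 2) / 0.005)              # crude QRS-like bump
    if ischemic:
        beat -= 0.3 * np.exp(-((t - 0.6) ** 2) / 0.02)    # ST-like depression
    return beat + 0.05 * rng.normal(size=N_SAMPLES)

# Build a per-patient training set of normal and "ischemic" beats.
labels = rng.integers(0, 2, size=300)
features = np.array([rbf_coefficients(synthetic_beat(bool(y))) for y in labels])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(features[:200], labels[:200])
print("held-out accuracy:", clf.score(features[200:], labels[200:]))
```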
A comparative study of Artificial Bee Colony algorithm
Artificial Bee Colony (ABC) algorithm is one of the most recently introduced swarm-based algorithms. ABC simulates the intelligent foraging behaviour of a honeybee swarm. In this work, ABC is used for optimizing a large set of numerical test functions and the results produced by ABC algorithm are compared with the results obtained by genetic algorithm, particle swarm optimization algorithm, differential evolution algorithm and evolution strategies. Results show that the performance of the ABC is better than or similar to those of other population-based algorithms with the advantage of employing fewer control parameters.
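For readers new to ABC, the following compact sketch implements the canonical employed-bee / onlooker-bee / scout-bee cycle on the Sphere benchmark. The colony size, trial limit, and cycle count are illustrative defaults, not the settings used in the comparison.

```python
# Canonical Artificial Bee Colony cycle on a toy benchmark (minimization).
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x ** 2)

def abc(dim=10, n_food=20, limit=50, cycles=500, lo=-5.0, hi=5.0):
    foods = rng.uniform(lo, hi, size=(n_food, dim))
    fits = np.array([sphere(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        """Mutate one dimension toward a random partner; keep if it improves."""
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1                     # partner != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand[j] = np.clip(cand[j], lo, hi)
        f = sphere(cand)
        if f < fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                       # employed bee phase
            try_neighbour(i)
        probs = 1.0 / (1.0 + fits)                    # fitness-proportional selection
        probs /= probs.sum()
        for _ in range(n_food):                       # onlooker bee phase
            try_neighbour(rng.choice(n_food, p=probs))
        exhausted = trials.argmax()                   # scout bee phase
        if trials[exhausted] > limit:
            foods[exhausted] = rng.uniform(lo, hi, size=dim)
            fits[exhausted] = sphere(foods[exhausted])
            trials[exhausted] = 0
    return foods[fits.argmin()], fits.min()

best, best_val = abc()
print("best value found:", best_val)
```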
Communication synthesis for distributed embedded systems
Designers of distributed embedded systems face many challenges in determining the tradeoffs when defining a system architecture or retargeting an existing design. Communication synthesis, the automatic generation of the necessary software and hardware for system components to exchange data, is required to explore the design space more effectively and to automate very error-prone tasks. The paper examines the problem of mapping a high-level specification to an arbitrary architecture that uses specific, common bus protocols for interprocessor communication. The communication model presented allows for easy retargeting to different bus topologies and protocols, and illustrates that global considerations are required to achieve a correct implementation. An algorithm is presented that partitions multihop communication timing constraints to effectively utilize the bus bandwidth along a message path. The communication synthesis tool is integrated with a system co-simulator to provide performance data for a given mapping.
Field-Programmable Deep Neural Network (DNN) Learning and Inference accelerator: a concept
An accelerator is a specialized integrated circuit designed to perform specific computations faster than a general purpose processor, CPU or GPU, would. State-of-the-art Deep Neural Networks (DNNs) are becoming progressively larger, and different applications require different numbers of layers, types of layers, numbers of nodes per layer and different interconnects between consecutive layers. A DNN learning and inference accelerator thus needs to be reconfigurable against those many requirements of a DNN. It needs to reconfigure to make maximum use of its on-die resources and, if necessary, to connect with other similar dies or packages for larger and higher-performing DNNs. A Field-Programmable DNN learning & inference accelerator (FProg-DNN), using hybrid systolic/non-systolic techniques, distributed information/control and a deeply pipelined structure, is proposed and its microarchitecture and operation are presented here. 400mm die sizes are planned for 100 thousand workers (FP64) and can extend to multiple-die packages. Reconfigurability allows for different numbers of workers to be assigned to different layers as a function of the relative difference in computational load among layers. The computational delay per layer is thereby made roughly the same along the pipelined accelerator structure. VGG-16 and the recently proposed Inception Modules are used to show the flexibility of the FProg-DNN's reconfigurability. Special structures were also added for a combination of convolution layer, map coincidence and feedback for state-of-the-art learning with a small set of examples, which is the focus of a companion paper by the author (Franca-Neto, 2018). The flexibility of the accelerator described can be appreciated by the fact that it is able to reconfigure from (1) allocating all of a DNN's computations to a single worker, at one extreme of sub-optimal performance, to (2) optimally allocating workers per layer according to the computational load in each DNN layer to be realized. Due to the pipelined nature of the DNN realized in the FProg-DNN, the speed-up provided by FProg-DNN varies from 2x to 50x relative to GPUs or TPUs with an equivalent number of workers. This speed-up is a consequence of hiding the delay in transporting activation outputs from one layer to the next in a DNN behind the computations in the receiving layer. This FProg-DNN concept has been simulated and validated at the behavioral/functional level.
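A toy sketch of the layer-to-worker reconfiguration idea (equalizing per-layer delay by assigning workers in proportion to layer load) is given below. The per-layer FLOP counts and the greedy balancing rule are assumptions made for illustration, not the accelerator's actual allocation logic.

```python
# Allocate a fixed worker budget across layers so per-layer delay is balanced.
def allocate_workers(layer_flops, total_workers):
    assert total_workers >= len(layer_flops), "need at least one worker per layer"
    total = sum(layer_flops)
    # Start with a proportional share, at least one worker per layer.
    alloc = [max(1, round(total_workers * f / total)) for f in layer_flops]
    # Nudge the allocation until it matches the budget exactly.
    while sum(alloc) != total_workers:
        delays = [f / a for f, a in zip(layer_flops, alloc)]
        if sum(alloc) < total_workers:
            alloc[delays.index(max(delays))] += 1            # relieve the bottleneck layer
        else:
            candidates = [j for j, a in enumerate(alloc) if a > 1]
            i = min(candidates, key=lambda j: layer_flops[j] / alloc[j])
            alloc[i] -= 1                                    # reclaim from the lightest layer
    return alloc

# Example: five layers with unequal (made-up) loads and a budget of 100 workers.
flops = [4.0, 8.0, 16.0, 8.0, 2.0]
print(allocate_workers(flops, 100))
```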
MAC: Mining Activity Concepts for Language-Based Temporal Localization
We address the problem of language-based temporal localization in untrimmed videos. Compared to temporal localization with fixed categories, this problem is more challenging as the language-based queries not only have no pre-defined activity list but also may contain complex descriptions. Previous methods address the problem by considering features from video sliding windows and language queries and learning a subspace to encode their correlation, which ignores rich semantic cues about activities in videos and queries. We propose to mine activity concepts from both video and language modalities by applying the actionness score enhanced Activity Concepts based Localizer (ACL). Specifically, the novel ACL encodes the semantic concepts from verb-obj pairs in language queries and leverages activity classifiers' prediction scores to encode visual concepts. Besides, ACL also has the capability to regress sliding windows as localization results. Experiments show that ACL significantly outperforms state-of-the-art methods under the widely used metric, with more than 5% increase on both the Charades-STA and TACoS datasets.
Remission in rheumatoid arthritis: physician and patient perspectives.
OBJECTIVE To examine the prevalence of remission in rheumatoid arthritis (RA) as determined by physicians and patients independently, and to determine the degree of agreement among methods, the strength of predictor variables of remission, and the length of remission. METHODS Eight hundred patients with RA completed a remission questionnaire on the day of their rheumatologist visit and their rheumatologists completed a separate questionnaire the same day. The question(s) were: "Given all your experience with disease activity in RA, are you [is your patient] currently in remission?". Patients also completed 0-10 visual analog scales for RA activity, pain, and functional limitation. RESULTS The percentage of patients in remission by physician and patient assessment was 34.8% [95% confidence interval (CI) 31.4-38.2] and 30.9% (95% CI 27.7-34.20), respectively. The percentage of patients classified concordantly (full agreement) was 78.6%, and the associated kappa statistic was 0.54 (95% CI 0.45-0.58). The median duration of remission was 2.0 years. The median RA activity, pain, and functional scores were 1.0, 1.5, and 1.25 for patient-determined remission and 1.5, 1.5, and 1.5 for physician-determined remission. CONCLUSION Physician and patient estimates of remission in RA are similar (34.8% to 30.9%), and agreement was 78.6% (kappa 0.53). Based on previous data and the observed presence of disease activity, this definition of remission appears to be a measure of minimal disease activity rather than true remission. The problem of remission rates will not be solved until a consensus definition that has relevance in research and the clinic is developed.
ChatCoder : Toward the Tracking and Categorization of Internet Predators
We describe the preliminary results from a new research project which studies the communicative strategies of online sexual predators. These pedophiles approach children via Internet technologies, such as instant messaging or chat rooms. This article presents the software we used to facilitate analysis of chat log transcripts and the development of a communicative theory of online predation. This software is used to label and analyze chat transcripts. Our preliminary experimental results show that we can distinguish between predator and victim communication, but not as reliably as we would like to. We can, however, confidently distinguish between predatory and nonpredatory discussion. In a second set of experiments, we used k-means clustering to discover that there are four types of online predation communication.
Diffusing Policies: Towards Wasserstein Policy Gradient Flows
Policy gradient methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence. We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region). This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation. We show that in the small-steps limit with respect to the Wasserstein distance W2, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result. This means that policies undergo diffusion and advection, concentrating near actions with high reward. This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.
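A brief sketch of the mathematical picture behind this claim may help. The equations below state the standard Jordan-Kinderlehrer-Otto (JKO) correspondence for an entropy-regularised bandit with reward r(a) and policy density π; the exact normalisation and sign conventions used in the paper may differ.

```latex
% Free energy of an entropy-regularised policy (tau = regularisation strength).
F(\pi) = -\int r(a)\,\pi(a)\,\mathrm{d}a + \tau \int \pi(a)\log \pi(a)\,\mathrm{d}a

% One JKO step of size h: a proximal step in the Wasserstein-2 metric.
\pi_{k+1} = \arg\min_{\pi} \Big\{ F(\pi) + \tfrac{1}{2h}\, W_2^2(\pi, \pi_k) \Big\}

% As h -> 0 the iterates follow a Fokker-Planck (heat-type) equation:
% advection toward high-reward actions plus diffusion from the entropy term.
\partial_t \pi = -\nabla \cdot \big(\pi\, \nabla r\big) + \tau\, \Delta \pi
```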
Long-term Results of a Randomized Double-blinded Prospective Trial of a Lightweight (Ultrapro) Versus a Heavyweight Mesh (Prolene) in Laparoscopic Total Extraperitoneal Inguinal Hernia Repair (TULP-trial).
OBJECTIVE The aim of the randomized clinical trial was to compare the 2-year clinical outcomes of a lightweight (Ultrapro) vs a heavyweight (Prolene) mesh for laparoscopic total extraperitoneal (TEP) inguinal hernia repair. BACKGROUND Lightweight meshes reduce postoperative pain and stiffness in open anterior inguinal hernia repair. The discussion about a similar benefit for laparoscopic repair is ongoing, but concerns exist about higher recurrence rates. METHODS Between March 2010 and October 2012, male patients who presented with a primary, reducible unilateral inguinal hernia who underwent day-case TEP repair were eligible. Outcome parameters included chronic pain, recurrence, foreign body feeling, and quality of life scores. RESULTS During the study period, 950 patients were included. One year postoperatively the presence of relevant pain (Numeric Rating Score 4-10) was significantly higher in the lightweight mesh group (2.9%) compared with the heavyweight mesh group (0.7%) (P = 0.01), and after 2 years this difference remained significant (P = 0.03). There were 4 (0.8%) recurrent hernias in the heavyweight mesh group and 13 (2.7%) in the lightweight group (P = 0.03). No differences in foreign body feeling or quality of life scores were detected. CONCLUSIONS In TEP hernia surgery, there was no benefit of lightweight over heavyweight meshes observed 2 years postoperatively.
Economics of Internet of Things (IoT): An Information Market Approach
Internet of things (IoT) has been proposed as a new paradigm of connecting devices and providing services to various applications, e.g., transportation, energy, smart city, and healthcare. In this paper, we focus on an important issue, i.e., the economics of IoT, that can have a great impact on the success of IoT applications. In particular, we adopt and present the information economics approach with its applications in IoT. We first review existing economic models developed for IoT services. Then, we outline two important topics of information economics which are pertinent to IoT, i.e., the value of information and information good pricing. Finally, we propose a game theoretic model to study the price competition of IoT sensing services. Some outlooks on future research directions for applying information economics to IoT are discussed.
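As a toy illustration of the kind of game-theoretic price competition mentioned, the sketch below finds an approximate Nash equilibrium for two IoT sensing-service providers by best-response iteration under a linear-demand assumption. The demand model and every parameter are invented for the sketch and are not the model proposed in the paper.

```python
# Best-response iteration for a two-provider price competition (illustrative).
import numpy as np

a, b, g, cost = 10.0, 2.0, 1.0, 1.0   # demand intercept, own-price sensitivity, cross-price sensitivity, unit cost

def demand(p_own, p_other):
    """Substitutable services: demand falls in own price, rises in the rival's price."""
    return max(0.0, a - b * p_own + g * p_other)

def best_response(p_other):
    """Maximize (p - cost) * demand(p, p_other) over a price grid."""
    grid = np.linspace(cost, 10.0, 1000)
    profit = (grid - cost) * np.array([demand(p, p_other) for p in grid])
    return grid[profit.argmax()]

p1, p2 = 5.0, 5.0
for _ in range(100):                   # iterate best responses until they settle
    p1, p2 = best_response(p2), best_response(p1)
print("approximate Nash equilibrium prices:", round(p1, 3), round(p2, 3))
```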
Self-Control and Grit: Related but Separable Determinants of Success.
Other than talent and opportunity, what makes some people more successful than others? One important determinant of success is self-control - the capacity to regulate attention, emotion, and behavior in the presence of temptation. A second important determinant of success is grit - the tenacious pursuit of a dominant superordinate goal despite setbacks. Self-control and grit are strongly correlated, but not perfectly so. This means that some people with high levels of self-control capably handle temptations but do not consistently pursue a dominant goal. Likewise, some exceptional achievers are prodigiously gritty but succumb to temptations in domains other than their chosen life passion. Understanding how goals are hierarchically organized clarifies how self-control and grit are related but distinct: Self-control entails aligning actions with any valued goal despite momentarily more-alluring alternatives; grit, in contrast, entails having and working assiduously toward a single challenging superordinate goal through thick and thin, on a timescale of years or even decades. Although both self-control and grit entail aligning actions with intentions, they operate in different ways and at different time scales. This hierarchical goal framework suggests novel directions for basic and applied research on success.
Generic decoding of seen and imagined objects using hierarchical visual features
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
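To make the two-stage pipeline concrete, here is a schematic sketch with synthetic stand-ins for fMRI patterns and CNN-derived visual features. The regression model (ridge), dimensions, and identification-by-correlation step are illustrative choices, not the authors' exact procedure.

```python
# Predict feature vectors from brain patterns, then identify the category by
# correlating predicted features with per-category feature templates.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_voxels, n_feat, n_categories = 500, 100, 8

# Synthetic "true" feature vector per category and a random voxel encoding.
cat_features = rng.normal(size=(n_categories, n_feat))
encoding = rng.normal(size=(n_feat, n_voxels)) / np.sqrt(n_feat)

def simulate_trial(cat):
    """fMRI pattern = encoded category features + noise."""
    return cat_features[cat] @ encoding + 0.5 * rng.normal(size=n_voxels)

train_cats = rng.integers(0, n_categories, size=400)
X_train = np.array([simulate_trial(c) for c in train_cats])
Y_train = cat_features[train_cats]

decoder = Ridge(alpha=10.0).fit(X_train, Y_train)        # fMRI -> feature vector

# Identify the category of new trials by highest correlation with predicted features.
test_cats = rng.integers(0, n_categories, size=100)
pred = decoder.predict(np.array([simulate_trial(c) for c in test_cats]))
corr = np.corrcoef(pred, cat_features)[: len(pred), len(pred):]   # trials x categories
print("identification accuracy:", (corr.argmax(axis=1) == test_cats).mean())
```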
Rotation Invariant Vortices for Flow Visualization
We propose a new class of vortex definitions for flows that are induced by rotating mechanical parts, such as stirring devices, helicopters, hydrocyclones, centrifugal pumps, or ventilators. Instead of a Galilean invariance, we enforce a rotation invariance, i.e., the invariance of a vortex under a uniform-speed rotation of the underlying coordinate system around a fixed axis. We provide a general approach to transform a Galilean invariant vortex concept to a rotation invariant one by simply adding a closed form matrix to the Jacobian. In particular, we present rotation invariant versions of the well-known Sujudi-Haimes, Lambda-2, and Q vortex criteria. We apply them to a number of artificial and real rotating flows, showing that for these cases rotation invariant vortices give better results than their Galilean invariant counterparts.
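The sketch below illustrates the mechanics of turning a Galilean-invariant criterion into a rotation-aware one by adding a closed-form matrix to the velocity-gradient (Jacobian) tensor, here taken to be the spin tensor of a frame rotating about the z-axis. The specific correction matrix and sign convention used in the paper may differ; this is only meant to show the shape of the computation for the Q criterion.

```python
# Classical Q criterion vs. a rotation-adjusted variant (illustrative).
import numpy as np

def q_criterion(J):
    """Q = 0.5 * (||Omega||^2 - ||S||^2); Q > 0 flags a vortex."""
    S = 0.5 * (J + J.T)              # strain-rate tensor (symmetric part)
    O = 0.5 * (J - J.T)              # vorticity tensor (antisymmetric part)
    return 0.5 * (np.sum(O * O) - np.sum(S * S))

def rotation_adjusted_q(J, omega):
    """Evaluate Q on the Jacobian shifted by the frame's spin tensor (sign assumed)."""
    W = np.array([[0.0, -omega, 0.0],
                  [omega, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    return q_criterion(J - W)

# Velocity gradient of a flow rotating uniformly with the frame: the classical
# Q calls it a vortex, the rotation-adjusted variant does not.
omega = 2.0
J_solid_body = np.array([[0.0, -omega, 0.0],
                         [omega, 0.0, 0.0],
                         [0.0, 0.0, 0.0]])
print(q_criterion(J_solid_body), rotation_adjusted_q(J_solid_body, omega))
```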
Biometric Recognition: Challenges and Opportunities
Phase II study of elsamitrucin (BMY-28090) for the treatment of patients with refractory/relapsed non-Hodgkin's lymphoma
Purpose: To determine the response rate of patients with refractory/relapsed non-Hodgkin's lymphoma to treatment with elsamitrucin and to further characterize the toxic effects of elsamitrucin in this group of patients. Patients and methods: Eligibility required pathologically verified relapsed or refractory non-Hodgkin's lymphoma with no more than two prior chemotherapy regimens for patients with tumors classified by the International Working Formulation (IWF) as A-C and no more than one prior chemotherapy for those with IWF grades D-G. Patients were entered with either normal or impaired bone marrow function, but normal liver function tests were required unless clearly related to lymphomatous involvement of the liver. Elsamitrucin 25 mg/m2 was administered intravenously over 5–10 minutes weekly. Results: Thirty-one patients entered the study and were treated for a median of six weeks (range 1–42). All patients were evaluable for toxicity and 30 for response. Mild nausea and/or vomiting and asthenia were the most frequently reported adverse events. Four (13%, 95% CI 4.4–31.6%) partial responses were seen along with two (7%) minor responses, while 9 (30%) patients had stable disease. Conclusion: Elsamitrucin showed modest activity in patients with relapsed or refractory non-Hodgkin's lymphoma. Toxicity was relatively mild, consisted mainly of asthenia, nausea and vomiting, and did not include myelosuppression. The activity of elsamitrucin in this group of patients and its lack of myelosuppression suggest utility in this disease, especially when combined with other proven agents.
Online Learning for Adversaries with Memory: Price of Past Mistakes
The framework of online learning with memory naturally captures learning problems with temporal effects, and was previously studied for the experts setting. In this work we extend the notion of learning with memory to the general Online Convex Optimization (OCO) framework, and present two algorithms that attain low regret. The first algorithm applies to Lipschitz continuous loss functions, obtaining optimal regret bounds for both convex and strongly convex losses. The second algorithm attains the optimal regret bounds and applies more broadly to convex losses without requiring Lipschitz continuity, yet is more complicated to implement. We complement the theoretical results with two applications: statistical arbitrage in finance, and multi-step ahead prediction in statistics.
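For readers unfamiliar with the memory setting, the following is a sketch of the policy-regret notion typically used in this line of work (my paraphrase; the paper's exact definition, memory length m, and constants may differ).

```latex
% Losses f_t act on the last m decisions; the comparator plays a fixed point.
R_T \;=\; \sum_{t=m}^{T} f_t\big(x_{t-m+1},\dots,x_t\big)
      \;-\; \min_{x \in \mathcal{K}} \sum_{t=m}^{T} f_t\big(x,\dots,x\big)

% "Optimal" regret here usually means R_T on the order of \sqrt{T} for convex
% losses and \log T for strongly convex losses.
```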
User-Based Active Learning
Active learning has been proven a reliable strategy to reduce manual effort in training data labeling. Such strategies incorporate the user as an oracle: the classifier selects the most appropriate example and the user provides the label. While this approach is tailored towards the classifier, more intelligent input from the user may be beneficial. For instance, given only one example at a time, users are hardly able to determine whether this example is an outlier or not. In this paper we propose user-based, visually-supported active learning strategies that allow the user to do both: select and label examples given a trained classifier. While labeling is straightforward, selection takes place using an interactive visualization of the classifier's a-posteriori output probabilities. By simulating different user selection strategies we show that user-based active learning outperforms uncertainty-based sampling methods and yields a more robust approach on different data sets. The obtained results point towards the potential of combining active learning strategies with results from the field of information visualization.
Thermoelectric microdevice fabrication process and evaluation at the Jet Propulsion Laboratory (JPL)
Advances in the microelectronics industry have made it possible to fabricate a multitude of microdevices, such as microprocessors, microsensors, microcontrollers, and microinstruments. These electronic microdevices have significantly reduced power requirements but at the same time require more attention in terms of integrated thermal management and power management and distribution. Micro thermoelectric converters are considered a promising technology approach for meeting some of these new requirements. Thermoelectric microdevices can convert rejected or waste heat into usable electric power, at moderate (200-500K) temperatures and often with small temperature differentials. They can also be easily integrated and provide effective cooling for devices specific to optoelectronics, such as mid-IR lasers, dense-wavelength-division-multiplexing (DWDM) components and charge-coupled-device (CCD) detectors. In the Materials and Device Technology Group at JPL, we have developed a unique fabrication method for a thermoelectric microdevice that utilizes standard integrated circuit techniques in combination with electrochemical deposition of compound semiconductors (Bi2Te3/Bi2-xSbxTe3). Our fabrication process is innovative in the sense that we are able to electrochemically micro-mold different thermoelectric elements, with the flexibility of adjusting geometry, materials composition or batch scalability. Successive layers of photoresist were patterned and electrochemically filled with compound semiconductor materials or metal interconnects (Au or Ni). A thermoelectric microdevice was built on either glass or an oxidized silicon substrate containing 63 couples (63 n-legs/63 p-legs) at approximately 20 microns in structure height and with a device area close to 1700 μm x 1700 μm. In cooling mode, we evaluated device performance using an IR camera and differential thermal imaging software. We were able to detect a maximum cooling effect of about 2K. In power generation mode, a 75 watt light source was illuminated directly above the device while the current generated was measured. A detailed step-by-step overview of the fabrication process will be given, as well as specifics of testing setups, results and future directions. Introduction As a spacecraft travels further away from the sun, for a defined solar panel surface area, solar flux decreases accordingly (inverse square law) and loses effective power. Spacecraft that travel beyond the orbit of Mars or that require longer lasting power systems require a source of electric power other than solar energy. For missions such as Cassini (launched 1997 to study the Saturn system), radioisotope thermoelectric generators (RTGs) are used for power [1]. Thermoelectric devices take advantage of the Seebeck effect for power generation and can also utilize the Peltier effect for active cooling. Thermoelectric coolers have various applications for microprocessors, medical analyzers, portable picnic coolers and many more. In the optoelectronics industry, thermal management is a significant factor in optimizing device performance. For instance, due to excessive heat generated, there are compromises in laser wavelength stability and increased noise levels in detectors. In the past few years, advancements in the microelectronics industry have made it possible to miniaturize components, devices, instruments and even spacecraft.
With the miniaturization of electronic devices, there has also been a concomitant focus on developing miniaturized power conversion and thermal management systems. Miniaturizing thermoelectric converters will enable milliwatt power at several volts for MEMS devices and other microinstruments [1,3]. Additionally, thermoelectric micro coolers offer effective and practical options for precise thermal management in compact optoelectronic devices. A few applications include spot cooling for mid-IR lasers and CCD detectors [4]. An encouraging approach for meeting various power requirements, while simultaneously being able to offer adequate thermal control, is micro thermoelectric converters. These thermoelectric microdevices can operate at moderate (200-500K) temperatures and with small temperature differentials. For the temperature range of 200-500K, alloys based on n-type Bi2Te3 and p-type Bi2-xSbxTe3 are the best materials suitable for numerous optoelectronic and micro spacecraft applications [1]. According to scaling laws [1,3], the attractive idea behind a thermoelectric microdevice is to increase specific power (W/cm2) by reducing the size of the thermoelectric elements, while maintaining the same aspect ratio of elements as in a larger thermoelectric device. Equally important, miniaturization increases maximum cooling and improves cooling densities [5]. A thermoelectric module generally consists of several n- and p-type leg elements (couples) connected in series electrically and in parallel thermally. A microdevice will enable potentially thousands of these couples to be connected together in a very small area, leading to open circuit voltages of several volts at even modest temperature gradients [1,3]. At the Jet Propulsion Laboratory (JPL), we have fabricated thermoelectric microdevices using a combination of integrated circuit processing techniques and electrochemical deposition of compound semiconductors (Bi2Te3/Bi2-xSbxTe3) [6-8]. It was possible to construct micro power generators/coolers with leg elements approximately 20 microns tall and approximately 60 microns in diameter (varying somewhat due to the conical shape of the legs). A thermoelectric microdevice was built on either glass or an oxidized silicon substrate (Si/SiO2) containing 63 couples (63 n-legs/63 p-legs) and with a device area close to 1700 μm x 1700 μm. Microdevices were tested and evaluated for power generation and effective cooling performance. Electrochemistry and Materials Properties Electrochemical deposition (ECD) offers an inexpensive and scalable process [9]. Materials can be varied in composition with deposition rates up to several tens of microns per hour. N-type Bi2Te3 and p-type Bi2-xSbxTe3 compounds were deposited at room temperature at constant potential (EG&G PAR 273A) in a standard three electrode configuration. The working electrode was either a metallized glass or metallized oxidized silicon substrate. The cell had a Pt counter electrode and a saturated calomel electrode (SCE) reference. Regions for deposition were defined using a patterned photoresist mask. Thermoelectric leg elements were deposited from solutions containing dissolved elemental metals with a concentration on the order of M in aqueous 1 M HNO3 (pH=0). Solutions containing Sb use chelating agents such as citrate, tartrate or ethylene diamine tetraacetate (EDTA) to allow higher concentrations of the less soluble element at pH 0 [1,3].
Leg elements have been electrochemically formed, however with thermoelectric properties different from those of bulk materials. Due to difficulties in obtaining material properties from individual leg elements, we instead measured ECD films (1 cm2, ~10 μm thick). As-deposited Bi2Te3 films exhibited heavily doped n-type behavior with dense growth. EDX analysis confirmed near-Bi2Te3 stoichiometry. Bi2Te3 material properties are as follows: Seebeck = -30 to -60 μV/K, ρ ≈ 1 mΩ·cm (in plane), n ≈ 1 x 10^20 cm^-3 and μ ≈ 15-25 cm2 V^-1 s^-1. ECD p-Bi2-xSbxTe3 properties have not been fully characterized because of inconsistencies in reproducibility. Material compositions were found to be very sensitive to initial electrolyte concentrations and deposition voltages. At Sb-rich or near-Sb2Te3 stoichiometry, desirable dense morphologies were attained but at the sacrifice of Seebeck values. Upon increasing Bi content, both film and leg elements resulted in unfavorable dendritic/columnar growth. Leg morphology is critical to device fabrication and performance. It is extremely difficult to fabricate complete devices if the tops of the electrodeposited legs are too rough (mentioned later). Also, these low density or porous ECD materials are characterized by higher resistivities and reduced mechanical integrity. Leg elements with low mechanical strength are susceptible to stress-induced horizontal cracking, which dramatically increases resistivity or ultimately leads to device failure. Nonetheless, even with incomplete materials characterization, preliminary observations indicate that annealing ECD materials at 250°C has promising effects. Commercially available gold and nickel bath solutions were used for ECD of the bottom base dogbone contacts and top interconnects.
Owing to Psyche
“Owing to Psyche” examines Keats’s well‐known ode to address the long‐standing debate concerning poetry’s justice. Poetry alternately is said to have an important moral function in telling us truths about ourselves, to be harmful because it misrepresents truths otherwise accessible, or to function as mere pleasurable entertainment. In particular, Keats exemplifies a familiar portrayal of Romantic poets and, by extension, Romanticism proper, as maturing from an early, naive faith in the just ends of poetry to a later demystification that culminates in an ostensibly more modern, truer conception of “art for art’s sake,” or mere poetry. Keats’s Psyche, however, undermines the progressive schemes on which this understanding of Romanticism and literary history relies. Rendering the choice between early and late, like that between just and mere poetry, inoperative, my reading of Keats’s ode collapses the delimitation of “Romanticism” as a particular moment within a progressive literary history, and therewith an...
Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion
Present object detection methods working on 3D range data are so far optimized either for unstructured off-road environments or for flat urban environments. We present a fast algorithm able to deal with tremendous amounts of 3D lidar measurements. It uses a graph-based approach to segment ground and objects from 3D lidar scans using a novel, unified, generic criterion based on local convexity measures. Experiments show good results in urban environments including smoothly bent road surfaces.
Generative Adversarial Active Learning for Unsupervised Outlier Detection
Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio. The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection
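As a rough illustration of the single-generator (SO-GAAL-style) setup, the following Keras sketch trains a generator to propose potential outliers and then uses the discriminator's output as an outlier score. The architectures, training schedule, and toy data are assumptions made for this sketch, not the released implementation at the link above.

```python
# GAN-style active generation of potential outliers for outlier scoring.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                            # "normal" data: a 2-D Gaussian cloud
X = np.vstack([X, rng.uniform(-6, 6, size=(15, 2))])     # a few genuine outliers mixed in

latent_dim, dim = 4, X.shape[1]

generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(dim),
])
discriminator = keras.Sequential([
    keras.Input(shape=(dim,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

discriminator.trainable = False                           # freeze D inside the combined model
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

batch = 64
for step in range(500):
    # Discriminator step: real data vs generated "potential outliers".
    real = X[rng.integers(0, len(X), batch)]
    fake = generator.predict(rng.normal(size=(batch, latent_dim)), verbose=0)
    discriminator.train_on_batch(np.vstack([real, fake]),
                                 np.concatenate([np.ones(batch), np.zeros(batch)]))
    # Generator step: push generated points toward the real-data region.
    gan.train_on_batch(rng.normal(size=(batch, latent_dim)), np.ones(batch))

scores = 1.0 - discriminator.predict(X, verbose=0).ravel()   # high score = outlier-like
print("top suspected outliers:", np.argsort(scores)[-15:])
```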
Effects of environmental changes in a stair climbing intervention: generalization to stair descent.
PURPOSE Visual improvements have been shown to encourage stair use in worksites independently of written prompts. This study examined whether visual modifications alone can influence behavior in a shopping mall. Climbing one flight of stairs, however, will not confer health benefits. Therefore, this study also assessed whether exposure to the intervention encouraged subsequent stair use. DESIGN Interrupted time-series design. SETTINGS Escalators flanked by a staircase on either side. SUBJECTS Ascending and descending pedestrians (N = 81,948). INTERVENTIONS Following baseline monitoring, a colorful design was introduced on the stair risers of one staircase (the target staircase). A health promotion message was superimposed later on top. The intervention was visible only to ascending pedestrians. Thus, any rise in descending stair use would indicate increased intention to use stairs, which endured after initial exposure to the intervention. MEASURES Observers inconspicuously coded pedestrians' means of ascent/descent and demographic characteristics. RESULTS The design alone had no meaningful impact. Addition of the message, however, increased stair climbing at the target and nontarget staircases by 190% and 52%, respectively. The message also produced a modest increase in stair descent at the target (25%) and nontarget (9%) staircases. CONCLUSIONS In public venues, a message component is critical to the success of interventions. In addition, it appears that exposure to an intervention can encourage pedestrians to use stairs on a subsequent occasion.
Facial Action Coding System
The Facial Action Coding System (FACS) is a widely used protocol for recognizing and labelling facial expression by describing the movement of muscles of the face. FACS is used to objectively measure the frequency and intensity of facial expressions without assigning any emotional meaning to those muscle movements. Instead FACS breaks down facial expressions into their smallest discriminable movements called Action Units. Each Action Unit creates a distinct change in facial appearance, such as an eyebrow lift or nose wrinkle. FACS coders can identify the Action Units which are present on the face when viewing still images or videos. Psychological research has used FACS to examine a variety of research questions including social-emotional development, neuropsychiatric disorders, and deception. In the course of this report we provide an overview of FACS and the Action Units, its reliability as a measure, and how it has been applied in some key areas of psychological research.
Topic Detection and Tracking using idf-Weighted Cosine Coefficient
The goal of TDT Topic Detection and Tracking is to develop automatic methods of identifying topically related stories within a stream of news media. We describe approaches for both detection and tracking based on the well-known idf-weighted cosine coefficient similarity metric. The surprising outcome of this research is that we achieved very competitive results for tracking using a very simple method of feature selection, without word stemming and without a score normalization scheme. The detection task results were not as encouraging, though we attribute this more to the clustering algorithm than to the underlying similarity metric.

1. The Tracking Task
The goal of the topic tracking task for TDT2 is to identify news stories on a particular event defined by a small number (Nt) of positive training examples and a greater number of negative examples. All stories in the news stream subsequent to the final positive example are to be classified as on-topic if they pertain to the event or off-topic if they do not. Although the task is similar to IR routing and filtering tasks, the definition of event leads to at least one significant difference. An event is defined as an occurrence at a given place and time covered by the news media. Stories are on-topic if they cover the event itself or any outcome (strictly defined in [2]) of the event. By this definition, all stories prior to the occurrence are off-topic, which, contrary to the IR tasks mentioned, theoretically provides for unlimited off-topic training material (assuming retrospective corpora are available). We expected to be able to take advantage of these unlimited negative examples but in our final implementation did so only to the extent that we used a retrospective corpus to improve term statistics of our database.

1.1. idf-Weighted Cosine Coefficient
As the basis for our approach we used the idf-weighted cosine coefficient described in [1], often referred to as tf-idf. Using this metric, the tracking task becomes two-fold. Firstly, choosing an optimal set of features to represent topics, i.e. feature selection; the approach must choose features from a single story as well as from multiple stories (for Nt > 1). Secondly, determining a threshold (potentially one per topic) which optimizes the miss and false alarm probabilities for a particular cost function, effectively normalizing the similarity scores across topics. The cosine coefficient is a document similarity metric which has been investigated extensively. Here documents (and queries) are represented as vectors in an n-dimensional space, where n is the number of unique terms in the database. The coefficients of the vector for a given document are the term frequencies (tf) for that dimension. The resulting vectors are extremely sparse, and typically high frequency words (mostly closed class) are ignored. The cosine of the angle between two vectors is an indication of vector similarity and is equal to the dot-product of the vectors normalized by the product of the vector lengths:

\cos(\theta) = \frac{\vec{A} \cdot \vec{B}}{\|\vec{A}\|\,\|\vec{B}\|}

tf-idf (term frequency times inverse document frequency) weighting is an ad-hoc modification to the cosine coefficient calculation which weights words according to their usefulness in discriminating documents. Words that appear in few documents are more useful than words that appear in many documents. This is captured in the equation for the inverse document frequency of a word:

idf(w) = \log_{10} \frac{N}{df(w)}

where df(w) is the number of documents in a collection which contain word w and N is the total number of documents in the collection. For our implementation we weighted only the topic vector by idf and left the story vector under test unchanged. This allows us to calculate and fix an idf-scaled topic vector immediately after training on the last positive example story for a topic. The resulting calculation for the similarity measure becomes:

sim(a, b) = \frac{\sum_{w=1}^{n} tf_a(w)\, tf_b(w)\, idf(w)}{\sqrt{\sum_{w=1}^{n} tf_a^2(w)}\ \sqrt{\sum_{w=1}^{n} tf_b^2(w)}}

1.2. UPENN System Attributes
To facilitate testing, the stories were loaded into a simple document processing system. Once in the system, stories are processed in chronological order, testing all topics simultaneously with a single pass over the data (in accordance with the evaluation specification for this project [2], no information is shared across topics), at a rate of approximately 6000 stories per minute on a Pentium 266 MHz machine. The system tokenizer delimits on white space and punctuation (and discards it), collapses case, but provides no stemming. A list of 179 stop words consisting almost entirely of closed-class words was also employed. In order to improve word statistics, particularly for the beginning of the test set, we prepended a retrospective corpus (the TDT Pilot Data [3]) of approximately 16 thousand stories.

1.3. Feature Selection
The choice as well as the number of features (words) used to represent a topic has a direct effect on the trade-off between miss and false alarm probabilities. We investigated four methods of producing lists of features sorted by their effectiveness in discriminating a topic. This then allowed us to easily vary the number of those features for the topic vectors (we did not employ feature selection on the story under test but used the text in its entirety).
1. Keep all features except those words belonging to the stop word list.
2. Relative to training stories, sort words by document count and keep the n most frequent. This approach has the advantage of finding those words which are common across training stories, and therefore are more general to the topic area, but has the disadvantage of extending poorly from the Nt = 16 case to the Nt = 1 case.
3. For each story, sort by word count (tf) and keep the n most frequent. While this approach tends to ignore low count words which occur in multiple training documents, it generalizes well from the Nt = 16 to the Nt = 1 case.
4. As a variant on the previous method, we tried adding to the initial n features using a simple greedy algorithm. Against a database containing all stories up to and including the Nt-th training story, we queried the database with the n features plus the next most frequent term. If the separation of on-topic and off-topic stories increased, we kept the term; if not, we ignored it and tested the next term in the list. We defined separation as the difference between the average on-topic scores and the average of the 20 highest scoring off-topic documents.
Of the feature selection methods we tried, the fourth one yielded the best results across varying values of Nt, although only slightly better than the much simpler third method. Occam's Razor prompted us to omit this complication from the algorithm. The DET curves (see [5] for a detailed description of DET curves) in Figure 1 show the effect of varying the number of features (obtained from method 3) on the miss and false alarm probabilities. The upper-right-most curve results from choosing the single most frequent feature. Downward to the left, in order, are the curves for 5, 10, 50, 150 and 300 features. After examining similar plots from the pilot, training (the first two-month period of TDT2 data is called the training set, not to be confused with training data), and development-test data sets, we set the number of features for our system to 50. It can be seen that there is limited benefit in adding features after this point.
[Figure 1: DET curve for varying number of features (Nt=4, TDT2 evaluation data set, newswire and ASR transcripts); curves for 1, 5, 10, 50, 150 and 300 features shown against random performance.]

1.4. Normalization / Threshold Selection
With a method of feature selection in place, a threshold for the similarity score must be determined above which stories will be deemed on-topic, and below which they will not. Since each topic is represented by its own unique vector, it cannot be expected that the same threshold value will be optimal across all topics unless the scores are normalized. We tried two approaches for normalizing the topic similarity scores. For the first approach we calculated the similarity of a random sample of several hundred off-topic documents in order to estimate an average off-topic score relative to the topic vector. The normalized score is then a function of the average on-topic and off-topic scores and the off-topic standard deviation. The second approach looked at only the highest scoring off-topic stories returned from a query of the topic vector against a retrospective database, with the score normalized in a similar fashion to the first approach. Both attempts reduced the story-weighted miss probability by approximately 10 percent relative at low false alarm probability. However, this result was achieved at the expense of higher miss probability at higher false alarm probability, and a higher cost at the operating point defined by the cost function for the task:

C_{track} = C_{miss}\, P(miss)\, P_{topic} + C_{fa}\, P(fa)\, (1 - P_{topic})

where Cmiss = 1 (the cost of a miss), Cfa = 1 (the cost of a false alarm), and Ptopic = 0.02 (the a priori probability of a story being on a given topic, chosen based on the TDT2 training topics and training corpus). Because of the less optimal trade-off between error probabilities at the point defined by the cost function, we chose to ignore normalization and look directly at cost as a function of a single threshold value across all topics. We plotted tf-idf score against story- and topic-weighted cost for the training and development-test data sets. As our global threshold we averaged the scores at which story- and topic-weighted cost were minimized. This is depicted in Figure 2. Figure 3 shows the same curves for the evaluation data set. The threshold resulting from the traini
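As a concrete illustration of the similarity measure above, here is a minimal sketch that applies the idf weight to the topic vector only, as described in the text. The toy "stories" and tokenization are placeholders.

```python
# idf-weighted cosine (tf-idf) similarity between a topic vector and stories.
import math
from collections import Counter

docs = [
    "quake strikes coastal city rescue teams deployed",
    "rescue teams search rubble after quake in coastal city",
    "central bank raises interest rates amid inflation fears",
]
tf = [Counter(d.split()) for d in docs]
N = len(docs)
df = Counter(w for t in tf for w in t)
idf = {w: math.log10(N / df[w]) for w in df}

def sim(topic, story):
    """sum_w tf_topic(w) * tf_story(w) * idf(w), normalized by the raw tf vector lengths."""
    num = sum(topic[w] * story.get(w, 0) * idf.get(w, 0.0) for w in topic)
    den = math.sqrt(sum(v * v for v in topic.values())) * \
          math.sqrt(sum(v * v for v in story.values()))
    return num / den if den else 0.0

topic_vector = tf[0]                          # topic trained from the first story
for i, story in enumerate(tf):
    print(f"story {i}: similarity to topic = {sim(topic_vector, story):.3f}")
```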
Tissue characteristics and anatomic distribution of cardiac metastases among patients with advanced systemic cancer assessed by cardiac magnetic resonance (CMR)
Methods The population comprised consecutive adults (≥18 yo) with metastatic systemic neoplasms who underwent contrast-enhanced CMR between 1/2012 and 8/2015. Patients with primary cardiac neoplasms were excluded. CMR was performed using 1.5T (88%) and 3T (12%) clinical (GE) scanners. A standard contrast-enhanced CMR protocol was applied: Cine-CMR (SSFP) was used to assess cardiac structure and morphology. DE-CMR (IR-GRE, TI 250-350 msec, 0.2 mmol/kg gadolinium) was used for tissue characterization; long TI (600 msec) DE-CMR was employed to confirm tissue properties of visualized masses. CMET was defined using established criteria as a discrete, irregularly contoured mass with discrete borders independent of cardiac chambers, myocardium, or central catheters. CMET was further categorized based on enhancement pattern (absent, diffuse, or heterogeneous enhancement with patchy hypoenhancement). Transthoracic echocardiography (echo), if performed clinically within 30 days of CMR, was used to test detection of CMET by conventional imaging. Results 115 patients (57 ± 15 yo, 54% male) with metastatic extra-cardiac primary neoplasms were studied; 29% (n=33) had CMET on CMR. Sarcoma (21% [n=7]) and melanoma (12% [n=4]) were the two leading primary cancer etiologies; atypical primaries also occurred (n=3 pancreatic, n=1 gastrointestinal stromal, n=1 CNS). CMET location varied markedly (45% RV | 27% LV | 18% RA | 12% LA | 27% pericardial); 21% of cases involved multiple cardiac locations. 76% were due to hematogenous or lymphatic spread; 24% were due to direct invasion. DE-CMR demonstrated CMET enhancement in 83% of cases; the enhancement pattern was variable (54% heterogeneous, 46% diffuse). CMET often occurred in the absence of pericardial (27%) or pleural (48%) effusions. 67% of the population underwent echo within 30 (6.7 ± 8.0) days of CMR, including 61% (n=20) of patients with CMET by CMR. As shown (Table 1), echo provided limited diagnostic sensitivity for CMET, whether assessed on a per-patient (75%) or per-location (74%) basis, despite excellent specificity (≥98%). Echo performance varied based on CMET morphology and location; CMET detected by CMR but missed by echo were either intra-myocardial (n=2) or in locations suboptimally evaluated via transthoracic ultrasound (n=2 posterior LA | n=1 RV outflow tract).
A new single phase to three phase converter with active input current shaping for low cost AC motor drives
A single-phase-to-three-phase converter for a low-cost AC motor drive is proposed. The converter employs only six switches and incorporates a front-end half-bridge active rectifier structure that provides the DC link with an active input current shaping feature, which results in sinusoidal input current at close to unity power factor. The front-end rectifier in the converter permits bidirectional power flow and provides for excellent regulation against fluctuations in source voltage, facilitating regenerative braking of the AC motor drive. A control strategy that maintains a near-unity power factor over the full operating range and is easy to implement is described. Suitable design guides for the selection of filter components are presented. Simulation and experimental results that verify the developed theoretical models are also presented.
Impact of cilostazol on restenosis after percutaneous coronary balloon angioplasty.
BACKGROUND Restenosis after percutaneous transluminal coronary (balloon) angioplasty (PTCA) remains a major drawback of the procedure. We previously reported that cilostazol, a platelet aggregation inhibitor, inhibited intimal proliferation after directional coronary atherectomy and reduced the restenosis rate in humans. The present study aimed to determine the effect of cilostazol on restenosis after PTCA. METHODS AND RESULTS Two hundred eleven patients with 273 lesions who underwent successful PTCA were randomly assigned to the cilostazol (200 mg/d) group or the aspirin (250 mg/d) control group. Administration of cilostazol was initiated immediately after PTCA and continued for 3 months of follow-up. Quantitative coronary angiography was performed before PTCA and after PTCA and at follow-up. Reference diameter, minimal lumen diameter, and percent diameter stenosis (DS) were measured by quantitative coronary angiography. Angiographic restenosis was defined as DS at follow-up >50%. Eligible follow-up angiography was performed in 94 patients with 123 lesions in the cilostazol group and in 99 patients with 129 lesions in the control group. The baseline characteristics and results of PTCA showed no significant difference between the 2 groups. However, minimal lumen diameter at follow-up was significantly larger (1.65+/-0.55 vs 1.37+/-0.58 mm; P<0.0001) and DS was significantly lower (34.1+/-17.8% vs 45.6+/-19.3%; P<0.0001) in the cilostazol group. Restenosis and target lesion revascularization rates were also significantly lower in the cilostazol group (17.9% vs 39.5%; P<0.001 and 11.4% vs 28.7%; P<0.001). CONCLUSIONS Cilostazol significantly reduces restenosis and target lesion revascularization rates after successful PTCA.
Physical activity and breast cancer risk: the European Prospective Investigation into Cancer and Nutrition.
There is convincing evidence for a decreased risk of breast cancer with increased physical activity. Uncertainties remain, however, about the role of different types of physical activity on breast cancer risk and the potential effect modification for these associations. We used data from 218,169 premenopausal and postmenopausal women from nine European countries, ages 20 to 80 years at study entry into the European Prospective Investigation into Cancer and Nutrition. Hazard ratios (HR) from multivariate Cox regression models were calculated using metabolic equivalent value-based physical activity variables categorized in quartiles, adjusted for age, study center, education, body mass index, smoking, alcohol use, age at menarche, age at first pregnancy, parity, current oral contraceptive use, and hormone replacement therapy use. The physical activity assessment included recreational, household, and occupational activities. A total physical activity index was estimated based on cross-tabulation of these separate types of activity. During 6.4 years of follow-up, 3,423 incident invasive breast cancers were identified. Overall, increasing total physical activity was associated with a reduction in breast cancer risk among postmenopausal women (P(trend) = 0.06). Specifically, household activity was associated with a significantly reduced risk in postmenopausal (HR, 0.81; 95% confidence interval, 0.70-0.93, highest versus the lowest quartile; P(trend) = 0.001) and premenopausal (HR, 0.71; 95% confidence interval, 0.55-0.90, highest versus lowest quartile; P(trend) = 0.003) women. Occupational activity and recreational activity were not significantly related to breast cancer risk in both premenopausal and postmenopausal women. This study provides additional evidence for a protective effect of physical activity on breast cancer risk.
Religious experience and Contemporary Theological Epistemology
In this volume we present the proceedings from the fourth international Leuven Encounters in Systematic Theology (LEST IV, November 5-8, 2003), which focussed on a critical investigation of the place and role of religious experience in the legitimation structures of contemporary theological thinking patterns. In the first part, the keynote lectures, including the responses, are gathered (among others from L. Boeve, F. Fiorenza, L. Hemming, G. Jantzen, S. Painadath, S. Robert, R. Schaeffler, and S. Van den Bossche). In the second part, a selection of the contributions offered in the thematic seminars is presented.
Administration of bortezomib before and after autologous stem cell transplantation improves outcome in multiple myeloma patients with deletion 17p.
In patients with multiple myeloma (MM), risk stratification by chromosomal abnormalities may enable a more rational selection of therapeutic approaches. In the present study, we analyzed the prognostic value of 12 chromosomal abnormalities in a series of 354 MM patients treated within the HOVON-65/GMMG-HD4 trial. Because of the 2-arm design of the study, we were able to analyze the effect of a bortezomib-based treatment before and after autologous stem cell transplantation (arm B) compared with standard treatment without bortezomib (arm A). For all analyzed chromosomal aberrations, progression-free survival (PFS) and overall survival (OS) were at least equal to or superior in the bortezomib arm compared with the standard arm. Strikingly, patients with del(17p13) benefited the most from the bortezomib-containing treatment: the median PFS in arm A was 12.0 months and in arm B it was 26.2 months (P = .024); the 3-year OS for arm A was 17% and for arm B it was 69% (P = .028). After multivariate analysis, del(17p13) was an independent predictor for PFS (P < .0001) and OS (P < .0001) in arm A, whereas no statistically significant effect on PFS (P = .28) or OS (P = .12) was seen in arm B. In conclusion, the adverse impact of del(17p13) on PFS and OS could be significantly reduced by bortezomib-based treatment, suggesting that long-term administration of bortezomib should be recommended for patients carrying del(17p13).
PyDEC: Software and Algorithms for Discretization of Exterior Calculus
This article describes the algorithms, features, and implementation of PyDEC, a Python library for computations related to the discretization of exterior calculus. PyDEC facilitates inquiry into both physical problems on manifolds as well as purely topological problems on abstract complexes. We describe efficient algorithms for constructing the operators and objects that arise in discrete exterior calculus, lowest-order finite element exterior calculus, and in related topological problems. Our algorithms are formulated in terms of high-level matrix operations which extend to arbitrary dimension. As a result, our implementations map well to the facilities of numerical libraries such as NumPy and SciPy. The availability of such libraries makes Python suitable for prototyping numerical methods. We demonstrate how PyDEC is used to solve physical and topological problems through several concise examples.
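As a flavor of the kind of operator such a library assembles, the sketch below builds the boundary matrix of a tiny simplicial complex and composes it into a graph Laplacian using SciPy sparse matrices; this is a generic illustration under our own assumptions and does not use PyDEC's actual API.

```python
import numpy as np
from scipy import sparse

# A toy complex: vertices {0, 1, 2} and oriented edges (0,1), (1,2), (0,2).
edges = [(0, 1), (1, 2), (0, 2)]
rows, cols, vals = [], [], []
for j, (a, b) in enumerate(edges):
    # Each oriented edge (a, b) contributes -1 at its tail and +1 at its head.
    rows += [a, b]
    cols += [j, j]
    vals += [-1, 1]
boundary1 = sparse.coo_matrix((vals, (rows, cols)), shape=(3, len(edges))).tocsr()

# The transpose of the boundary acts as a discrete gradient (0-forms -> 1-forms);
# composing the two gives the vertex Laplacian, a basic object in this setting.
laplacian0 = boundary1 @ boundary1.T
print(laplacian0.toarray())
```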
Can hysterosalpingo-contrast sonography replace hysterosalpingography in confirming tubal blockage after hysteroscopic sterilization and in the evaluation of the uterus and tubes in infertile patients?
OBJECTIVE The objective of the study was to assess the accuracy of hysterosalpingo-contrast sonography (HyCoSy) in establishing tubal patency or blockage and evaluating the uterine cavity by comparing it with hysteroscopy laparoscopy (HLC) or hysterosalpingography (HSG). STUDY DESIGN This study was a chart review evaluating infertility patients and patients who had undergone hysteroscopic sterilization who underwent both HyCoSy and HLC or HyCoSy and HSG at private offices associated with university hospitals. Sensitivity, specificity, positive predictive value, and negative predictive value of HyCoSy were calculated. RESULTS HyCoSy compared with HLC had a sensitivity of 97% and specificity of 82%, and HyCoSy compared with HSG was 100% concordant. Uterine cavities evaluated by sonohysterography and hysteroscopy were 100% concordant. CONCLUSION HyCoSy is accurate in determining tubal patency and evaluating the uterine cavity, suggesting it could supplant HSG not only as the first-line diagnostic test in an infertility workup but also in confirming tubal blockage after hysteroscopic sterilization.
Optical Character Recognition Techniques: A survey
This paper presents a literature review of English OCR techniques. An English OCR system is essential for converting the many published English books into editable computer text files. Recent research in this area has produced new methodologies to overcome the complexity of English writing styles. However, these algorithms have not yet been tested on the complete English alphabet. Hence, a system is required which can handle all classes of English text and identify characters among these classes.
Spice-compatible modeling of high injection and propagation of minority carriers in the substrate of Smart Power ICs
Classical substrate noise analysis treats the silicon resistivity of an integrated circuit as doping dependent only and, in addition, neglects diffusion currents. In power circuits minority carriers are injected into the substrate and propagate by drift–diffusion. In this case the conductivity of the substrate is spatially modulated, and this effect is particularly important in the high injection regime. In this work a description of the coupling between majority and minority drift–diffusion currents is presented. A distributed model of the substrate is then proposed to take into account the conductivity modulation and its feedback on diffusion processes. The model is expressed in terms of equivalent circuits in order to be fully compatible with circuit simulators. The simulation results are then discussed for diodes and bipolar transistors and compared to the ones obtained from physical device simulations and measurements. © 2014 Published by Elsevier Ltd.
Liposomes in drug delivery: Progress and limitations
Liposomes are microparticulate lipoidal vesicles which are under extensive investigation as drug carriers for improving the delivery of therapeutic agents. Due to new developments in liposome technology, several liposome-based drug formulations are currently in clinical trials, and recently some of them have been approved for clinical use. Reformulation of drugs in liposomes has provided an opportunity to enhance the therapeutic indices of various agents mainly through alteration in their biodistribution. This review discusses the potential applications of liposomes in drug delivery with examples of formulations approved for clinical use, and the problems associated with further exploitation of this drug delivery system. © 1997 Elsevier Science B.V.
Carrier Frequency Offset Compensation with Successive Cancellation in Uplink OFDMA Systems
Similar to OFDM systems, OFDMA systems also suffer from frequency mismatches between the receiver and the transmitter. However, the fact that each uplink user has a different frequency offset makes the compensation more challenging than in OFDM systems. This letter proposes successive interference cancellation (SIC) for compensating the frequency offset in uplink OFDMA systems. A decorrelator is used to remove the inter-carrier interference (ICI) within a user's signal, and successive cancellation is applied to mitigate the multiple-access interference (MAI) arising from the frequency differences among uplink users. The proposed algorithm is shown to eliminate the interference and has a manageable complexity.
Cyclodextrins: application in different routes of drug administration.
The objective of this review article is to explain the use of cyclodextrin in the different routes of drug administration. The article gives the chemistry of cyclodextrins and addresses the issue of the mechanism of drug release from cyclodextrin complexes. Dilution, competitive displacement, protein binding, change in ionic strength and temperature and drug uptake by tissues are the different release mechanisms of the drug from the drug-cyclodextrin complex discussed here. Use and its limitations in the different drug delivery systems like nasal, ophthalmic, transdermal and rectal drug delivery are explained. The application of the cyclodextrins in the oral drug delivery is detailed in this review. Many studies have shown that cyclodextrins are useful additives in the different routes of drug administration because of increased aqueous solubility, stability, bioavailability and reduced drug irritation.
Evaluation of the impact of plug-in electric vehicle loading on distribution system operations
Electric transportation has many attractive features in today's energy environment including decreasing greenhouse gas emissions from the transportation sector, reducing dependence on imported petroleum, and potentially providing consumers a lower cost alternative to gasoline. Plug-in hybrid Electric (PHEV) vehicles represent the most promising approach to electrification of a significant portion of the transportation sector. Electric power utilities recognize this possibility and must analyze the associated impacts to electric system operations. This paper provides details of analytical framework developed to evaluate the impact of PHEV loading on distribution system operations as part of a large, multi-utility collaborative study. This paper also summarizes partial results of the impact of PHEVs on one utility distribution feeder.
Learning to rank search results for time-sensitive queries
Retrieval effectiveness of temporal queries can be improved by taking into account the time dimension. Existing temporal ranking models follow one of two main approaches: 1) a mixture model linearly combining textual similarity and temporal similarity, and 2) a probabilistic model generating a query from the textual and temporal part of document independently. In this paper, we propose a novel time-aware ranking model based on learning-to-rank techniques. We employ two classes of features for learning a ranking model, entity-based and temporal features, which are derived from annotation data. Entity-based features are aimed at capturing the semantic similarity between a query and a document, whereas temporal features measure the temporal similarity. Through extensive experiments we show that our ranking model significantly improves the retrieval effectiveness over existing time-aware ranking models.
A Comparative Study on Steganography Digital Images: A Case Study of Scalable Vector Graphics (SVG) and Portable Network Graphics (PNG) Images Formats
Today image steganography plays a key role in exchanging secret data over the internet. However, the optimal choice of image format for steganography is still an open issue; this research addresses that question. It conducts a comparative study between the Scalable Vector Graphics (SVG) image format and the Portable Network Graphics (PNG) image format. As the results show, the SVG image format is more efficient than the PNG image format in terms of capacity and scalability before and after processing steganography. In addition, the SVG image format helps to increase simplicity and performance for processing steganography, since it is an XML text file. Our comparative study provides significant results between SVG and PNG images, which have not been seen in the previous related studies. Keywords—Image steganography; data hiding; raster and vector images; Scalable Vector Graphics (SVG) and Portable Network Graphics (PNG) image formats
Combining LSTM and Latent Topic Modeling for Mortality Prediction
There is a great need for technologies that can predict the mortality of patients in intensive care units with both high accuracy and accountability. We present joint end-to-end neural network architectures that combine long short-term memory (LSTM) and a latent topic model to simultaneously train a classifier for mortality prediction and learn latent topics indicative of mortality from textual clinical notes. For topic interpretability, the topic modeling layer has been carefully designed as a single-layer network with constraints inspired by LDA. Experiments on the MIMIC-III dataset show that our models significantly outperform prior models that are based on LDA topics in mortality prediction. However, we achieve limited success with our method for interpreting topics from the trained models by looking at the neural network weights.
As-rigid-as-possible shape interpolation
We present an object-space morphing technique that blends the interiors of given two- or three-dimensional shapes rather than their boundaries. The morph is rigid in the sense that local volumes are least-distorting as they vary from their source to target configurations. Given a boundary vertex correspondence, the source and target shapes are decomposed into isomorphic simplicial complexes. For the simplicial complexes, we find a closed-form expression allocating the paths of both boundary and interior vertices from source to target locations as a function of time. Key points are the identification of the optimal simplex morphing and the appropriate definition of an error functional whose minimization defines the paths of the vertices. Each pair of corresponding simplices defines an affine transformation, which is factored into a rotation and a stretching transformation. These local transformations are naturally interpolated over time and serve as the basis for composing a global coherent least-distorting transformation.
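The rotation/stretch factorization at the heart of this interpolation can be sketched in a few lines: the snippet below uses a polar decomposition of a single 2-D affine map and blends the two factors separately. It is a simplified illustration under our own assumptions, not the paper's per-simplex implementation.

```python
import numpy as np
from scipy.linalg import polar

def interpolate_affine(A, t):
    """Blend the identity into the 2x2 map A in an as-rigid-as-possible spirit:
    factor A = R @ S (rotation times symmetric stretch), then interpolate the
    rotation angle and the stretch separately."""
    R, S = polar(A)                        # polar decomposition: A = R @ S
    angle = np.arctan2(R[1, 0], R[0, 0])   # rotation angle of R
    Rt = np.array([[np.cos(t * angle), -np.sin(t * angle)],
                   [np.sin(t * angle),  np.cos(t * angle)]])
    St = (1 - t) * np.eye(2) + t * S       # linear blend of the stretch part
    return Rt @ St

A = np.array([[0.0, -2.0],
              [1.0,  0.0]])                # a 90-degree rotation combined with scaling
for t in (0.0, 0.5, 1.0):
    print(t, "\n", interpolate_affine(A, t))
```

Interpolating the rotation and stretch separately keeps intermediate shapes from collapsing, which is what a naive linear blend of the matrix entries would do for large rotations.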
Plastic surgeon compliance with national safety initiatives: clinical outcomes and "never events".
BACKGROUND Venous thromboembolism and surgical-site infection have been identified as preventable complications that are addressed by the National Quality Forum and the Surgical Care Improvement Project. The authors examined compliance of faculty with venous thromboembolism and surgical-site infection prophylaxis and incidence of adverse outcomes in patients at risk. METHODS The authors performed retrospective chart reviews on 243 patients who underwent abdominoplasty or panniculectomy from 2000 to 2007 and documented demographics and adverse outcomes. Analysis was completed using Pearson's chi-square and Fisher's exact test for categorical variables. Significance was set at p < 0.05. Obesity was defined as body mass index more than 30 and morbid obesity was defined as body mass index more than 40. RESULTS Of 243 patients, 144 (59 percent) were obese. Seventeen patients (7 percent) suffered complications. All 243 patients received at least one form of venous thromboembolism prophylaxis. One patient had a deep venous thrombosis, and two had pulmonary embolism. These three patients were morbidly obese. Seventy-four percent of patients received appropriate antibiotics. Thirteen patients (5.3 percent) developed significant postoperative infection requiring hospitalization, 12 (92 percent) of whom received appropriate antibiotics. Eleven of these 13 patients (85 percent) were obese, and seven (54 percent) were morbidly obese. Obesity proved to be the only significant risk factor (p > 0.05). CONCLUSIONS Despite very good compliance with safe practice initiatives, significant adverse outcomes occurred. Obesity was the only pervasive risk factor. This study highlights the potential need for compliance with quality measures and demonstrates that adverse outcomes may result despite adherence to best surgical practices.
Effect of continuous combined therapy with vitamin K(2) and vitamin D(3) on bone mineral density and coagulofibrinolysis function in postmenopausal women.
OBJECTIVES To investigate the therapeutic effect of combined use of vitamin K(2) and D(3) on vertebral bone mineral density in postmenopausal women with osteopenia and osteoporosis. SUBJECTS AND METHODS We enrolled 172 women with vertebral bone mineral density <0.98 g/cm(2) (osteopenia and osteoporosis) as measured by dual-energy X-ray absorptiometry. In this study, we employed the criteria for diagnosis of osteopenia and osteoporosis using dual-energy X-ray absorptiometry proposed by the Japan Society of Bone Metabolism in 1996. Subjects were randomized into four groups of 43 subjects each (a vitamin K(2) therapy group, a vitamin D(3) therapy group, a vitamin K(2) and D(3) combined therapy group, or a control group receiving dietary therapy alone) and treated with the respective agents for 2 years, with bone mineral density measured prior to therapy and after 6, 12, 18, and 24 months of treatment. The bone metabolism markers analyzed were serum type 1 collagen carboxyterminal propeptide (P1CP), serum intact osteocalcin, and urinary pyridinoline. Tests of blood coagulation function consisted of measurement of activated partial thromboplastin time (APTT) and analysis of concentrations of antithrombin III (AT III), fibrinogen, and plasminogen. RESULTS Combined therapy with vitamin K(2) and D(3) for 24 months markedly increased bone mineral density (4.92 +/- 7.89%), while vitamin K(2) alone increased it only 0.135 +/- 5.44%. The bone markers measured revealed stimulation of both bone formation and resorption activity. We observed an increase in coagulation and fibrinolytic activity that was within the normal range, suggesting that balance was maintained in the fibrinolysis-coagulation system. CONCLUSIONS Continuous combination therapy with vitamin K(2) and D(3) may be useful for increasing vertebral bone mass in postmenopausal women. Furthermore, the increase in coagulation function observed during this therapy was within the physiological range, and no adverse reactions were observed.
Baseline echocardiographic values for adult male rats.
BACKGROUND Because of safety, repeatability, and portability, clinical echocardiography is well established as a standard for cardiac anatomy, cardiac function, and hemodynamics. Similarly, application of echocardiography in commonly used rat experimental models would be worthwhile. The use of noninvasive ultrasound imaging in the rat is a potential replacement for more invasive terminal techniques. Although echocardiography has become commonly used in the rat, normal parameters for cardiac anatomy and function, and comparison with established human values, have not been reported. METHODS A total of 44 Sprague-Dawley male rats had baseline echocardiography replicating a protocol for clinical echocardiography. RESULTS Complete 2-dimensional echocardiography for cardiac anatomy and function was obtained in 44 rats. Hemodynamic parameters could be recorded in 85% of rats. The ejection fraction and fractional shortening values of the left ventricle were similar to those reported for healthy human beings. Pulsed Doppler velocities of atrial systole for mitral valve inflow, pulmonary vein reversal, and Doppler tissue of the lateral mitral valve annulus also had similar means as healthy human beings. The calculated left ventricular mass was at the same order of magnitude as a proportion of body weight of rat to man. All other observations in the clinical protocol were different from those reported in healthy human beings. CONCLUSION The use of echocardiography for assessment of cardiac anatomy, function, and hemodynamics can be consistently applied to the rat and replicates much of the information used routinely in human echocardiography.
Learning to Remember Translation History with a Continuous Cache
Existing neural machine translation (NMT) models generally translate sentences in isolation, missing the opportunity to take advantage of document-level information. In this work, we propose to augment NMT models with a very light-weight cache-like memory network, which stores recent hidden representations as translation history. The probability distribution over generated words is updated online depending on the translation history retrieved from the memory, endowing NMT models with the capability to dynamically adapt over time. Experiments on multiple domains with different topics and styles show the effectiveness of the proposed approach with negligible impact on the computational cost.
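A rough numpy sketch of how such a cache can modulate the output distribution is given below; the similarity function, interpolation weight, and toy vocabulary are our own assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def cache_augmented_probs(model_probs, cache_keys, cache_values, query, vocab,
                          tau=1.0, lam=0.3):
    """Match the current decoder state (query) against cached hidden states,
    turn the matches into a distribution over the cached target words, and
    interpolate with the model distribution (a sketch of the general recipe)."""
    if not cache_keys:
        return model_probs
    keys = np.stack(cache_keys)                 # (cache_size, d)
    scores = keys @ query / tau                 # similarity to each cached state
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    cache_probs = np.zeros_like(model_probs)
    for w, word in zip(weights, cache_values):  # scatter weights onto word ids
        cache_probs[vocab[word]] += w
    return (1 - lam) * model_probs + lam * cache_probs

# Hypothetical toy vocabulary, model distribution, and translation history.
vocab = {"the": 0, "bank": 1, "river": 2}
model_probs = np.array([0.5, 0.3, 0.2])
cache_keys = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
cache_values = ["river", "bank"]
print(cache_augmented_probs(model_probs, cache_keys, cache_values,
                            query=np.array([0.2, 0.9]), vocab=vocab))
```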
An Optimal Power Scheduling Method for Demand Response in Home Energy Management System
With the development of smart grid, residents have the opportunity to schedule their power usage in the home by themselves for the purpose of reducing electricity expense and alleviating the power peak-to-average ratio (PAR). In this paper, we first introduce a general architecture of energy management system (EMS) in a home area network (HAN) based on the smart grid and then propose an efficient scheduling method for home power usage. The home gateway (HG) receives the demand response (DR) information indicating the real-time electricity price that is transferred to an energy management controller (EMC). With the DR, the EMC achieves an optimal power scheduling scheme that can be delivered to each electric appliance by the HG. Accordingly, all appliances in the home operate automatically in the most cost-effective way. When only the real-time pricing (RTP) model is adopted, there is the possibility that most appliances would operate during the time with the lowest electricity price, and this may damage the entire electricity system due to the high PAR. In our research, we combine RTP with the inclining block rate (IBR) model. By adopting this combined pricing model, our proposed power scheduling method would effectively reduce both the electricity cost and PAR, thereby, strengthening the stability of the entire electricity system. Because these kinds of optimization problems are usually nonlinear, we use a genetic algorithm to solve this problem.
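The kind of cost function such a scheduler optimizes can be illustrated with a small sketch that prices a schedule under combined RTP and IBR rates; the prices, block threshold, appliance profile, and exhaustive search below are illustrative assumptions standing in for the paper's genetic algorithm.

```python
# Hypothetical hourly real-time prices (RTP) for a 6-hour window ($/kWh).
rtp = [0.10, 0.08, 0.25, 0.30, 0.12, 0.09]
ibr_threshold = 3.0    # kWh per hour above which the inclining block rate applies
ibr_multiplier = 1.5   # price multiplier for consumption beyond the threshold
base_load = [2.5, 2.0, 2.8, 3.2, 2.2, 2.0]  # assumed non-shiftable load (kWh)

def hourly_cost(load_kwh, price):
    # Combined RTP + IBR pricing: energy above the block threshold costs more.
    below = min(load_kwh, ibr_threshold)
    above = max(load_kwh - ibr_threshold, 0.0)
    return below * price + above * price * ibr_multiplier

def total_cost(start, duration=2, appliance_kwh=1.5):
    # Window cost if a shiftable appliance runs `duration` hours from `start`
    # (this is the kind of fitness a genetic algorithm would evaluate).
    load = list(base_load)
    for h in range(start, start + duration):
        load[h] += appliance_kwh
    return sum(hourly_cost(l, p) for l, p in zip(load, rtp))

best = min(range(len(rtp) - 1), key=total_cost)
print("cheapest start hour:", best, "cost:", round(total_cost(best), 3))
```

Because the IBR term penalizes piling consumption into a single cheap hour, the cheapest schedule is not necessarily the one that chases the lowest RTP price, which is exactly the PAR-flattening effect described above.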
On the convergence of the Weiszfeld algorithm
In this work we analyze the paper "Brimberg, J. (1995): The Fermat-Weber location problem revisited. Mathematical Programming 71, 71–76", which claims to close the question on the conjecture posed by Chandrasekaran and Tamir in 1989 on the convergence of the Weiszfeld algorithm. Counterexamples are given to the proofs presented in Brimberg's paper.
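For reference, the iteration under discussion is short enough to state as code; the following is a standard textbook form of the Weiszfeld algorithm (not tied to either paper's analysis), with the usual caveat that an iterate landing exactly on a data point is the degenerate case the convergence debate concerns.

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-9):
    """Weiszfeld iteration for the Fermat-Weber point (geometric median):
    repeatedly re-weight the points by the inverse of their distance to the
    current estimate."""
    y = points.mean(axis=0)                    # a common starting point
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)                 # guard against division by zero
        w = 1.0 / d
        y_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 5.0]])
print(weiszfeld(pts))
```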
Automatic Soccer Player Tracking in Single Camera with Robust Occlusion Handling Using Attribute Matching
This paper presents an automatic method to track soccer players in soccer video recorded from a single camera where the occurrence of pan-tilt-zoom can take place. The automatic object tracking is intended to support texture extraction in a free viewpoint video authoring application for soccer video. To ensure that the identity of the tracked object can be correctly obtained, background segmentation is performed and automatically removes commercial billboards whenever it overlaps with the soccer player. Next, object tracking is performed by an attribute matching algorithm for all objects in the temporal domain to find and maintain the correlation of the detected objects. The attribute matching process finds the best match between two objects in different frames according to their pre-determined attributes: position, size, dominant color and motion information. Utilizing these attributes, the experimental results show that the tracking process can handle occlusion problems such as occlusion involving more than three objects and occluded objects with similar color and moving direction, as well as correctly identify objects in the presence of camera movements. key words: free viewpoint, attribute matching, automatic object tracking, soccer video
Corrosion behavior of Ni-based coating containing spherical tungsten carbides in hydrochloric acid solution
A Ni-based alloy coating with 30 wt.% spherical tungsten carbide particles was prepared through plasma transferred arc welding on 42CrMo steel. The composition and microstructure of the coating were examined through X-ray diffraction and scanning electron microscopy with energy-dispersive spectrometry. The corrosion behaviors of the coating compared to the Ni coating without tungsten carbide particles and to the bare substrate in a 0.5 mol/L HCl solution were presented through polarization curves, electrochemical impedance spectroscopy (EIS) measurements and long-term immersion tests. The results demonstrated that the composite coating microstructure comprised Ni matrix, Ni-rich phase, tungsten carbide particles, W-rich phase and Cr-rich phase. The polarization curves and EIS measurements presented that a passivation film, which mainly included Ni, Cr, Fe and W oxides, was formed in the composite coating that protected the substrate from corrosion by HCl solution. In the immersion tests, a micro-galvanic reaction at the new-formed phases and Ni matrix interface caused severe pit corrosion and Ni matrix consumption. The debonding of Ni-rich and W-rich phases could be observed with the immersion time extension. The tungsten carbide particles and Cr-rich phase were still attached on the surface for up to 30 days.
Towards a semantic-based approach for software reusable component classification and retrieval
In this paper, we propose a semantic-based approach to improve software component reuse. The approach extends the reusable software library to the World Wide Web; overcomes the keyword-based barrier by allowing user queries in natural language; treats a software component as a service described in a semantic service representation format; enhances retrieval by semantically matching a user query's semantic representation against software component semantic descriptions with respect to a domain ontology; and finally stores the relevant software components in a reusable repository based on a UDDI infrastructure. The technologies applied to achieve this goal include Natural Language Processing, Web services, the Semantic Web, Conceptual Graphs, and domain ontologies. The research in the first phase focuses on the classification and retrieval of reusable software components. In the classification process, natural language processing and domain knowledge technologies are employed for program understanding down to the code level, and Web services and Semantic Web technologies as well as Conceptual Graphs are used to semantically describe/represent a component. In the retrieval process, a user query in natural language is translated into semantic representation formats in order to improve retrieval recall and precision by deploying the same semantic representation technologies on both the user query side and the component side.
The direct antiglobulin test: a critical step in the evaluation of hemolysis.
The direct antiglobulin test (DAT) is a laboratory test that detects immunoglobulin and/or complement on the surface of red blood cells. The utility of the DAT is to sort hemolysis into an immune or nonimmune etiology. As with all tests, DAT results must be viewed in light of clinical and other laboratory data. This review highlights the most common clinical situations where the DAT can help classify causes of hemolysis, including autoimmune hemolytic anemia, transfusion-related hemolysis, hemolytic disease of the fetus/newborn, drug-induced hemolytic anemia, passenger lymphocyte syndrome, and DAT-negative hemolytic anemia. In addition, the pitfalls and limitations of the test are addressed. False reactions may occur with improper technique, including improper washing, centrifugation, and specimen agitation at the time of result interpretation. Patient factors, such as spontaneous red blood cell agglutination, may also contribute to false results.
Passive YouTube QoE Monitoring for ISPs
Over the last decade, Quality of Experience (QoE) has become the guiding paradigm for enabling a more user-centric understanding of the quality of communication networks and services. The intensifying competition among ISPs and the exponentially increasing traffic volumes caused by online video platforms like YouTube are forcing service providers to integrate QoE into their corporate DNA. This paper investigates the problem of YouTube QoE monitoring from an access provider's perspective. To this end, we present three novel methods for in-network measurement of the QoE impairment that dominates user perception in the context of HTTP video streaming: stalling of playback. Our evaluation results show that it is possible to detect application-level stalling events with high accuracy by using network-level passive probing only. However, only the most complex and most accurate approach can be used for QoE prediction, due to the non-linearities inherent in human quality perception.
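One simple way to approximate playback stalling from passively observed segment downloads is a buffer model; the sketch below is our own simplified illustration of that idea, with assumed segment duration and startup buffer, and is not one of the three methods proposed in the paper.

```python
def estimate_stalls(arrival_times, segment_seconds=2.0, startup_buffer=4.0):
    """Each downloaded segment adds segment_seconds of playable video; playback
    starts once startup_buffer seconds are buffered and stalls whenever the
    buffer runs dry before the next segment arrives. Returns stall start times."""
    buffered = 0.0      # seconds of video downloaded so far
    played = 0.0        # seconds of video already played out
    play_clock = None   # wall-clock time of the last playback update (None = stalled)
    stall_starts = []
    for t in arrival_times:
        if play_clock is not None:
            elapsed = t - play_clock
            if elapsed >= buffered - played:   # buffer emptied before this arrival
                stall_starts.append(play_clock + (buffered - played))
                played = buffered
                play_clock = None
            else:
                played += elapsed
                play_clock = t
        buffered += segment_seconds            # this arrival adds one segment
        if play_clock is None and buffered - played >= startup_buffer:
            play_clock = t                     # (re)start playback
    return stall_starts

# Hypothetical segment arrival times (seconds) with a mid-stream throughput drop.
print(estimate_stalls([0.5, 1.0, 1.5, 2.0, 12.0, 12.5, 13.0]))
```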
High performance of dual band/dual polarization compact OMT
The development of a compact circular polarization Orthomode Transducer (OMT) working in two frequency bands with dual circular polarization (RHCP & LHCP) is presented. The device covers the complete communication spectrum allocated at C-band. At the same time, the device presents high power handling capability and very low mass and envelope size. The OMT plus a feed horn are used to illuminate a reflector antenna, the surface of which is shaped to provide domestic or regional coverage from geostationary orbit. The full-band operation increases the earth-satellite communication capability. The paper shows the selected OMT architecture and the RF performance at unit level and at component level. RF power aspects like multipaction and PIM are addressed. This development was performed under the European Space Agency ESA ARTES-4 program.
Depression and pain comorbidity: a literature review.
Because depression and painful symptoms commonly occur together, we conducted a literature review to determine the prevalence of both conditions and the effects of comorbidity on diagnosis, clinical outcomes, and treatment. The prevalences of pain in depressed cohorts and depression in pain cohorts are higher than when these conditions are individually examined. The presence of pain negatively affects the recognition and treatment of depression. When pain is moderate to severe, impairs function, and/or is refractory to treatment, it is associated with more depressive symptoms and worse depression outcomes (eg, lower quality of life, decreased work function, and increased health care utilization). Similarly, depression in patients with pain is associated with more pain complaints and greater impairment. Depression and pain share biological pathways and neurotransmitters, which has implications for the treatment of both concurrently. A model that incorporates assessment and treatment of depression and pain simultaneously is necessary for improved outcomes.
Soft and Declarative Fishing of Information in Big Data Lake
In recent years, many fields have been identified that experience a sudden proliferation of data, which increases both the volume of data that must be processed and the variety of formats in which the data is stored. This puts pressure on existing compute infrastructures and data analysis methods, as more and more data are considered a useful source of information for making critical decisions in particular fields. Among these fields are several areas related to human life, e.g., various branches of medicine, where the uncertainty of data complicates the data analysis, and where the inclusion of fuzzy expert knowledge in data processing brings many advantages. In this paper, we show how fuzzy techniques can be incorporated in big data analytics carried out with the declarative U-SQL language over a big data lake located on the cloud. We define the concept of the big data lake together with the Extract, Process, and Store process performed while schematizing and processing data from the Data Lake, and while storing results of the processing. Our solution, developed as a Fuzzy Search Library for Data Lake, introduces the possibility of massively parallel, declarative querying of a big data lake with simple and complex fuzzy search criteria, using fuzzy linguistic terms in various data transformations, and fuzzy grouping. The presented ideas are exemplified by a distributed analysis of large volumes of biomedical data on the Microsoft Azure cloud. Results of the performed tests confirm that the presented solution is highly scalable on the cloud and is a successful step toward soft and declarative processing of data on a large scale. The solution presented in this paper directly addresses three characteristics of big data, i.e., volume, variety, and velocity, and indirectly addresses veracity and value.
Classification Algorithm for Feature Extraction using Linear Discriminant Analysis and Cross-correlation on ECG Signals
This paper develops a novel framework for feature extraction based on a combination of Linear Discriminant Analysis and cross-correlation. Multiple electrocardiogram (ECG) signals, acquired from the human heart in different states (e.g., in fear, during exercise), are used for the simulations. ECG signals are composed of P, Q, R, S and T waves; they are characterized by several parameters, and much of the important information lies in the HRV (heart rate variability). Human interpretation of such signals requires experience, and incorrect readings could have potentially life-threatening and even fatal consequences, so a proper interpretation of ECG signals is of paramount importance. This work focuses on designing a machine-based classification algorithm for ECG signals. The proposed algorithm filters the ECG signals to reduce the effects of noise and then uses the Fourier transform to move the signals into the frequency domain for analysis. The frequency-domain signal is cross-correlated with predefined classes of ECG signals, in a manner similar to pattern recognition, and the resulting correlation coefficients are thresholded. Linear Discriminant Analysis is also applied: LDA forms classes from the different ECG signals on the basis of the mean, global mean, mean subtraction, transpose, covariance, probabilities and frequencies, and thresholds are set for each class. The feature space is divided into regions corresponding to the classes, each region defined by its thresholds, which makes the method useful for distinguishing ECG signals from one another and allows the details of the LDA output graph to be read off quickly. The output generated after applying cross-correlation and LDA indicates whether the input is a normal, fear, smoking or exercise ECG signal. As a result, the system can help clinically on a large scale by providing reliable and accurate classification in a fast and computationally efficient manner; doctors can use it to work more efficiently, as it makes very few errors, with an accuracy between 90% and 95%.
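A compact sketch of the correlation-and-threshold step described above is given below; the sampling rate, low-pass cutoff, synthetic templates, and decision threshold are illustrative assumptions rather than the authors' settings, and the LDA stage is omitted.

```python
import numpy as np

def classify_ecg(signal, templates, fs=250.0, threshold=0.6):
    """Band-limit the signal via the FFT, cross-correlate it with class
    templates (e.g., normal, fear, smoking, exercise), threshold the peak
    correlation coefficients, and return the best-matching class."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[freqs > 40.0] = 0.0                     # crude noise reduction
    clean = np.fft.irfft(spec, n=len(signal))

    scores = {}
    a = (clean - clean.mean()) / (clean.std() + 1e-12)
    for label, tpl in templates.items():
        b = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
        corr = np.correlate(a, b, mode="full") / len(a)   # normalized cross-correlation
        scores[label] = corr.max()
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else "unclassified"), scores

# Hypothetical templates: synthetic beats at two different rates, 4 s at 250 Hz.
t = np.linspace(0, 4, 1000)
templates = {"normal": np.sin(2 * np.pi * 1.0 * t),
             "exercise": np.sin(2 * np.pi * 2.0 * t)}
label, scores = classify_ecg(np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(1000),
                             templates)
print(label, {k: round(v, 2) for k, v in scores.items()})
```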
Fast generation of realistic virtual humans
In this paper we present a complete pipeline to create ready-to-animate virtual humans by fitting a template character to a point set obtained by scanning a real person using multi-view stereo reconstruction. Our virtual humans are built upon a holistic character model and feature a detailed skeleton, fingers, eyes, teeth, and a rich set of facial blendshapes. Furthermore, due to the careful selection of techniques and technology, our reconstructed humans are quite realistic in terms of both geometry and texture. Since we represent our models as single-layer triangle meshes and animate them through standard skeleton-based skinning and facial blendshapes, our characters can be used in standard VR engines out of the box. By optimizing for computation time and minimizing manual intervention, our reconstruction pipeline is capable of processing whole characters in less than ten minutes.
Hepatitis B virus nucleic acid testing in Chinese blood donors with normal and elevated alanine aminotransferase.
BACKGROUND Nucleic acid testing (NAT) is currently not a routine donor test in China. The aim of this study was to evaluate the current residual risk of hepatitis B virus (HBV) transmission and the value of ALT testing in preventing HBV infection. STUDY DESIGN AND METHODS From January 2008 to September 2009, a total of 5521 qualified donations by routine screening and 5034 deferred donations due to elevated ALT alone were collected from five blood centers. Samples were tested for HBV DNA by triplex individual-donation (ID)-NAT (ULTRIO assay, on the TIGRIS system, Novartis Diagnostics). HBV NAT-reactive samples were further analyzed by HBV serology, alternative NAT, and viral load and were diluted to simulate if they could be detected in a minipool-NAT. RESULTS There was no significant difference in the HBV NAT-yield rate between the qualified donations group (5/5521) and the deferred donations group (4/5034). Of these nine potential HBV-yield cases, one donor (11%) was a possible HBV window-period donor, one (11%) was a chronic HBV carrier, and seven (78%) had probable or confirmed occult HBV infections. Of seven potential HBV-yield cases quantified, the viral loads were less than or equal to 70.0 IU/mL. Minipool testing (minipools of 4, 8, and 16 donations) would miss 43% to 79% of the nine HBV-yield donations. CONCLUSIONS Based on our findings in qualified donations, we estimate that the nationwide implementation of ID-NAT testing for HBV DNA in China would detect an additional 9964 viremic donations per year. ALT testing seems to have no significant value in preventing transfusion-transmitted HBV infection. ID-NAT versus simulated minipool-NAT using the ULTRIO test demonstrates the benefit to implement a more sensitive NAT strategy in regions of high HBV endemicity.
The transformation of invention in nineteenth century American rhetoric
The virtual disappearance of an inventio of discovery in American rhetoric during the nineteenth century has been extensively chronicled. This discussion attempts to explain the development, starting with the truism that rhetorical systems are cultural products arising in response to the needs of an age. The shift in the nature of invention was the direct result of the supremacy of Campbell, Blair, and Whately in rhetorical discussions of the last century, the three thinkers proving compatible with the dominant American views in philosophy, science, and art—the philosophy being Scottish Common Sense Realism, the science practical rather than theoretical, and the aesthetic conservative socially and politically.
Fasting serum taurine-conjugated bile acids are elevated in type 2 diabetes and do not change with intensification of insulin.
CONTEXT Bile acids (BAs) are newly recognized signaling molecules in glucose and energy homeostasis. Differences in BA profiles with type 2 diabetes mellitus (T2D) remain incompletely understood. OBJECTIVE The objective of the study was to assess serum BA composition in impaired glucose-tolerant, T2D, and normal glucose-tolerant persons and to monitor the effects of improving glycemia on serum BA composition in T2D patients. DESIGN AND SETTING This was a cross-sectional cohort study in a general population (cohort 1) and nonrandomized intervention (cohort 2). PATIENTS AND INTERVENTIONS Ninety-nine volunteers underwent oral glucose tolerance testing, and 12 persons with T2D and hyperglycemia underwent 8 weeks of intensification of treatment. MAIN OUTCOME MEASURES Serum free BA and respective taurine and glycine conjugates were measured by HPLC tandem mass spectrometry. RESULTS Oral glucose tolerance testing identified 62 normal-, 25 impaired glucose-tolerant, and 12 T2D persons. Concentrations of total taurine-conjugated BA were higher in T2D and intermediate in impaired- compared with normal glucose-tolerant persons (P = .009). Univariate regression revealed a positive association between total taurine-BA and fasting glucose (R = 0.37, P < .001), postload glucose (R = 0.31, P < .002), hemoglobin A1c (R = 0.26, P < .001), fasting insulin (R = 0.21, P = .03), and homeostatic model assessment-estimated insulin resistance (R = 0.26, P = .01) and an inverse association with oral disposition index (R = -0.36, P < .001). Insulin-mediated glycemic improvement in T2D patients did not change fasting serum total BA or BA composition. CONCLUSION Fasting taurine-conjugated BA concentrations are higher in T2D and intermediate in impaired compared with normal glucose-tolerant persons and are associated with fasting and postload glucose. Serum BAs are not altered in T2D in response to improved glycemia. Further study may elucidate whether this pattern of taurine-BA conjugation can be targeted to provide novel therapeutic approaches to treat T2D.
Induced nucleotide specificity in a GTPase.
In signal-recognition particle (SRP)-dependent protein targeting to the bacterial plasma membrane, two GTPases, Ffh (a subunit of the bacterial SRP) and FtsY (the bacterial SRP receptor), act as GTPase activating proteins for one another. The molecular mechanism of this reciprocal GTPase activation is poorly understood. In this work, we show that, unlike other GTPases, free FtsY exhibits only low preference for GTP over other nucleotides. On formation of the SRP.FtsY complex, however, the nucleotide specificity of FtsY is enhanced 10(3)-fold. Thus, interactions with SRP must induce conformational changes that directly affect the FtsY GTP-binding site: in response to SRP binding, FtsY switches from a nonspecific "open" state to a "closed" state that provides discrimination between cognate and noncognate nucleotides. We propose that this conformational change leads to more accurate positioning of the nucleotide and thus could contribute to activation of FtsY's GTPase activity by a novel mechanism.
In the zone or zoning out? Tracking behavioral and neural fluctuations during sustained attention.
Despite growing recognition that attention fluctuates from moment-to-moment during sustained performance, prevailing analysis strategies involve averaging data across multiple trials or time points, treating these fluctuations as noise. Here, using alternative approaches, we clarify the relationship between ongoing brain activity and performance fluctuations during sustained attention. We introduce a novel task (the gradual onset continuous performance task), along with innovative analysis procedures that probe the relationships between reaction time (RT) variability, attention lapses, and intrinsic brain activity. Our results highlight 2 attentional states-a stable, less error-prone state ("in the zone"), characterized by higher default mode network (DMN) activity but during which subjects are at risk of erring if DMN activity rises beyond intermediate levels, and a more effortful mode of processing ("out of the zone"), that is less optimal for sustained performance and relies on activity in dorsal attention network (DAN) regions. These findings motivate a new view of DMN and DAN functioning capable of integrating seemingly disparate reports of their role in goal-directed behavior. Further, they hold potential to reconcile conflicting theories of sustained attention, and represent an important step forward in linking intrinsic brain activity to behavioral phenomena.
Cluster Analysis for Large, High-Dimensional Datasets: Methodology and Applications
Cluster analysis represents one of the most versatile methods in statistical science. It is employed in empirical sciences for the summarization of datasets into groups of similar objects, with the purpose of facilitating the interpretation and further analysis of the data. Cluster analysis is of particular importance in the exploratory investigation of data of high complexity, such as that derived from molecular biology or image databases. Consequently, recent work in the field of cluster analysis, including the work presented in this thesis, has focused on designing algorithms that can provide meaningful solutions for data with high cardinality and/or dimensionality, under the natural restriction of limited resources. In the first part of the thesis, a novel algorithm for the clustering of large, highdimensional datasets is presented. The developed method is based on the principles of projection pursuit and grid partitioning, and focuses on reducing computational requirements for large datasets without loss of performance. To achieve that, the algorithm relies on procedures such as sampling of objects, feature selection, and quick density estimation using histograms. The algorithm searches for low-density points in potentially favorable one-dimensional projections, and partitions the data by a hyperplane passing through the best split point found. Tests on synthetic and reference data indicated that the proposed method can quickly and efficiently recover clusters that are distinguishable from the remaining objects on at least one direction; linearly non-separable clusters were usually subdivided. In addition, the clustering solution was proved to be robust in the presence of noise in moderate levels, and when the clusters are partially overlapping. In the second part of the thesis, a novel method for generating synthetic datasets with variable structure and clustering difficulty is presented. The developed algorithm can construct clusters with different sizes, shapes, and orientations, consisting of objects sampled from different probability distributions. In addition, some of the clusters can have multimodal distributions, curvilinear shapes, or they can be defined only in restricted subsets of dimensions. The clusters are distributed within the data space using a greedy geometrical procedure, with the overall degree of cluster overlap adjusted by scaling the clusters. Evaluation tests indicated that the proposed approach is highly effective in prescribing the cluster overlap. Furthermore, it can be extended to allow for the production of datasets containing non-overlapping clusters with defined degrees of separation. In the third part of the thesis, a novel system for the semi-supervised annotation of images is described and evaluated. The system is based on a visual vocabulary of prototype visual features, which is constructed through the clustering of visual features extracted from training images with accurate textual annotations. Consequently, each training image is associated with the visual words representing its detected features. In addition, each such image is associated with the concepts extracted from the linked textual data. These two sets of associations are combined into a direct linkage scheme between textual concepts and visual words, thus constructing an automatic image classifier that can annotate new images with text-based concepts using only their visual features. 
As an initial application, the developed method was successfully employed in a person classification task.
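The low-density splitting idea from the first part of the thesis can be sketched briefly: try random one-dimensional projections, histogram each one, and cut at the emptiest interior bin. The snippet below is a loose re-implementation under our own assumptions, not the thesis code.

```python
import numpy as np

def best_density_split(X, n_projections=50, bins=30, seed=0):
    """Return a projection direction and cut point at the lowest-density
    interior histogram bin that still has data on both sides."""
    rng = np.random.default_rng(seed)
    best_w, best_cut, best_density = None, None, np.inf
    for _ in range(n_projections):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        proj = X @ w
        hist, edges = np.histogram(proj, bins=bins)
        k = 1 + np.argmin(hist[1:-1])            # emptiest interior bin
        if hist[:k].sum() > 0 and hist[k + 1:].sum() > 0 and hist[k] < best_density:
            best_w, best_cut, best_density = w, 0.5 * (edges[k] + edges[k + 1]), hist[k]
    return best_w, best_cut

# Two well-separated Gaussian blobs in 5 dimensions (synthetic data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(6, 1, (200, 5))])
w, cut = best_density_split(X)
labels = (X @ w > cut).astype(int)
print("cluster sizes:", np.bincount(labels))
```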
SMART LOW VOLTAGE AC SOLID STATE CIRCUIT BREAKERS FOR SMART GRIDS
The solid state circuit breaker (SSCB) is a device used in the power system to provide protection when a short circuit or fault current occurs. The objective of this paper is to study and implement a smart prototype of an SSCB for smart grids. The presented SSCB is controlled through current/time characteristics like those used in conventional mechanical circuit breakers, and in addition it limits high fault current levels (fault current limiting), which is especially important with the proliferation of distributed generation and the associated increase in fault current levels. In this paper, the principle of operation of mechanical circuit breakers (MCBs) and their classifications are introduced, and the switches used in the design of SSCBs are presented. Simulation of the SSCB is carried out to study its feasibility and performance under various operating conditions. Then, a hardware prototype of the SSCB using IGBT devices is constructed and tested to validate the proposed approach.
The Unity and Diversity of Executive Functions and Their Contributions to Complex “Frontal Lobe” Tasks: A Latent Variable Analysis
This individual differences study examined the separability of three often postulated executive functions-mental set shifting ("Shifting"), information updating and monitoring ("Updating"), and inhibition of prepotent responses ("Inhibition")-and their roles in complex "frontal lobe" or "executive" tasks. One hundred thirty-seven college students performed a set of relatively simple experimental tasks that are considered to predominantly tap each target executive function as well as a set of frequently used executive tasks: the Wisconsin Card Sorting Test (WCST), Tower of Hanoi (TOH), random number generation (RNG), operation span, and dual tasking. Confirmatory factor analysis indicated that the three target executive functions are moderately correlated with one another, but are clearly separable. Moreover, structural equation modeling suggested that the three functions contribute differentially to performance on complex executive tasks. Specifically, WCST performance was related most strongly to Shifting, TOH to Inhibition, RNG to Inhibition and Updating, and operation span to Updating. Dual task performance was not related to any of the three target functions. These results suggest that it is important to recognize both the unity and diversity of executive functions and that latent variable analysis is a useful approach to studying the organization and roles of executive functions.
Estrus Detection in Dairy Cows from Acceleration Data using Self-learning Classification Models
Automatic estrus detection techniques in dairy cows have been presented based on different traits. Pedometers and accelerometers are the most common sensor equipment. Most of the detection methods rely on supervised classification, in which the training set becomes a crucial reference. A training set obtained by visual observation is subjective and time consuming. Another limitation of this approach is that it usually does not consider the factors affecting successful alerts, such as the discriminative features, the activity type of cows, and the location and direction of the sensor node placed on the neck collar of a cow. This paper presents a novel estrus detection method that uses the k-means clustering algorithm to create the training set online for each cow. The training set is then used to build an activity classification model with an SVM. The activity index computed from the classification results in each sampling period measures the cow's activity variation for assessing the onset of estrus. The experimental results indicate that the peak of the activity index curve during estrus is at least twice as high as during non-estrus, and that the method enhances sensitivity and significantly reduces the error rate.
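A minimal sketch of the pipeline described, under assumed features and settings: k-means builds the per-cow training labels without human annotation, an SVM then classifies activity windows, and an activity index is derived from the classifier output. The feature layout, cluster count, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the described pipeline: label acceleration-feature windows with
# k-means (self-learned training set), then fit an SVM activity classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for windowed acceleration features (e.g., mean, variance, energy).
X = np.vstack([rng.normal(0.0, 0.3, (300, 3)),      # resting-like windows
               rng.normal(2.0, 0.5, (300, 3))])     # active-like windows

# 1) Self-learning step: k-means builds the training labels online, per cow.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
pseudo_labels = km.labels_
active_cluster = int(np.argmax(km.cluster_centers_.mean(axis=1)))  # "active" class

# 2) Supervised step: an SVM learns the activity classes from the pseudo-labels.
clf = SVC(kernel="rbf").fit(X, pseudo_labels)

# 3) Activity index per sampling period = share of windows classified as active.
new_windows = rng.normal(1.8, 0.5, (50, 3))
activity_index = float(np.mean(clf.predict(new_windows) == active_cluster))
```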
The case for the reduced instruction set computer
One of the primary goals of computer architects is to design computers that are more cost-effective than their predecessors. Cost-effectiveness includes the cost of hardware to manufacture the machine, the cost of programming, and costs incurred related to the architecture in debugging both the initial hardware and subsequent programs. If we review the history of computer families, we find that the most common architectural change is the trend toward ever more complex machines. Presumably this additional complexity has a positive tradeoff with regard to the cost-effectiveness of newer models. In this paper we propose that this trend is not always cost-effective, and in fact, may even do more harm than good. We shall examine the case for a Reduced Instruction Set Computer (RISC) being as cost-effective as a Complex Instruction Set Computer (CISC). This paper will argue that the next generation of VLSI computers may be more effectively implemented as RISCs than CISCs.
Economic Costs of Diabetes in the U.S. in 2012
OBJECTIVE This study updates previous estimates of the economic burden of diagnosed diabetes and quantifies the increased health resource use and lost productivity associated with diabetes in 2012. RESEARCH DESIGN AND METHODS The study uses a prevalence-based approach that combines the demographics of the U.S. population in 2012 with diabetes prevalence, epidemiological data, health care cost, and economic data into a Cost of Diabetes Model. Health resource use and associated medical costs are analyzed by age, sex, race/ethnicity, insurance coverage, medical condition, and health service category. Data sources include national surveys, Medicare standard analytical files, and one of the largest claims databases for the commercially insured population in the U.S. RESULTS The total estimated cost of diagnosed diabetes in 2012 is $245 billion, including $176 billion in direct medical costs and $69 billion in reduced productivity. The largest components of medical expenditures are hospital inpatient care (43% of the total medical cost), prescription medications to treat the complications of diabetes (18%), antidiabetic agents and diabetes supplies (12%), physician office visits (9%), and nursing/residential facility stays (8%). People with diagnosed diabetes incur average medical expenditures of about $13,700 per year, of which about $7,900 is attributed to diabetes. People with diagnosed diabetes, on average, have medical expenditures approximately 2.3 times higher than what expenditures would be in the absence of diabetes. For the cost categories analyzed, care for people with diagnosed diabetes accounts for more than 1 in 5 health care dollars in the U.S., and more than half of that expenditure is directly attributable to diabetes. Indirect costs include increased absenteeism ($5 billion) and reduced productivity while at work ($20.8 billion) for the employed population, reduced productivity for those not in the labor force ($2.7 billion), inability to work as a result of disease-related disability ($21.6 billion), and lost productive capacity due to early mortality ($18.5 billion). CONCLUSIONS The estimated total economic cost of diagnosed diabetes in 2012 is $245 billion, a 41% increase from our previous estimate of $174 billion (in 2007 dollars). This estimate highlights the substantial burden that diabetes imposes on society. Additional components of societal burden omitted from our study include intangibles from pain and suffering, resources from care provided by nonpaid caregivers, and the burden associated with undiagnosed diabetes.
Integration challenges of intelligent transportation systems with connected vehicle, cloud computing, and internet of things technologies
Transportation is a necessary infrastructure for modern society. The performance of transportation systems is of crucial importance for individual mobility, commerce, and the economic growth of all nations. In recent years, society has been facing more traffic jams, higher fuel prices, and an increase in CO2 emissions. It is imperative to improve the safety and efficiency of transportation. Developing a sustainable intelligent transportation system requires seamless integration and interoperability with emerging technologies such as connected vehicles, cloud computing, and the Internet of Things. In this article we present and discuss some of the integration challenges that must be addressed to enable an intelligent transportation system to tackle issues facing the transportation sector, such as high fuel prices, high levels of CO2 emissions, and increasing traffic congestion, and to improve road safety.
Integrating Trust and Computer Self-Efficacy with TAM: An Empirical Assessment of Customers' Acceptance of Banking Information Systems (BIS) in Jamaica
Financial institutions all over the world are providing banking services via information systems such as automated teller machines (ATMs), Internet banking, and telephone banking, in an effort to remain competitive and to enhance customer service. However, the acceptance of such banking information systems (BIS) in developing countries remains an open question. The classical Technology Acceptance Model (TAM) has been well validated in hundreds of studies over the past two decades. This study contributed to the extensive body of research on technology acceptance by attempting to validate the integration of trust and computer self-efficacy (CSE) constructs into the classical TAM. Moreover, the key uniqueness of this work lies in its context: BIS in a developing country, namely Jamaica. Based on structural equation modeling using data from 374 customers of three banks in Jamaica, the results indicated that the classic TAM provided a better fit than the extended TAM with trust and CSE. However, the results also indicated that trust is indeed a significant construct affecting both perceived usefulness and perceived ease-of-use. Additionally, tests for gender differences indicated that across all study participants, only trust differed significantly between male and female bank customers. Conclusions and recommendations for future research are also provided.
Cultivate Self-Efficacy for Personal and Organizational Effectiveness
Bandura, A. (2000). Cultivate self-efficacy for personal and organizational effectiveness. In E.
Spatial contrast sensitivity of birds
Contrast sensitivity (CS) is the ability of the observer to discriminate between adjacent stimuli on the basis of their differences in relative luminosity (contrast) rather than their absolute luminances. In previous studies, using a narrow range of species, birds have been reported to have low contrast sensitivity relative to mammals and fishes. This was an unexpected finding because birds had been traditionally reported to have excellent visual acuity and color vision. This study reports CS in six species of birds that represent a range of visual adaptations to varying environments. The species studied were American kestrels (Falco sparverius), barn owls (Tyto alba), Japanese quail (Coturnix coturnix japonica), white Carneaux pigeons (Columba livia), starlings (Sturnus vulgaris), and red-bellied woodpeckers (Melanerpes carolinus). Contrast sensitivity functions (CSFs) were obtained from these birds using the pattern electroretinogram and compared with CSFs from the literature when possible. All of these species exhibited low CS relative to humans and most mammals, which suggests that low CS is a general characteristic of birds. Their low maximum CS may represent a trade-off of contrast detection for some other ecologically vital capacity such as UV detection or other aspects of their unique color vision.
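For reference, contrast in such grating studies is usually quantified with the Michelson definition, and contrast sensitivity as the reciprocal of the contrast threshold. The abstract does not state the exact metric used, so the following is only the conventional formulation.

```latex
% Standard definitions (assumed; the abstract does not specify the metric used):
% Michelson contrast of a grating, and contrast sensitivity as its reciprocal.
C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}},
\qquad
\mathrm{CS} = \frac{1}{C_{\text{threshold}}}
```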
Structural Basis for Negative Allosteric Modulation of GluN2A-Containing NMDA Receptors
NMDA receptors mediate excitatory synaptic transmission and regulate synaptic plasticity in the central nervous system, but their dysregulation is also implicated in numerous brain disorders. Here, we describe GluN2A-selective negative allosteric modulators (NAMs) that inhibit NMDA receptors by stabilizing the apo state of the GluN1 ligand-binding domain (LBD), which is incapable of triggering channel gating. We describe structural determinants of NAM binding in crystal structures of the GluN1/2A LBD heterodimer, and analyses of NAM-bound LBD structures corresponding to active and inhibited receptor states reveal a molecular switch in the modulatory binding site that mediates the allosteric inhibition. NAM binding causes displacement of a valine in GluN2A, and the resulting steric effects can be mitigated by the transition from the glycine-bound to the apo state of the GluN1 LBD. This work provides mechanistic insight into allosteric NMDA receptor inhibition, thereby facilitating the development of novel classes of NMDA receptor modulators as therapeutic agents.
Effective communication skills in nursing practice.
This article highlights the importance of effective communication skills for nurses. It focuses on core communication skills, their definitions and the positive outcomes that result when applied to practice. Effective communication is central to the provision of compassionate, high-quality nursing care. The article aims to refresh and develop existing knowledge and understanding of effective communication skills. Nurses reading this article will be encouraged to develop a more conscious style of communicating with patients and carers, with the aim of improving health outcomes and patient satisfaction.
Psychosexual outcome of gender-dysphoric children.
OBJECTIVE To establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome. METHOD We studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation. RESULTS At follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation. CONCLUSIONS Most children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.
Spectral–Spatial Classification of Hyperspectral Data Using Loopy Belief Propagation and Active Learning
In this paper, we propose a new framework for spectral-spatial classification of hyperspectral image data. The proposed approach serves as an engine in the context of which active learning algorithms can exploit both spatial and spectral information simultaneously. An important contribution of our work is that we exploit marginal probability distributions, which make use of all the information in the hyperspectral data. We learn such distributions from both the spectral and spatial information contained in the original hyperspectral data using loopy belief propagation. The adopted probabilistic model is a discriminative random field in which the association potential is a multinomial logistic regression classifier and the interaction potential is a Markov random field multilevel logistic prior. Our experimental results with hyperspectral data sets collected using the National Aeronautics and Space Administration's Airborne Visible Infrared Imaging Spectrometer and the Reflective Optics System Imaging Spectrometer system indicate that the proposed framework provides state-of-the-art performance when compared to other similar developments.
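As a greatly simplified stand-in for the spectral-spatial idea (and explicitly not the paper's loopy-belief-propagation inference), the sketch below classifies each pixel's spectrum with multinomial logistic regression and then applies a local majority vote as a crude spatial prior; all names, parameters, and the toy data are hypothetical.

```python
# Simplified stand-in for the spectral-spatial idea (NOT the paper's LBP inference):
# classify each pixel's spectrum, then smooth the label image by local majority vote.
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_and_smooth(cube, train_mask, train_labels, window=3):
    """cube: (H, W, B) hyperspectral image; train_mask: boolean (H, W)."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    clf = LogisticRegression(max_iter=1000).fit(X[train_mask.ravel()], train_labels)
    labels = clf.predict(X).reshape(H, W)

    r = window // 2
    smoothed = labels.copy()
    for i in range(H):
        for j in range(W):
            patch = labels[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            vals, counts = np.unique(patch, return_counts=True)
            smoothed[i, j] = vals[np.argmax(counts)]   # majority vote in the window
    return smoothed

# Toy usage on a random cube with two spectral classes.
H, W, B = 20, 20, 10
cube = np.random.rand(H, W, B)
cube[:, 10:, 0] += 2.0                        # make the right half spectrally distinct
train_mask = np.zeros((H, W), dtype=bool)
train_mask[::4, ::4] = True                   # sparse training pixels
train_labels = (np.argwhere(train_mask)[:, 1] >= 10).astype(int)
segmentation = classify_and_smooth(cube, train_mask, train_labels)
```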
Minimal suture blepharoplasty: Closure of incisions with autologous fibrin glue
Blepharoplasty incisions can be closed safely with autologous fibrin glue. The fibrinogen, prepared either from a whole-blood or plasmapheresis source, is mixed with commercially available thrombin to form a seal that is both hemostatic and adhesive. The complication rate is low and primarily due to technical factors in the initial cases. When compared with standard suture techniques, the incidence of minor problems such as milia formation was much lower. In select cases, the technique of using fibrin glue and a minimal number of sutures may be useful as an alternative method of wound closure in blepharoplasty patients.
The Expectation Maximization Algorithm: A Short Tutorial
Revision history 2009-01-09 Corrected grammar in the paragraph which precedes Equation (17). Changed datestamp format in the revision history. 2008-07-05 Corrected caption for Figure (2). Added conditioning on θn for l in convergence discussion in Section (3.2). Changed email contact info to reduce spam. 2006-10-14 Added explanation and disambiguating parentheses in the development leading to Equation (14). Minor corrections. 2006-06-28 Added Figure (1). Corrected typo above Equation (5). Minor corrections. Added hyperlinks. 2005-08-26 Minor corrections. 2004-07-18 Initial revision.
Alveolar bone thickness around maxillary central incisors of different inclination assessed with cone-beam computed tomography
OBJECTIVE To assess the labial and lingual alveolar bone thickness in adults with maxillary central incisors of different inclination by cone-beam computed tomography (CBCT). METHODS Ninety maxillary central incisors from 45 patients were divided into three groups based on the maxillary central incisors to palatal plane angle; lingual-inclined, normal, and labial-inclined. Reformatted CBCT images were used to measure the labial and lingual alveolar bone thickness (ABT) at intervals corresponding to every 1/10 of the root length. The sum of labial ABT and lingual ABT at the level of the root apex was used to calculate the total ABT (TABT). The number of teeth exhibiting alveolar fenestration and dehiscence in each group was also tallied. One-way analysis of variance and Tukey's honestly significant difference test were applied for statistical analysis. RESULTS The labial ABT and TABT values at the root apex in the lingual-inclined group were significantly lower than in the other groups (p < 0.05). Lingual and labial ABT values were very low at the cervical level in the lingual-inclined and normal groups. There was a higher prevalence of alveolar fenestration in the lingual-inclined group. CONCLUSIONS Lingual-inclined maxillary central incisors have less bone support at the level of the root apex and a greater frequency of alveolar bone defects than normal maxillary central incisors. The bone plate at the marginal level is also very thin.
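The statistical procedure named here (one-way ANOVA followed by Tukey's HSD) can be reproduced in a few lines; the bone-thickness values below are made up for illustration and are not the study's data.

```python
# The statistical procedure named in the abstract (one-way ANOVA, then Tukey's HSD),
# run here on made-up alveolar bone thickness values for the three inclination groups.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
lingual = rng.normal(1.8, 0.4, 30)    # hypothetical ABT at the root apex (mm)
normal  = rng.normal(2.6, 0.4, 30)
labial  = rng.normal(2.5, 0.4, 30)

F, p = f_oneway(lingual, normal, labial)              # overall group difference

values = np.concatenate([lingual, normal, labial])
groups = ["lingual"] * 30 + ["normal"] * 30 + ["labial"] * 30
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```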
Dynamic Programming Treatment of the Travelling Salesman Problem
The well-known travelling salesman problem is the following: "A salesman is required to visit once and only once each of n different cities starting from a base city, and returning to this city. What path minimizes the total distance travelled by the salesman?" The problem has been treated by a number of different people using a variety of techniques; cf. Dantzig, Fulkerson, Johnson [1], where a combination of ingenuity and linear programming is used, and Miller, Tucker and Zemlin [2], whose experiments using an all-integer program of Gomory did not produce results in cases with ten cities although some success was achieved in cases of simply four cities. The purpose of this note is to show that this problem can easily be formulated in dynamic programming terms [3], and resolved computationally for up to 17 cities. For larger numbers, the method presented below, combined with various simple manipulations, may be used to obtain quick approximate solutions. Results of this nature were independently obtained by M. Held and R. M. Karp, who are in the process of publishing some extensions and computational results.
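The dynamic-programming formulation referred to in this note, later known as the Held-Karp algorithm, can be written compactly. The sketch below is a minimal illustration whose exponential state space explains the roughly 17-city practical limit mentioned.

```python
# Minimal Held-Karp style dynamic program for the TSP described in the note.
# State: (set of visited cities, last city); exponential in n, hence the ~17-city limit.
from itertools import combinations

def held_karp(dist):
    n = len(dist)
    # cost[(S, j)] = shortest path from city 0 that visits set S and ends at city j
    cost = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                cost[(S, j)] = min(cost[(S - {j}, k)] + dist[k][j]
                                   for k in S if k != j)
    full = frozenset(range(1, n))
    return min(cost[(full, j)] + dist[j][0] for j in range(1, n))

# Example: 4 cities with a symmetric distance matrix.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(held_karp(d))   # optimal tour length starting and ending at city 0 (23 here)
```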
Leveraging Big Data and Business Analytics
Fueled by the growing popularity of social media, e-commerce, and an increased interest in business collaboration, there’s been an explosion of data. However, enterprises don’t always know how to use this “big data” to make (or automate) complex decisions, resulting in a business advantage. Powerful analytics techniques can help enterprises deal with complex decisions by providing new insights, creating a virtuous cycle by spurring an interest in and demand for better techniques, tools, and approaches for leveraging big data and business analytics.
Polygon-Invariant Generalized Hough Transform for High-Speed Vision-Based Positioning
The generalized Hough transform (GHT) is widely used for detecting or locating objects under similarity transformation. However, a weakness of the traditional GHT is its large storage requirement and time-consuming computational complexity due to the 4-D parameter space voting strategy. In this paper, a polygon-invariant GHT (PI-GHT) algorithm, a novel scale- and rotation-invariant template matching method, is presented for high-speed vision-based object positioning. To demonstrate the performance of PI-GHT, several experiments were carried out to compare this novel algorithm with five other popular matching methods. Experimental results show that the computational effort required by PI-GHT is smaller than that of the common methods due to the similarity transformations applied to the scale- and rotation-invariant triangle features. Moreover, the proposed PI-GHT maintains inherent robustness against partial occlusion, noise, and nonlinear illumination changes, because the local triangle features are based on the gradient directions of edge points. Consequently, PI-GHT has been implemented in packaging equipment for radio frequency identification devices, achieving an average matching time of 4.13 ms with a 97.06% matching rate, and in solder paste printing, with an average time of nearly 5 ms and a 99.87% matching rate. PI-GHT has also been applied to LED manufacturing equipment to locate multiple objects, achieving at least a five-fold improvement in speed with a 96% matching rate.
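For orientation, the classic translation-only GHT that PI-GHT builds on can be sketched as follows. This baseline does not implement the paper's scale- and rotation-invariant triangle features, and the thresholds and bin counts are illustrative assumptions.

```python
# Classic (translation-only) generalized Hough transform for context; PI-GHT as
# described extends this with scale/rotation-invariant triangle features, which
# this sketch does not implement.
import numpy as np

def gradient_dirs(img):
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx), np.hypot(gx, gy)         # direction, magnitude

def build_r_table(template, n_phi=36, edge_thresh=1.0):
    phi, mag = gradient_dirs(template)
    ref = np.array(template.shape) / 2.0                 # reference point (centre)
    table = [[] for _ in range(n_phi)]
    for y, x in zip(*np.nonzero(mag > edge_thresh)):
        b = int((phi[y, x] + np.pi) / (2 * np.pi) * n_phi) % n_phi
        table[b].append(ref - (y, x))                    # displacement to reference
    return table

def vote(image, table, n_phi=36, edge_thresh=1.0):
    phi, mag = gradient_dirs(image)
    acc = np.zeros(image.shape)
    for y, x in zip(*np.nonzero(mag > edge_thresh)):
        b = int((phi[y, x] + np.pi) / (2 * np.pi) * n_phi) % n_phi
        for dy, dx in table[b]:
            ry, rx = int(y + dy), int(x + dx)
            if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                acc[ry, rx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)   # most-voted object position
```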
Uncertainty, Ambiguity and Privacy
In this paper we discuss the influence of ambiguity, uncertainty, and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.
Endoscopic diagnosis of gastric intestinal metaplasia: a prospective multicenter study.
BACKGROUND Intestinal metaplasia (IM) of the gastric mucosa has long attracted attention as a premalignant lesion involved in gastric carcinogenesis. However, endoscopic diagnosis of IM has remained unclear for a long time. In recent years, the methylene blue staining technique and narrow-band imaging (NBI) magnifying endoscopy have facilitated clinical diagnosis of IM, although these methods have some problems due to their complexity. Simple methods for diagnosis of IM using conventional endoscopy and the indigo carmine contrast (IC) method are necessary. PATIENTS AND METHODS This study was a multicenter, prospective, randomized, comparative study involving 10 facilities. The appearance of IM was examined using conventional and IC methods with an electronic endoscope. RESULTS Subjects included 163 patients, of whom 87 and 76 underwent conventional and IC methods, respectively. Sensitivity, specificity, and receiver operating characteristic/area under the curve (ROC/AUC) values of the conventional and IC methods for the detection of IM in the gastric antrum showed that the diagnostic performance of the conventional method was higher, although not significantly so, than that of the IC method. Sensitivity, specificity, and ROC/AUC values of the conventional and IC methods for the detection of IM in the gastric body showed that the IC method yielded better (but not significantly better) results than the conventional method. CONCLUSION The diagnostic performance of the conventional method did not significantly differ from that of the IC method. A villous appearance, whitish mucosa, and rough mucosal surface, as observed by both methods, and the areae gastricae pattern, as observed by the IC method, were useful indicators for endoscopic diagnosis of IM.
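The reported metrics are computed in the usual way from the endoscopic calls, graded confidence scores, and the histological ground truth; the toy numbers below are illustrative, not the study's data.

```python
# How the reported metrics are typically computed (illustrative data, not the study's):
# sensitivity and specificity from binary calls, ROC/AUC from a graded confidence score.
import numpy as np
from sklearn.metrics import roc_auc_score

truth = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])        # histology: IM present?
call  = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])        # endoscopic diagnosis
score = np.array([.9, .4, .8, .2, .1, .6, .7, .3, .95, .15])  # endoscopist confidence

tp = np.sum((call == 1) & (truth == 1))
tn = np.sum((call == 0) & (truth == 0))
fp = np.sum((call == 1) & (truth == 0))
fn = np.sum((call == 0) & (truth == 1))

sensitivity = tp / (tp + fn)     # 4/5 = 0.8 on this toy data
specificity = tn / (tn + fp)     # 4/5 = 0.8 on this toy data
auc = roc_auc_score(truth, score)
```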
A design methodology of chip-to-chip wireless power transmission system
A design methodology to transmit power using a chip-to-chip wireless interface is proposed. The proposed power transmission system is based on magnetic coupling, and power transmission of 5 mW/mm2 was verified. The trade-off between transmission efficiency and transmitted power is also discussed.
Malware classification with recurrent networks
Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.
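A minimal sketch of the best-performing hybrid described (a fixed echo state network reservoir run over instruction-event sequences, max-pooling over time, and logistic regression on the pooled features). The reservoir size, spectral radius, and the synthetic event sequences are assumptions made only for illustration.

```python
# Minimal sketch of the described hybrid: ESN reservoir -> max-pooling over time ->
# logistic regression. Sizes, spectral radius, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_EVENTS, RESERVOIR = 50, 100                      # vocabulary of instruction events

W_in = rng.normal(0, 0.5, (RESERVOIR, N_EVENTS))   # fixed, untrained input weights
W = rng.normal(0, 1.0, (RESERVOIR, RESERVOIR))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

def esn_features(event_seq):
    """Run the reservoir over a sequence of event ids; max-pool states over time."""
    x = np.zeros(RESERVOIR)
    states = []
    for e in event_seq:
        u = np.zeros(N_EVENTS)
        u[e] = 1.0                                  # one-hot instruction event
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.max(states, axis=0)                   # max-pooling over time

# Synthetic corpus: "malicious" sequences favour a different event distribution.
def sample(malicious, length=200):
    p = np.full(N_EVENTS, 1 / N_EVENTS)
    if malicious:
        p[:5] *= 4
        p /= p.sum()
    return rng.choice(N_EVENTS, size=length, p=p)

X = np.array([esn_features(sample(m)) for m in ([0] * 40 + [1] * 40)])
y = np.array([0] * 40 + [1] * 40)
clf = LogisticRegression(max_iter=1000).fit(X, y)   # final classification stage
```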
A quadruped robot with parallel mechanism legs
Summary form only given. The design and control of quadruped robots has become a fascinating research field because such robots have better mobility on unstructured terrain. Many kinds of quadruped robots have been developed, such as JROB-1 [1], BISAM [2], BigDog [3], LittleDog [4], HyQ [5] and Cheetah cub [6], and they have shown significant walking performance. However, most of them use serial mechanism legs and have an animal-like structure: the thigh and the crus. To swing the crus in the swing phase and support the body's weight in the stance phase, a linear actuator is attached to the thigh [2, 3, 5, 6] or, instead, a rotational actuator is installed on the knee joint [1, 4]. To make the robot more useful in the wild, e.g., for detection or manipulation tasks, payload capability is very important; a legged robot able to carry heavy loads such as sensors or tools is needed. Thus the knee actuator should be lightweight, powerful, and easy to maintain. However, these requirements can be costly and hard to satisfy simultaneously.
Piriformis syndrome, diagnosis and treatment.
Piriformis syndrome (PS) is an uncommon cause of sciatica that involves buttock pain referred to the leg. Diagnosis is often difficult, and it is one of exclusion owing to the scarcity of validated and standardized diagnostic tests. Treatment for PS has historically focused on stretching and physical therapy modalities, with refractory patients also receiving anesthetic and corticosteroid injections into the piriformis muscle origin, belly, muscle sheath, or sciatic nerve sheath. Recently, the use of botulinum toxin (BTX) to treat PS has gained popularity. Its use is aimed at relieving sciatic nerve compression and inherent muscle pain from a tight piriformis. BTX is being used increasingly for myofascial pain syndromes, and some studies have demonstrated superior efficacy to corticosteroid injection. The success of BTX in treating PS supports the prevailing pathoanatomic etiology of the condition and suggests a promising future for BTX in the treatment of other myofascial pain syndromes.
More Statistical Properties of Order Books and Price Impact
We present some new statistical properties of order books. We analyse data from the Nasdaq and investigate (a) the statistics of incoming limit order prices, (b) the shape of the average order book, and (c) the typical lifetime of a limit order as a function of the distance from the best price. We also determine the ‘price impact’ function using French and British stocks, and find a logarithmic, rather than a power-law, dependence of the price response on the volume. The weak time dependence of the response function shows that the impact is, surprisingly, quasi-permanent, and suggests that trading itself is interpreted by the market as new information. Many statistical properties of financial markets have already been explored, and have revealed striking similarities between very different markets (different traded assets, different geographical zones, different epochs) [1, 2, 3]. More recently, the statistics of the ‘order book’, which is the ultimate ‘microscopic’ level of description of financial markets, has attracted considerable attention, both from an empirical [4, 5, 6, 8, 7] and theoretical [9, 10, 5, 11, 12, 13, 8, 14, 15] point of view. The order book is the list of all buy and sell limit orders, with their corresponding price and volume, at a given instant of time. We will call a(t) the ask price (best sell price) at time t and b(t) the bid price (best buy price) at time t. The midpoint m(t) is the average between the bid and the ask: m(t) = [a(t) + b(t)]/2.
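The kind of fit described, an average price response versus trade volume compared against a logarithmic form R(V) = a ln V + b, can be illustrated on synthetic data as follows; the numbers are not from the Nasdaq or the French and British stocks analysed in the paper.

```python
# Illustrative fit of the kind described: average price response R(V) versus trade
# volume V, compared against R(V) = a*ln(V) + b (synthetic data, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
volumes = rng.lognormal(mean=4.0, sigma=1.0, size=5000)             # trade volumes
response = 0.8 * np.log(volumes) + 0.2 + rng.normal(0, 0.3, 5000)   # midpoint shift

# Bin by volume and average the response within each bin.
bins = np.logspace(np.log10(volumes.min()), np.log10(volumes.max()), 20)
idx = np.digitize(volumes, bins)
v_mean = np.array([volumes[idx == i].mean()
                   for i in range(1, len(bins)) if np.any(idx == i)])
r_mean = np.array([response[idx == i].mean()
                   for i in range(1, len(bins)) if np.any(idx == i)])

a, b = np.polyfit(np.log(v_mean), r_mean, 1)   # least-squares fit of R = a*ln(V) + b
print(f"fitted R(V) = {a:.2f} * ln(V) + {b:.2f}")
```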
Perception of front-of-pack labels according to social characteristics, nutritional knowledge and food purchasing habits.
OBJECTIVE To identify patterns of perception of front-of-pack (FOP) nutrition labels and to determine social factors, nutritional knowledge and attention to packaging features related to such patterns. DESIGN Cross-sectional. Perception was measured using indicators of understanding and acceptability of three simple FOP labels (the 'Green Tick', the logo of the French Nutrition and Health Programme (PNNS logo) and 'simple traffic lights' (STL)) and two detailed formats ('multiple traffic lights' (MTL) and the 'colour range' logo (CR)). Associations of perception patterns with individual characteristics were examined using χ2 tests. SETTING Data from the French NutriNet-Santé cohort study. SUBJECTS A total of 38,763 adults. RESULTS Four perception patterns emerged. Poorly educated individuals were most often found in groups favouring simple formats. The 'favourable to CR' group included a high proportion of men and older persons. Poor nutritional knowledge was more frequent in the 'favourable to STL' group, while individuals with substantial knowledge were proportionally more numerous in the 'favourable to MTL' group. The 'favourable to STL' group more frequently self-reported noting price and marketing characteristics during purchasing, while the 'favourable to MTL' and 'favourable to CR' groups declared more interest in nutritional information. The 'favourable to Green Tick and PNNS logo' group self-reported paying closer attention to claims and quality guarantee labels. CONCLUSIONS The 'favourable to MTL' cluster was most frequently represented in our survey. However, simple FOP formats may be most appropriate for increasing awareness of healthy eating among targeted groups with poor nutritional knowledge and little interest in the nutritional quality of packaged foods.
A Supervised Machine Learning Approach to Variable Branching in Branch-And-Bound
We present in this paper a new approach that uses supervised machine learning techniques to improve the performance of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most affects the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improve it. Our approach consists of imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.
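A sketch of the learning setup described: record, at each branching decision, the features of every candidate variable together with a label indicating whether strong branching selected it, then fit a fast approximation. The observations below are synthetic and no MIP solver is invoked; the feature set and classifier are placeholders, not the paper's.

```python
# Sketch of the learning setup described: collect (candidate-variable features, label
# saying whether strong branching chose that candidate), then fit a fast approximation.
# Observations here are synthetic; no MIP solver is invoked.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def fake_observation(n_candidates=10, n_features=8):
    """Features per candidate at one B&B node; a stand-in for the strong-branching pick."""
    F = rng.normal(size=(n_candidates, n_features))   # e.g., pseudocosts, fractionality
    chosen = int(np.argmax(F[:, 0] + 0.5 * F[:, 1]))  # stand-in for the SB decision
    y = np.zeros(n_candidates)
    y[chosen] = 1
    return F, y

X, y = [], []
for _ in range(500):                                  # 500 recorded branching decisions
    F, lab = fake_observation()
    X.append(F)
    y.append(lab)
X, y = np.vstack(X), np.concatenate(y)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At a new node, branch on the candidate the learned model scores highest.
F_new, _ = fake_observation()
branch_on = int(np.argmax(model.predict_proba(F_new)[:, 1]))
```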