Analytical Solution to One-dimensional Advection-diffusion Equation with Several Point Sources through Arbitrary Time-dependent Emission Rate Patterns
The advection-diffusion equation and its analytical solutions have found wide application in different areas. Compared with numerical solutions, analytical solutions offer several advantages, and many analytical solutions have accordingly been presented for the advection-diffusion equation. The difference between these solutions lies mainly in the type of boundary conditions, e.g. the time patterns of the sources. Almost all the existing analytical solutions to this equation involve simple boundary conditions. Most practical problems, however, involve complex boundary conditions for which it is very difficult, and sometimes impossible, to find the corresponding analytical solutions. In this research, an analytical solution of the advection-diffusion equation was first derived for a point source with a linear pulse time pattern under the constant-parameters condition (constant velocity and diffusion coefficient). Then, using the superposition principle, the derived solution was extended to an arbitrary time pattern involving several point sources. The given analytical solution was verified using four hypothetical test problems for a stream. Three of these test problems have analytical solutions given by previous researchers, while the last one involves a complicated case of several point sources that can only be solved numerically. The results show that the proposed analytical solution can provide an accurate estimation of the concentration; hence it is suitable for applications such as verifying transport codes. Moreover, it can be applied in applications involving an optimization process where estimation of the solution at a finite number of points (e.g. as an objective function) is required. The limitations of the proposed solution are that it is valid only under the constant-parameters condition and is not computationally efficient for problems involving either a high temporal or a high spatial resolution.
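To make the superposition construction concrete, the following minimal numerical sketch (Python; all parameter values hypothetical) superposes the standard constant-parameter Green's function of the 1D advection-diffusion equation over several point sources with arbitrary emission-rate functions, discretizing each emission history into small instantaneous releases. It illustrates the superposition idea only and is not the paper's closed-form solution.

```python
import numpy as np

def point_release(x, t, x0, t0, mass, u, D):
    """Standard constant-parameter Green's function of the 1D
    advection-diffusion equation for an instantaneous point release
    of `mass` (per unit cross-sectional area) at (x0, t0)."""
    dt = t - t0
    if dt <= 0:
        return 0.0
    return mass / np.sqrt(4 * np.pi * D * dt) * np.exp(-(x - x0 - u * dt) ** 2 / (4 * D * dt))

def concentration(x, t, sources, u, D, n_steps=200):
    """Superpose several point sources, each with an arbitrary
    emission-rate function rate(t), by discretizing the emission
    history into n_steps instantaneous releases."""
    total = 0.0
    for x0, rate in sources:
        dtau = t / n_steps
        for tau in np.arange(0.0, t, dtau):
            total += point_release(x, t, x0, tau, rate(tau) * dtau, u, D)
    return total

# two hypothetical sources: a constant emitter at x=0 and a linear
# pulse at x=10 that decays to zero at t=5
sources = [(0.0, lambda s: 1.0), (10.0, lambda s: max(0.0, 5.0 - s))]
print(concentration(x=20.0, t=8.0, sources=sources, u=1.0, D=0.5))
```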
CAD: an algorithm for citation-anchors detection in research papers
Citations are very important parameters and inform many important decisions, such as the ranking of researchers, institutions, and countries, and the measurement of relationships between research papers. All of these require accurate counting of citations and of their occurrences (in-text citation counts) within the citing papers. Citation anchors refer to the citations made within the full text of the citing paper, for example: ‘[1]’, ‘(Afzal et al, 2015)’, ‘[Afzal, 2015]’, etc. Identification of citation-anchors from plain text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems in automatically identifying in-text citation frequencies, such as commonality in content, wrong allotment, mathematical ambiguities, and string variations. This paper proposes an algorithm, CAD, for the identification of citation-anchors and their in-text citation frequencies based on different rules. For a comprehensive analysis, a dataset of research papers was prepared from two digital libraries: (1) the Journal of Universal Computer Science (J.UCS) and (2) CiteSeer. We conducted two experiments. In the first experiment, the proposed approach is compared with the state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references, while the CiteSeer dataset consists of 52 research papers with 1850 references; the total dataset size is thus 1252 citing documents and 17,850 references. The experiments showed that the CAD algorithm improved F-score by 44% and 37% on the J.UCS and CiteSeer datasets respectively over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014), an average improvement of 41% across the two datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools CERMINE and GROBID. According to our results, the proposed approach performs best with an F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).
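As a rough illustration of what citation-anchor detection involves (this is not the CAD rule set itself, and the regular expressions below are hypothetical simplifications), a few patterns can already match the anchor styles quoted above:

```python
import re

# Hypothetical patterns for the three anchor styles mentioned in the abstract;
# real citation-anchor detection needs a far richer rule set.
ANCHOR_PATTERNS = [
    r"\[\d+(?:\s*[-,]\s*\d+)*\]",                 # [1], [3,5], [2-4]
    r"\([A-Z][A-Za-z]+(?: et al\.?,?)? \d{4}\)",  # (Afzal et al, 2015)
    r"\[[A-Z][A-Za-z]+,? \d{4}\]",                # [Afzal, 2015]
]

def find_citation_anchors(text):
    anchors = []
    for pat in ANCHOR_PATTERNS:
        anchors.extend(m.group(0) for m in re.finditer(pat, text))
    return anchors

text = "As shown in [1] and (Afzal et al, 2015), results vary [2-4]."
print(find_citation_anchors(text))  # ['[1]', '[2-4]', '(Afzal et al, 2015)']
```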
Assessing mammography film-reader performance
withdrawn
Randomized controlled trial of mindfulness-based stress reduction (MBSR) for survivors of breast cancer.
OBJECTIVES Considerable morbidity persists among survivors of breast cancer (BC), including high levels of psychological stress, anxiety, depression, and fear of recurrence; physical symptoms including pain, fatigue, and sleep disturbances; and impaired quality of life. Effective interventions are needed during this difficult transitional period. METHODS We conducted a randomized controlled trial of 84 female BC survivors (Stages 0-III) recruited from the H. Lee Moffitt Cancer and Research Institute. All subjects were within 18 months of treatment completion with surgery and adjuvant radiation and/or chemotherapy. Subjects were randomly assigned to a 6-week Mindfulness-Based Stress Reduction (MBSR) program designed to self-regulate arousal to stressful circumstances or symptoms (n=41) or to usual care (n=43). Outcome measures compared at 6 weeks by random assignment included validated measures of psychological status (depression, anxiety, perceived stress, fear of recurrence, optimism, social support) and psychological and physical subscales of quality of life (SF-36). RESULTS Compared with usual care, subjects assigned to MBSR(BC) had significantly lower (two-sided p<0.05) adjusted mean levels of depression (6.3 vs 9.6), anxiety (28.3 vs 33.0), and fear of recurrence (9.3 vs 11.6) at 6 weeks, along with higher energy (53.5 vs 49.2), physical functioning (50.1 vs 47.0), and physical role functioning (49.1 vs 42.8). In stratified analyses, subjects more compliant with MBSR tended to experience greater improvements in measures of energy and physical functioning. CONCLUSIONS Among BC survivors within 18 months of treatment completion, a 6-week MBSR(BC) program resulted in significant improvements in psychological status and quality of life compared with usual care.
Reconstruction of Phase Space of Dynamical Systems Using Method of Time Delay
Selected elements of the dynamical system (DS) theory approach to nonlinear time series analysis are introduced. A key role in this approach is played by the method of time delay, which enables us to reconstruct the phase space trajectory of a DS without knowledge of its governing equations. Our variant is tested and compared with the well-known TISEAN package on the Lorenz and Hénon systems. Introduction: There are a number of methods of nonlinear time series analysis (e.g. nonlinear prediction or noise reduction) that work in the phase space (PS) of dynamical systems. We assume that a given time series of some variable is generated by a dynamical system. A specific state of the system can be represented by a point in the phase space, and the time evolution of the system creates a trajectory in the phase space. From this point of view, we consider our time series to be a projection of the trajectory of the DS onto one (or more, when we have more simultaneously measured variables) coordinate of the phase space. This view was enabled by the formulation of the embedding theorem [1], [2] at the beginning of the 1980s, which says that it is possible to reconstruct the phase space from the time series. One of the most frequently used methods of phase space reconstruction is the method of time delay. The main task when using this method is to determine the values of the time delay τ and the embedding dimension m. We tested the individual steps of this method on simulated data generated by the Lorenz and Hénon systems. We compared the results computed by our own programs with the outputs of the program package TISEAN created by R. Hegger, H. Kantz, and T. Schreiber [3]. Method of time delay: The most frequently used method of PS reconstruction is the method of time delay. If we have a time series of a scalar variable x(t_i), i = 1, ..., N, we construct a vector in phase space at time t_i as follows: X(t_i) = [x(t_i), x(t_i + τ), x(t_i + 2τ), ..., x(t_i + (m−1)τ)], where i goes from 1 to N − (m−1)τ, τ is the time delay, m is the dimension of the reconstructed space (embedding dimension), and M = N − (m−1)τ is the number of points (states) in the phase space. According to the embedding theorem, when this is done in a proper way, the dynamics reconstructed using this formula are equivalent to the dynamics on an attractor in the original phase space, in the sense that the characteristic invariants of the system are conserved. The time delay method and related aspects are described in the literature, e.g. [4]. We estimated the two parameters, time delay and embedding dimension, using the algorithms below. Choosing a time delay: To determine a suitable time delay we used average mutual information (AMI), a certain generalization of the autocorrelation function. The average mutual information between sets of measurements A and B is defined in [5]:
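A minimal Python sketch of the delay reconstruction just described (the AMI-based selection of τ is omitted; the values of m and τ below are arbitrary):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Reconstruct phase-space points
    X(t_i) = [x(t_i), x(t_i + tau), ..., x(t_i + (m-1)*tau)]
    for i = 1, ..., M with M = N - (m-1)*tau."""
    N = len(x)
    M = N - (m - 1) * tau
    return np.array([x[i : i + m * tau : tau] for i in range(M)])

# example: a scalar series embedded in m=3 dimensions with delay tau=15
x = np.sin(0.1 * np.arange(1000))
X = delay_embed(x, m=3, tau=15)
print(X.shape)  # (970, 3)
```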
Behavioral treatment of social phobia in youth: does parent education training improve the outcome?
Social phobia is one of the most common anxiety disorders in children and adolescents, and it runs a fairly chronic course if left untreated. The goals of the present study were to evaluate whether a parent education course would improve the outcome for children with a primary diagnosis of social phobia and whether comorbidity at the start of treatment would impair the outcome of the social phobia. A total of 55 children, 8-14 years old, were randomly assigned to one of three conditions: 1) the child is treated, 2) the child is treated and a parent participates in the course, or 3) a 12-week wait-list. The treatment consisted of individual exposure and group social skills training based on the Beidel, Turner, and Morris (2000) SET-C. Children and parents were assessed pre-treatment, post-treatment, and at one-year follow-up with independent assessor ratings and self-report measures. Results showed that there was no significant difference between the two active treatments, and both were better than the wait-list. The treatment effects were maintained or furthered at the follow-up. Comorbidity did not lead to a worse outcome of the social phobia. Comorbid disorders improved significantly from pre- to post-treatment and from post-treatment to follow-up assessment without being targeted in therapy.
Networks, Crowds, and Markets - Reasoning About a Highly Connected World
Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.
Atorvastatin and fenofibrate have comparable effects on VLDL-apolipoprotein C-III kinetics in men with the metabolic syndrome.
OBJECTIVE The metabolic syndrome (MetS) is characterized by insulin resistance and dyslipidemia that may accelerate atherosclerosis. Disturbed apolipoprotein (apo) C-III metabolism may account for dyslipidemia in these subjects. Atorvastatin and fenofibrate decrease plasma apoC-III, but the underlying mechanisms are not fully understood. METHODS AND RESULTS The effects of atorvastatin (40 mg/d) and fenofibrate (200 mg/d) on the kinetics of very-low-density lipoprotein (VLDL)-apoC-III were investigated in a crossover trial of 11 MetS men. VLDL-apoC-III kinetics were studied after intravenous d3-leucine administration, using gas chromatography-mass spectrometry and compartmental modeling. Compared with placebo, both atorvastatin and fenofibrate significantly decreased (P<0.001) plasma concentrations of triglyceride, apoB, apoB-48, and total apoC-III. Atorvastatin, but not fenofibrate, significantly decreased plasma apoA-V concentrations (P<0.05). Both agents significantly increased the fractional catabolic rate (+32% and +30%, respectively) and reduced the production rate of VLDL-apoC-III (-20% and -24%, respectively), accounting for a significant reduction in VLDL-apoC-III concentrations (-41% and -39%, respectively). Total plasma apoC-III production rates were not significantly altered by the 2 agents. Neither treatment altered insulin resistance or body weight. CONCLUSIONS Both atorvastatin and fenofibrate have dual regulatory effects on VLDL-apoC-III kinetics in MetS; reduced production and increased fractional catabolism of VLDL-apoC-III may explain the triglyceride-lowering effect of these agents.
Development of extensible open information extraction
Open information extraction is used to extract information from open-domain documents such as articles on the Internet. Most extraction techniques share a common extraction flow: collecting documents, preprocessing the documents, extracting information, postprocessing the extracted relations, and evaluating the extracted relations. Implementing a new technique currently requires the user to build this whole process from scratch, so implementation becomes complex and time-consuming. Therefore, we develop a prototype acting as a framework that abstracts and splits the whole extraction process into multiple separate processes implemented as components or plugins, allowing users to reuse already existing extraction subprocesses.
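As an illustration of such a framework (the class and stage names here are hypothetical, not taken from the prototype), a pipeline that splits the flow into pluggable stages might look like this:

```python
# Hypothetical plugin-style pipeline mirroring the extraction flow above:
# collect -> preprocess -> extract -> postprocess -> evaluate.
class Pipeline:
    def __init__(self, collector, preprocessors, extractor, postprocessors, evaluator):
        self.collector = collector
        self.preprocessors = preprocessors
        self.extractor = extractor
        self.postprocessors = postprocessors
        self.evaluator = evaluator

    def run(self):
        docs = self.collector()
        for pre in self.preprocessors:
            docs = [pre(d) for d in docs]
        relations = [r for d in docs for r in self.extractor(d)]
        for post in self.postprocessors:
            relations = post(relations)
        return self.evaluator(relations)

# swapping in a new extraction technique only replaces the extractor plugin
pipe = Pipeline(
    collector=lambda: ["Paris is the capital of France."],
    preprocessors=[str.lower],
    extractor=lambda d: [tuple(d.rstrip(".").split(" is the capital of "))],
    postprocessors=[lambda rels: list(set(rels))],
    evaluator=lambda rels: rels,
)
print(pipe.run())  # [('paris', 'france')]
```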
Jointly Trained Sequential Labeling and Classification by Sparse Attention Neural Networks
Sentence-level classification and sequential labeling are two fundamental tasks in language understanding. While these two tasks are usually modeled separately, in reality they are often correlated, for example in intent classification and slot filling, or in topic classification and named-entity recognition. In order to utilize the potential benefits of their correlations, we propose a jointly trained model for learning the two tasks simultaneously via Long Short-Term Memory (LSTM) networks. This model predicts the sentence-level category and the word-level label sequence from the stepwise output hidden representations of the LSTM. We also introduce a novel mechanism of “sparse attention” to weigh words differently based on their semantic relevance to sentence-level classification. The proposed method outperforms baseline models on the ATIS and TREC datasets.
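A minimal sketch of such a jointly trained model (PyTorch assumed; the abstract does not spell out the sparse-attention mechanism, so plain softmax attention stands in for it here):

```python
import torch
import torch.nn as nn

class JointTaggerClassifier(nn.Module):
    """One LSTM encoder feeding a per-step tagging head and an
    attention-pooled sentence classification head."""
    def __init__(self, vocab, emb=100, hid=128, n_labels=10, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hid, n_labels)       # word-level labels
        self.attn = nn.Linear(2 * hid, 1)                # attention scores
        self.classifier = nn.Linear(2 * hid, n_classes)  # sentence category

    def forward(self, tokens):                 # tokens: (batch, seq)
        h, _ = self.lstm(self.embed(tokens))   # (batch, seq, 2*hid)
        tag_logits = self.tagger(h)
        a = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, seq)
        sentence = (a.unsqueeze(-1) * h).sum(dim=1)         # weighted pooling
        return tag_logits, self.classifier(sentence)

model = JointTaggerClassifier(vocab=5000)
tag_logits, cls_logits = model(torch.randint(0, 5000, (2, 12)))
# joint training: cross-entropy over tag_logits plus cross-entropy over cls_logits
```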
Empirical Bernstein Bounds and Sample-Variance Penalization
We give improved constants for data-dependent and variance-sensitive confidence bounds, called empirical Bernstein bounds, and extend these inequalities to hold uniformly over classes of functions whose growth function is polynomial in the sample size n. The bounds lead us to consider sample variance penalization, a novel learning method which takes into account the empirical variance of the loss function. We give conditions under which sample variance penalization is effective. In particular, we present a bound on the excess risk incurred by the method. Using this, we argue that there are situations in which the excess risk of our method is of order 1/n, while the excess risk of empirical risk minimization is of order 1/√n. We show some experimental results, which confirm the theory. Finally, we discuss the potential application of our results to sample compression schemes.
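For reference, the commonly cited form of the empirical Bernstein bound for i.i.d. random variables Z_1, ..., Z_n taking values in [0, 1] is reproduced below (quoted from memory, so the exact constants should be checked against the paper): with probability at least 1 − δ,

```latex
\[
\mathbb{E}[Z] \;\le\; \frac{1}{n}\sum_{i=1}^{n} Z_i
  \;+\; \sqrt{\frac{2 V_n \ln(2/\delta)}{n}}
  \;+\; \frac{7 \ln(2/\delta)}{3(n-1)}
\]
```

where V_n is the sample variance of Z_1, ..., Z_n. The variance-dependent middle term is what allows the faster 1/n rate mentioned above when the loss variance is small.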
A weight-neutral versus weight-loss approach for health promotion in women with high BMI: A randomized-controlled trial
Weight loss is the primary recommendation for health improvement in individuals with high body mass index (BMI) despite limited evidence of long-term success. Alternatives to weight-loss approaches (such as Health At Every Size, a weight-neutral approach) have been met with their own concerns and require further empirical testing. This study compared the effectiveness of a weight-neutral versus a weight-loss program for health promotion. Eighty women, aged 30-45 years, with high body mass index (BMI ≥ 30 kg/m^2) were randomized to 6 months of facilitator-guided weekly group meetings using structured manuals that emphasized either a weight-loss or a weight-neutral approach to health. Health measurements occurred at baseline, post-intervention, and 24 months post-randomization. Measurements included blood pressure, lipid panels, blood glucose, BMI, weight, waist circumference, hip circumference, distress, self-esteem, quality of life, dietary risk, fruit and vegetable intake, intuitive eating, and physical activity. Intention-to-treat analyses were performed using linear mixed-effects models to examine group-by-time interaction effects and between- and within-group differences. Group-by-time interactions were found for LDL cholesterol, intuitive eating, BMI, weight, and dietary risk. At post-intervention, the weight-neutral program had larger reductions in LDL cholesterol and greater improvements in intuitive eating; the weight-loss program had larger reductions in BMI and weight, and larger (albeit temporary) decreases in dietary risk. Significant positive changes were observed overall between baseline and 24-month follow-up for waist-to-hip ratio, total cholesterol, physical activity, fruit and vegetable intake, self-esteem, and quality of life. These findings highlight that numerous health benefits, even in the absence of weight loss, are achievable and sustainable in the long term using a weight-neutral approach. The trial positions weight-neutral programs as a viable health-promotion alternative to weight-loss programs for women of high weight.
A 200-V 98.16%-Efficiency Buck LED Driver Using Integrated Current Control to Improve Current Accuracy for Large-Scale Single-String LED Backlighting Applications
This paper presents an average-current-mode buck dimmable light-emitting diode (LED) driver for large-scale single-string LED backlighting applications. The proposed integrated current control technique can provide exact current control signals by using an autozeroed integrator to enhance the accuracy of the average LED current while driving a large number of LEDs. The adoption of discontinuous low-side current sensing reduces power loss, and a fast-settling technique allows the LED driver to enter the steady state within three switching cycles after the dimming signal is triggered. Implemented in a 0.35-μm HV CMOS process, the proposed LED driver achieves 1.7% LED current error and 98.16% peak efficiency over an input voltage range of 110 to 200 V while driving 30 to 50 LEDs.
Association between sleep apnea severity and blood coagulability: treatment effects of nasal continuous positive airway pressure
A prothrombotic state may contribute to the elevated cardiovascular risk in patients with obstructive sleep apnea (OSA). We investigated the relationship between apnea severity and hemostasis factors and effect of continuous positive airway pressure (CPAP) treatment on hemostatic activity. We performed full overnight polysomnography in 44 OSA patients (mean age 47±10 years), yielding apnea–hypopnea index (AHI) and mean nighttime oxyhemoglobin saturation (SpO2) as indices of apnea severity. For treatment, subjects were double-blind randomized to 2 weeks of either therapeutic CPAP (n=18), 3 l/min supplemental nocturnal oxygen (n=16) or placebo–CPAP (<1 cm H2O) (n=10). Levels of von Willebrand factor antigen (VWF:Ag), soluble tissue factor (sTF), D-dimer, and plasminogen activator inhibitor (PAI)-1 antigen were measured in plasma pre- and posttreatment. Before treatment, PAI-1 was significantly correlated with AHI (r=0.47, p=0.001) and mean nighttime SpO2 (r=−0.32, p=0.035), but these OSA measures were not significantly related with VWF:Ag, sTF, and D-dimer. AHI was a significant predictor of PAI-1 (R 2=0.219, standardized β=0.47, p=0.001), independent of mean nighttime SpO2, body mass index (BMI), and age. A weak time-by-treatment interaction for PAI-1 was observed (p=0.041), even after adjusting for age, BMI, pre-treatment AHI, and mean SpO2 (p=0.046). Post hoc analyses suggested that only CPAP treatment was associated with a decrease in PAI-1 (p=0.039); there were no changes in VWF:Ag, sTF, and D-dimer associated with treatment with placebo–CPAP or with nocturnal oxygen. Apnea severity may be associated with impairment in the fibrinolytic capacity. To the extent that our sample size was limited, the observation that CPAP treatment led to a decrease in PAI-1 in OSA must be regarded as tentative.
IL-21 acts directly on B cells to regulate Bcl-6 expression and germinal center responses
During T cell-dependent responses, B cells can either differentiate extrafollicularly into short-lived plasma cells or enter follicles to form germinal centers (GCs). Interactions with T follicular helper (Tfh) cells are required for GC formation and for selection of somatically mutated GC B cells. Interleukin (IL)-21 has been reported to play a role in Tfh cell formation and in B cell growth, survival, and isotype switching. To date, it is unclear whether the effect of IL-21 on GC formation is predominantly a consequence of this cytokine acting directly on the Tfh cells or if IL-21 directly influences GC B cells. We show that IL-21 acts in a B cell-intrinsic fashion to control GC B cell formation. Mixed bone marrow chimeras identified a significant B cell-autonomous effect of IL-21 receptor (R) signaling throughout all stages of the GC response. IL-21 deficiency profoundly impaired affinity maturation and reduced the proportion of IgG1(+) GC B cells but did not affect formation of early memory B cells. IL-21R was required on GC B cells for maximal expression of Bcl-6. In contrast to the requirement for IL-21 in the follicular response to sheep red blood cells, a purely extrafollicular antibody response to Salmonella dominated by IgG2a was intact in the absence of IL-21.
Extending EMV Tokenised Payments to Offline-Environments
Tokenisation has been adopted by the payment industry as a method to prevent Personal Account Number (PAN) compromise in EMV (Europay MasterCard Visa) transactions. The current architecture specified in EMV tokenisation requires online connectivity during transactions. However, it is not always possible to have online connectivity. We identify three main scenarios where fully offline transaction capability is considered to be beneficial for both merchants and consumers. Scenarios include making purchases in locations without online connectivity, when a reliable connection is not guaranteed, and when it is cheaper to carry out offline transactions due to higher communication/payment processing costs involved in online approvals. In this study, an offline contactless mobile payment protocol based on EMV tokenisation is proposed. The aim of the protocol is to address the challenge of providing secure offline transaction capability when there is no online connectivity on either the mobile or the terminal. The solution also provides end-to-end encryption to provide additional security for transaction data other than the token. The protocol is analysed against protocol objectives and we discuss how the protocol can be extended to prevent token relay attacks. The proposed solution is subjected to mechanical formal analysis using Scyther. Finally, we implement the protocol and obtain performance measurements.
Evaluation of the immunomodulatory effect of melatonin on the T-cell response in peripheral blood from systemic lupus erythematosus patients.
Systemic lupus erythematosus (SLE) is an autoimmune disorder characterized by the production of antinuclear autoantibodies. In addition, the involvement of CD4+ T-helper (Th) cells in SLE has become increasingly evident. Although the role of melatonin has been tested in some experimental models of lupus with inconclusive results, there are no studies evaluating the melatonin effect on cells from patients with SLE. Therefore, the aim of this study was to analyse the role of in vitro administered melatonin in the immune response of peripheral leukocytes from treated patients with SLE (n = 20) and age- and sex-matched healthy controls. Melatonin was tested for its effect on the production of key Th1, Th2, Th9, Th17 and innate cytokines. The frequency of T regulatory (Treg) cells and the expression of FOXP3 and BAFF were also explored. Our results are the first to show that melatonin decreased the production of IL-5 and to describe the novel role of melatonin in IL-9 production by human circulating cells. Additionally, we highlighted a two-faceted melatonin effect. Although it acted as a prototypical anti-inflammatory compound, reducing exacerbated Th1 and innate responses in PHA-stimulated cells from healthy subjects, it caused the opposite actions in immune-depressed cells from patients with SLE. Melatonin also increased the number of Treg cells expressing FOXP3 and offset BAFF overexpression in SLE patient cells. These findings open a new field of research in lupus that could lead to the use of melatonin as treatment or cotreatment for SLE.
AI Planning: Systems and Techniques
A long-standing problem in the field of automated reasoning is designing systems that can describe a set of actions (or a plan) that can be expected to allow the system to reach a desired goal. Ideally, this set of actions is then passed to a robot, a manufacturing system, or some other form of effector, which can follow the plan and produce the desired result. The design of such planners has been with AI since its earliest days, and a large number of techniques have been introduced in progressively more ambitious systems over a long period. In addition, planning research has introduced many problems to the field of AI. Some examples are the representation of and reasoning about time, causality, and intentions; physical or other constraints on suitable solutions; uncertainty in the execution of plans; sensation and perception of the real world and the holding of beliefs about it; and multiple agents who might cooperate or interfere. Planning problems, like most AI topics, have been attacked in two major ways: approaches that try to understand and solve the general problem without the use of domain-specific knowledge and approaches that directly use domain heuristics. In planning, these approaches are often referred to as domain dependent (those that use domain-specific heuristics to control the planner's operation) and domain independent (those in which the planning knowledge representation and algorithms are expected to work for a reasonably large variety of application domains). The issues involved in the design of domain-dependent planners are those generally found in applied approaches to AI: the need to justify solutions, the difficulty of knowledge acquisition, and the fact that the design principles might not map well from one application domain to another. Work in domain-independent planning has formed the bulk of AI research in planning. The long history of these efforts (figure 1) has led to the discovery of many recurring problems as well as to certain standard solutions. This article reviews research in the development of plan generation systems. Our goal is to familiarize the reader with some of the important problems that have arisen in the design of planning systems and to discuss some …
Hierarchical Deep Reinforcement Learning Agent with Counter Self-play on Competitive Games
Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision-making problems in complex environments. However, many difficult multi-agent competitive games, especially real-time strategy games, are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this (OpenAI, 2017; Vinyals et al., 2017). Moreover, when the opponents in a competitive game are suboptimal, the current Nash-Equilibrium-seeking self-play algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own. This suggests that a learning algorithm that goes beyond conventional self-play is necessary. We develop Hierarchical Agent with Self-Play, a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies obtained from Counter Self-Play (CSP). We demonstrate that the ensemble policy generated by Hierarchical Agent with Self-Play can achieve better performance when facing unseen opponents that use sub-optimal policies. On a motivating iterated Rock-Paper-Scissors game and a partially observable real-time strategy game (http://generals.io/), we are led to the conclusion that Hierarchical Agent with Self-Play can perform better than conventional self-play, as well as achieve a 77% win rate against FloBot, an open-source agent ranked at position number 2 on the online leaderboards.
Bayesian Policy Gradients via Alpha Divergence Dropout Inference
Policy gradient methods have had great success in solving continuous control tasks, yet the stochastic nature of such problems makes deterministic value estimation difficult. We propose an approach which instead estimates a distribution by fitting the value function with a Bayesian Neural Network. We optimize an α-divergence objective with Bayesian dropout approximation to learn and estimate this distribution. We show that using the Monte Carlo posterior mean of the Bayesian value function distribution, rather than a deterministic network, improves stability and performance of policy gradient methods in continuous control MuJoCo simulations.
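A minimal sketch of the value-estimation part (PyTorch assumed; the α-divergence objective is omitted, so this only shows the MC-dropout network and the Monte Carlo posterior-mean evaluation):

```python
import torch
import torch.nn as nn

class DropoutValueNet(nn.Module):
    """Value network whose dropout layers double as an approximate posterior."""
    def __init__(self, obs_dim, hid=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hid), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hid, hid), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hid, 1),
        )

    def forward(self, obs):
        return self.net(obs)

def posterior_mean_value(model, obs, n_samples=20):
    """Monte Carlo posterior mean: average stochastic forward passes
    with dropout kept active at evaluation time (MC dropout)."""
    model.train()  # keep dropout on
    with torch.no_grad():
        return torch.stack([model(obs) for _ in range(n_samples)]).mean(dim=0)

values = posterior_mean_value(DropoutValueNet(obs_dim=8), torch.randn(4, 8))
print(values.shape)  # torch.Size([4, 1])
```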
Recent advances in malaria genomics and epigenomics
Malaria continues to impose a significant disease burden on low- and middle-income countries in the tropics. However, revolutionary progress over the last 3 years in nucleic acid sequencing, reverse genetics, and post-genome analyses has generated step changes in our understanding of malaria parasite (Plasmodium spp.) biology and its interactions with its host and vector. Driven by the availability of vast amounts of genome sequence data from Plasmodium species strains, relevant human populations of different ethnicities, and mosquito vectors, researchers can consider any biological component of the malarial process in isolation or in the interactive setting that is infection. In particular, considerable progress has been made in the area of population genomics, with Plasmodium falciparum serving as a highly relevant model. Such studies have demonstrated that genome evolution under strong selective pressure can be detected. These data, combined with reverse genetics, have enabled the identification of the region of the P. falciparum genome that is under selective pressure and the confirmation of the functionality of the mutations in the kelch13 gene that accompany resistance to the major frontline antimalarial, artemisinin. Furthermore, the central role of epigenetic regulation of gene expression and antigenic variation and developmental fate in P. falciparum is becoming ever clearer. This review summarizes recent exciting discoveries that genome technologies have enabled in malaria research and highlights some of their applications to healthcare. The knowledge gained will help to develop surveillance approaches for the emergence or spread of drug resistance and to identify new targets for the development of antimalarial drugs and perhaps vaccines.
Designing a Programming Language for Home Automation
The AutoHAN project at the Cambridge Computer Laboratory is developing a range of technologies related to next-generation home networking and automation. Several aspects of the project involve the development of programming languages suitable for use in the home. Languages of this sort will clearly have a significant impact on the usability of domestic electronic devices, and greatly broaden the range of users who might benefit from psychology of programming research. As yet, the design of the AutoHAN languages is not complete. This paper therefore describes the design process and design criteria that have been applied so far. This is as a basis for discussion at this workshop, and for planning of further psychology of programming research related to the AutoHAN programme. Home Automation Background Home networking technologies are rapidly being deployed, if not in the average person’s house, then at least in many research environments. Large companies are experimenting with home networking. Nascent standardisation bodies are competing to define networking and communications protocols (e.g. Waldo/Sun Microsystem 1999, Microsoft Corporation 2000) by which appliances can communicate with control devices and with each other. These trends will soon bring significant challenges to the psychology of programming community. PPIG researchers have occasionally wondered in the past whether the proverbial impossibility of programming a video cassette recorder should be a concern of ours. A few people have investigated the area, but not many. Thomas Green, Alan Blackwell and Rachel Hewson have given some thought to the programming of central heating systems (Blackwell, Green & Hewson submitted), as have Anna Cox and Richard Young (2000). Harold Thimbleby (1993) has analysed the programming interfaces of microwave ovens, and even VCRs (Thimbleby 1991). But even though these results might bring psychological or usability insights, they are seldom considered to be directly relevant to programming language design. “Programming” a VCR is not real programming in the eyes of most computer scientists. But if you want to instruct your VCR to start recording from the front door security camera for 5 minutes after the time that someone presses the front doorbell, this seems a lot more like real programming. So the significance of home networking is that programming in your house may suddenly become a great deal more complicated. If your VCR can talk to your home security system over the home network, how will it know what to say? Perhaps the manufacturer will build in all the required functionality to both devices. But would you trust your VCR manufacturer to do this? Perhaps manufacturers will just keep themselves to themselves, and won’t be tempted to mess with other devices in your house. Would you trust them to do that? On balance, it does seem that homeowners will find themselves with the potential, and maybe the inclination, to define complicated behaviour in their own homes.
Exploring the Influence of Online Consumers’ Perception on Purchase Intention as Exemplified with an Online Bookstore
The purpose of this study is to use structural equation modeling (SEM) to explore the influence of online bookstore consumers’ perception on their purchase intention. Through a literature review, four constructs were used to establish a causal relationship between the perception of online shopping and consumers’ purchase intention. Questionnaires based on these four constructs were designed and distributed to the customers of an online bookstore; AMOS software was used as the analytical tool to build and confirm a SEM model. Results of this study show that product perception, shopping experience, and service quality have a positive and significant influence on consumers’ purchase intention, that perceived risk has a negative influence on consumers’ purchase intention, and that shopping experience is the most important of these factors.
Multilevel Inverter Topology for Renewable Energy Grid Integration
In this paper, a novel three-phase parallel grid-connected multilevel inverter topology with a novel switching strategy is proposed. This inverter is intended to feed a microgrid from renewable energy sources (RES), to overcome the problem of the polluted sinusoidal output in classical inverters, and to reduce component count, particularly for generating a multilevel waveform with a large number of levels. The proposed power converter consists of n two-level (n+1)-phase inverters connected in parallel, where n is the number of RES. The greater the number of RES, the greater the number of voltage levels, and the more faithful the output sinusoidal waveform. In the proposed topology, both voltage pulse width and height are modulated and precalculated by using pulse width and height modulation so as to reduce the number of switching states (i.e., switching losses) and the total harmonic distortion. The topology is investigated through simulations and validated experimentally with a laboratory prototype. Compliance with the IEEE 519-1992 and IEC 61000-3-12 standards is presented, and an exhaustive comparison of the proposed topology is made against the classical cascaded H-bridge topology.
Discussion: The tear trough ligament: anatomical basis for the tear trough deformity.
In their cadaveric study of 36 hemifaces, the authors describe a ligamentous structure that arises from the medial orbital rim periosteum and inserts into the dermis. The authors suggest that this is an osteocutaneous ligament that defines the anatomical area referred to as the tear trough or medial nasojugal crease. The conclusion of the article is that the tear trough ligament is the primary etiologic factor in the development of the tear trough deformity; effective treatment requires submuscular preperiosteal release to effect a marked correction. This study defines a tear trough ligament that originates from the maxilla near the medial canthal tendon inferior to the lacrimal crest. It travels to the medial pupillary line, where it becomes continuous with the bilaminar orbicularis retaining ligament. The dissections are meticulous; the anatomy is precise; and the work is clinically relevant in that it explains why release of the orbicularis origin helps to diminish the tear trough deformity. This classic article advances some of the authors’ previous work and ideas concerning lower eyelid and cheek anatomy. The present work addresses one of the most difficult areas and concepts in facial anatomy: the submuscular fascial network of the face. Work in this field began in the late 1800s with Juvara’s early work on spaces and fascial membranes.1 Other studies, including articles by Collier and Yglsias2 and Grodinsky and Holyoke3 from the 1930s, defined fusion zones as part of this network. More recently, the concept of facial ligaments has been investigated by a number of authors over the past 30 years. This article defines the ligamentous structure of the tear trough region. An anatomical discussion focuses on the presence, possible function, and exact extent of the facial ligaments. The authors’ work adds validity to the concept of submuscular ligamentous attachments of the face. The pertinent literature has been cited. Points of fixation have been described in the forehead, lower eyelid, and cheek. These may represent points of suspension, as suggested by the authors. Release of these ligaments is then required if the surgical goal is to reposition or elevate soft tissue. There exists another possible function for these soft-tissue ligaments. Fixation of the facial muscles to periosteum, a valid definition of the term “ligament,” has been noted in association with the orbicularis oculi muscle. This occurs when the orbicularis oculi muscle changes in position from deep to superficial, at the lid-cheek junction, as it courses toward the orbital rim. We have observed that this also occurs with the platysma muscle, and probably with the orbicularis oris muscle as well. It is noteworthy that ligaments are associated with all of these muscles as they course over a bony margin. In the case of the platysma muscle, there is a ligament that travels from the muscle to the mandibular border. With the orbicularis oris muscle, a ligament travels from the undersurface of the muscle to the maxilla and mandible. In each case, fascial attachments originate from the undersurface of these facial muscles and insert into bone when the muscle travels over a free bony margin. This adds a point of fixation for the action of the muscle, in effect providing an insertion point as a fulcrum for motion. These muscles (the orbicularis oculi and oris, and the platysma) do not otherwise have a strong origin at bone, unlike other facial muscles such as the levator labii or anguli muscles.
Conceptually, the orbicularis retaining ligament and the platysma ligament (the mandibular ligament) are bilaminar because fascia from the undersurface of the muscle above and below the bony margin contributes to the structure. This concept suggests this
The economy of brain network organization
The brain is expensive, incurring high material and metabolic costs for its size — relative to the size of the body — and many aspects of brain network organization can be mostly explained by a parsimonious drive to minimize these costs. However, brain networks or connectomes also have high topological efficiency, robustness, modularity and a 'rich club' of connector hubs. Many of these and other advantageous topological properties will probably entail a wiring-cost premium. We propose that brain organization is shaped by an economic trade-off between minimizing costs and allowing the emergence of adaptively valuable topological patterns of anatomical or functional connectivity between multiple neuronal populations. This process of negotiating, and re-negotiating, trade-offs between wiring cost and topological value continues over long (decades) and short (millisecond) timescales as brain networks evolve, grow and adapt to changing cognitive demands. An economical analysis of neuropsychiatric disorders highlights the vulnerability of the more costly elements of brain networks to pathological attack or abnormal development.
Diagnostic and prognostic value of serum antibodies against Pseudomonas aeruginosa in cystic fibrosis.
BACKGROUND Eradication of Pseudomonas aeruginosa in patients with cystic fibrosis (CF) is possible if initiated early in the course of colonisation. To detect P aeruginosa as early as possible is therefore a major goal. This study was undertaken to validate a commercialised test for the detection of serum Pseudomonas antibodies in patients with CF. METHODS A representative cross sectional analysis of serum antibodies against three Pseudomonas antigens (alkaline protease, elastase, and exotoxin A) was performed in 183 patients with CF of mean age 16.7 years and FEV1 85.9% predicted. The results were correlated with microbiological results from the previous 2 years to calculate sensitivity, specificity, positive and negative predictive values. The following 2 years were assessed to determine prognostic predictive values. RESULTS A combination of all three tested antibodies yielded the best results with a sensitivity of 86%, specificity of 96%, and a positive predictive value of 97%. These values were higher if only patients in whom sputum cultures were available were considered (n = 76, sensitivity 95%, specificity 100%, positive predictive value 100%). The prognostic positive predictive value was high in intermittently infected patients (83%) but low in patients free of infection (33%), whereas the prognostic negative predictive value was high in patients free of infection (78%) and low in intermittently infected patients (58%). CONCLUSIONS Regular determination of serum antibodies may be useful in CF patients with negative or intermittent but not with positive P aeruginosa status. A rise in antibody titres indicates probable infection and eradication treatment may be initiated even in the absence of microbiological detection of P aeruginosa.
Scattering Points in Parallel Coordinates
In this paper, we present a novel parallel coordinates design integrated with points (scattering points in parallel coordinates, SPPC), by taking advantage of both parallel coordinates and scatterplots. Different from most multiple views visualization frameworks involving parallel coordinates where each visualization type occupies an individual window, we convert two selected neighboring coordinate axes into a scatterplot directly. Multidimensional scaling is adopted to allow converting multiple axes into a single subplot. The transition between two visual types is designed in a seamless way. In our work, a series of interaction tools has been developed. Uniform brushing functionality is implemented to allow the user to perform data selection on both points and parallel coordinate polylines without explicitly switching tools. A GPU accelerated dimensional incremental multidimensional scaling (DIMDS) has been developed to significantly improve the system performance. Our case study shows that our scheme is more efficient than traditional multi-view methods in performing visual analysis tasks.
Lumped circuit model analysis of meander line antennas
The past decade has seen phenomenal advances in portable electronics technology, such as mobile phones, RFID tags, and MP3 players. This has led to the development of System in Package (SiP), which combines all the necessary components into a single package. Miniaturization of RF circuit technology has resulted in the need for miniaturized antennas. Meander line antennas are widely used in applications where compactness and miniaturization are key objectives. Although this antenna has been widely used, no simple analytical model is available apart from those using complicated numerical techniques. In this paper, we present a simple analytical model to calculate the resonant frequency of the antenna employing a lumped equivalent circuit model. The circuit response of a typical meander line antenna using our model has been compared with the simulated antenna response using the Method of Moments based simulator IE3D, with good agreement, thereby validating our approach.
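For concreteness, any lumped LC equivalent circuit resonates at f_0 = 1/(2π√(LC)); the sketch below evaluates this relation for hypothetical lumped values (the paper's contribution, deriving L and C from the meander geometry, is not reproduced here):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of a lumped LC model: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# hypothetical equivalent lumped values for a small meander line antenna
L_total = 12e-9    # total equivalent inductance of the meander sections, in H
C_total = 0.8e-12  # total equivalent capacitance, in F
print(f"{resonant_frequency(L_total, C_total) / 1e9:.2f} GHz")  # 1.62 GHz
```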
An approach for detecting, quantifying, and visualizing the evolution of a research field: A practical application to the Fuzzy Sets Theory field
This paper presents an approach to analyze the thematic evolution of a given research field. This approach combines performance analysis and science mapping for detecting and visualizing conceptual subdomains (particular themes or general thematic areas), allowing us to quantify and visualize the thematic evolution of a given research field. To do this, co-word analysis is used in a longitudinal framework in order to detect the different themes treated by the research field across the given time period. The performance analysis uses different bibliometric measures, including the h-index, with the purpose of measuring the impact of both the detected themes and thematic areas. The presented approach includes a visualization method for showing the thematic evolution of the studied field. Then, as an example, the thematic evolution of the Fuzzy Sets Theory field is analyzed using the two most important journals in the topic: Fuzzy Sets and Systems and IEEE Transactions on Fuzzy Systems.
Max-margin Learning for Lower Linear Envelope Potentials in Binary Markov Random Fields
The standard approach to max-margin parameter learning for Markov random fields (MRFs) involves incrementally adding the most violated constraints during each iteration of the algorithm. This requires exact MAP inference, which is intractable for many classes of MRF. In this paper, we propose an exact MAP inference algorithm for binary MRFs containing a class of higher-order models, known as lower linear envelope potentials. Our algorithm is polynomial in the number of variables and number of linear envelope functions. With tractable inference in hand, we show how the parameters and corresponding feature vectors can be represented in a max-margin framework for efficiently learning lower linear envelope potentials.
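To fix notation, a lower linear envelope potential is the minimum over a set of linear functions of a weighted sum of the binary variables; the sketch below (with hypothetical weights and envelope coefficients) simply evaluates such a potential for one assignment:

```python
def lower_linear_envelope(y, w, lines):
    """psi(y) = min_k (a_k * W + b_k), where W = sum_i w_i * y_i
    is a weighted count over the binary variables y."""
    W = sum(wi * yi for wi, yi in zip(w, y))
    return min(a * W + b for a, b in lines)

y = [1, 0, 1, 1]                   # a binary assignment
w = [0.25, 0.25, 0.25, 0.25]       # hypothetical per-variable weights
lines = [(1.0, 0.0), (0.2, 0.4), (0.0, 0.55)]  # hypothetical (a_k, b_k) pairs
print(lower_linear_envelope(y, w, lines))      # 0.55
```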
Generative and incremental implementation for a scripting interface
Many systems may benefit from scripting support, but implementing it is seldom trivial, especially if the system was not originally developed with scripting support in mind. In this paper we describe a generative, incremental process for creating an intuitive Python interface to a large, hierarchic COM library. The approach is illustrated with the original, real-life case study.
The Best of Both Worlds: Learning Geometry-based 6D Object Pose Estimation
We address the task of estimating the 6D pose of known rigid objects, from RGB and RGB-D input images, in scenarios where the objects are heavily occluded. Our main contribution is a new modular processing pipeline. The first module localizes all known objects in the image via an existing instance segmentation network. The next module densely regresses the object surface positions in its local coordinate system, using an encoder-decoder network. The third module is a purely geometry-based algorithm that outputs the final 6D object poses. While the first two modules are learned from data and the last one is not, we believe that this is the best of both worlds: geometry-based and learning-based algorithms for object 6D pose estimation. This is validated by achieving state-of-the-art results for RGB input and a slight improvement over the state of the art for RGB-D input. However, in contrast to previous work, we achieve these results with the same pipeline for RGB and RGB-D input. Furthermore, to obtain these results, we give a second contribution of a new 3D occlusion-aware and object-centric data augmentation procedure.
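Assuming the second module's dense predictions are converted into 2D-3D correspondences, the geometry-based third module can be approximated by PnP inside a RANSAC loop, as in this OpenCV sketch (synthetic correspondences; not the authors' exact algorithm):

```python
import cv2
import numpy as np

K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float64)

# synthesize consistent 2D-3D correspondences from a hypothetical ground-truth pose
rvec_gt = np.array([0.1, -0.2, 0.3])
tvec_gt = np.array([0.0, 0.0, 5.0])
pts_3d = np.random.rand(100, 3)                    # object-frame surface points
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, None)

# geometry-based pose module: PnP with RANSAC over the correspondences
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d.reshape(-1, 2), K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # (R, tvec) is the recovered 6D pose
    print(R, tvec.ravel())
```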
Exploratory Study of a Robot Approaching a Person in the Context of Handing Over an Object
This paper presents the results from a Human-Robot Interaction study that investigates participants’ preferences in terms of the robot approach direction (directionRAD), robot base approach interaction distance (distanceRBAID), robot handing over hand distance (distanceRHOHD), robot handing over arm gesture (gestureRHOAG), and the coordination of both the robot approaching and the gestureRHOAG in the context of a robot handing over an object to a seated person. The results from this study aim at informing the development of a Human Aware Manipulation Planner. Twelve participants with some previous human-robot interaction experience were recruited for the trial. The results show that a majority of the participants prefer the robot to approach from the front and hand them a can of soft drink in the front sector of their personal zone. The robot handing over hand position had the most influence on determining from where the robot should approach (i.e., directionRAD). Legibility and perception of risk seem to be the deciding factors in how participants choose their preferred robot arm-base approach coordination for handing over a can. Detailed discussions of the results conclude the paper.
A Service-Oriented Approach for Dynamic Chaining of Virtual Network Functions over Multi-Provider Software-Defined Networks
Emerging technologies such as Software-Defined Networking (SDN) and Network Function Virtualization (NFV) promise to address cost reduction and flexibility in network operation while enabling innovative network service delivery models. However, operational network service delivery solutions still need to be developed that actually exploit these technologies, especially at the multi-provider level. Indeed, the implementation of network functions as software running over a virtualized infrastructure, provisioned on a service basis, lets one envisage an ecosystem of network services that are dynamically and flexibly assembled by orchestrating Virtual Network Functions even across different provider domains, thereby coping with changeable user and service requirements and context conditions. In this paper we propose an approach that adopts Service-Oriented Architecture (SOA) technology-agnostic architectural guidelines in the design of a solution for orchestrating and dynamically chaining Virtual Network Functions. We discuss how SOA, NFV, and SDN may complement each other in realizing dynamic network function chaining through service composition specification, service selection, service delivery, and placement tasks. Then, we describe the architecture of a SOA-inspired NFV orchestrator, which leverages SDN-based network control capabilities to address an effective delivery of elastic chains of Virtual Network Functions. Preliminary results of prototype implementation and testing activities are also presented. We also describe the benefits for Network Service Providers that derive from adaptive network service provisioning in a multi-provider environment through the orchestration of computing and networking services to provide end users with an enhanced service experience.
Getting stress out of stressed-out stress granules.
Amyotrophic lateral sclerosis (ALS) pathology is linked to the aberrant aggregation of specific proteins, including TDP-43, FUS, and SOD1, but it is not clear why these aggregation events cause ALS. In this issue of The EMBO Journal, Mateju et al (2017) report a direct link between misfolded proteins accumulating in stress granules and the phase transition of these stress granules from liquid to solid. This discovery provides a model connecting protein aggregation to stress granule dysfunction.
Embarrassingly Parallel Variational Inference in Nonconjugate Models
We develop a parallel variational inference (VI) procedure for use in data-distributed settings, where each machine only has access to a subset of data and runs VI independently, without communicating with other machines. This type of “embarrassingly parallel” procedure has recently been developed for MCMC inference algorithms; however, in many cases it is not possible to directly extend this procedure to VI methods without requiring certain restrictive exponential family conditions on the form of the model. Furthermore, most existing (nonparallel) VI methods are restricted to use on conditionally conjugate models, which limits their applicability. To combat these issues, we make use of the recently proposed nonparametric VI to facilitate an embarrassingly parallel VI procedure that can be applied to a wider scope of models, including to nonconjugate models. We derive our embarrassingly parallel VI algorithm, analyze our method theoretically, and demonstrate our method empirically on a few nonconjugate models.
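The embarrassingly parallel pattern is easiest to see in the parametric special case where each machine returns a Gaussian approximation, since a product of Gaussians is again Gaussian; the paper combines nonparametric (mixture) approximations instead, so the sketch below is illustrative only:

```python
import numpy as np

def combine_gaussians(mus, Sigmas):
    """Product of Gaussian densities N(mu_m, Sigma_m): the combined
    precision is the sum of precisions, and the combined mean is the
    precision-weighted average of the local means."""
    precisions = [np.linalg.inv(S) for S in Sigmas]
    Sigma = np.linalg.inv(sum(precisions))
    mu = Sigma @ sum(P @ m for P, m in zip(precisions, mus))
    return mu, Sigma

# three hypothetical local variational posteriors over a 2-d parameter
mus = [np.array([1.0, 0.0]), np.array([1.2, -0.1]), np.array([0.9, 0.1])]
Sigmas = [np.eye(2) * s for s in (0.5, 0.4, 0.6)]
print(combine_gaussians(mus, Sigmas))
```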
Neuromorphic Silicon Neuron Circuits
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.
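As a concrete instance of the simplest model class surveyed, a leaky integrate-and-fire neuron can be simulated in a few lines (all parameter values illustrative):

```python
import numpy as np

def lif_spikes(I, dt=1e-4, tau=20e-3, R=1e7, v_rest=-0.065, v_th=-0.050, v_reset=-0.065):
    """Euler integration of tau * dV/dt = -(V - v_rest) + R*I;
    the neuron spikes and resets whenever V crosses v_th."""
    v, spike_times = v_rest, []
    for k, i_k in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * i_k)
        if v >= v_th:
            spike_times.append(k * dt)
            v = v_reset
    return spike_times

I = np.full(5000, 2e-9)  # 0.5 s of constant 2 nA input current
print(len(lif_spikes(I)), "spikes")
```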
Cassandra: a decentralized structured storage system
Cassandra is a distributed storage system for managing very large amounts of structured data spread out across many commodity servers, while providing highly available service with no single point of failure. Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different data centers). At this scale, small and large components fail continuously. The way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. While in many ways Cassandra resembles a database and shares many design and implementation strategies therewith, Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format. The Cassandra system was designed to run on cheap commodity hardware and handle high write throughput while not sacrificing read efficiency.
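A minimal in-memory sketch of the wide-column data model described above (row key, column families, dynamically controlled columns) may help fix ideas. The class and method names are our own, and the real system adds partitioning, replication, commit logs, and SSTable persistence.

```python
import time
from collections import defaultdict

# Minimal in-memory sketch of a wide-column data model: a keyspace maps a
# row key to column families, each holding a dynamic, client-controlled set
# of (column name -> (value, timestamp)) entries. Purely illustrative.

class WideColumnStore:
    def __init__(self):
        # keyspace[row_key][column_family][column] = (value, timestamp)
        self.keyspace = defaultdict(lambda: defaultdict(dict))

    def put(self, row_key, family, column, value):
        self.keyspace[row_key][family][column] = (value, time.time())

    def get_slice(self, row_key, family, start, end):
        """Return columns in [start, end); clients control layout and order."""
        cols = self.keyspace[row_key][family]
        return {c: v for c, v in sorted(cols.items()) if start <= c < end}

store = WideColumnStore()
store.put("user42", "activity", "2024-01-03:login", "ok")
store.put("user42", "activity", "2024-01-05:login", "ok")
print(store.get_slice("user42", "activity", "2024-01-01", "2024-01-04"))
```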
The learning and use of traversability affordance using range images on a mobile robot
We are interested in how the concept of affordances can affect our view of autonomous robot control, and how the results obtained from autonomous robotics can be reflected back upon the discussion and studies of the concept of affordances. In this paper, we studied how a mobile robot, equipped with a 3D laser scanner, can learn to perceive the traversability affordance and use it to wander in a room filled with spheres, cylinders and boxes. The results showed that after learning, the robot can wander around avoiding contact with non-traversable objects (i.e. boxes, upright cylinders, or lying cylinders in certain orientations), while moving over traversable objects (such as spheres, and lying cylinders in a rollable orientation with respect to the robot) and rolling them out of its way. We have shown that for each action approximately 1% of the perceptual features were relevant to determining whether the action is afforded or not, and that these relevant features are positioned in certain regions of the range image. The experiments were conducted both in a physics-based simulator and on a real robot.
A fully portable robot system for cleaning solar panels
Dust and dirt particles accumulating on PV panels decrease the solar energy reaching the cells, thereby reducing their overall power output. Hence, cleaning the PV panels is a problem of great practical engineering interest in solar PV power generation. In this paper, the problem is reviewed and methods for dust removal are discussed. A portable robotic cleaning device is developed and features a versatile platform which travels the entire length of a panel. An Arduino microcontroller is used to implement the robot's control system. Initial testing of the robot has provided favorable results and shows that such a system is viable. Future improvements on the design are discussed, especially different methods of transporting the robot from one panel to another. In conclusion, a robotic cleaning solution is found to be practical and can help maintain the efficiency of PV panels by keeping them clean.
Study on WeChat User Behaviors of University Graduates
WeChat was released in China by Tencent on January 21, 2011. After the user base reached 100 million in March 2012, Tencent announced that the number of WeChat users had reached 300 million on January 15, 2013. WeChat has become one of the most heated topics of the last two years, and its every move attracts attention from academia, industry, and enterprises. This study selected 50 university graduates as subjects and analyzed their WeChat usage patterns through investigation and surveys with statistical methods. The author first designed the user study and invited users to participate. Second, using the data collected from university graduates from 2013 to March 2014, the author analyzed the quantity and composition of their WeChat contacts; identified the sources, types, and goals of the public platforms they follow; and recorded the time, type, form, and temporal patterns of their first 100 posts in the "Moments" function. Through analyzing the data, the author intended to study how university graduates actually socialize and communicate with each other using WeChat, and the popularity of WeChat for building social networks with strangers. This paper also makes a contribution to improving the functions and services of WeChat. Furthermore, it analyzes both the positive and negative influences of WeChat on students' socialization.
Android Malware Characterization Using Metadata and Machine Learning Techniques
Android Malware has emerged as a consequence of the increasing popularity of smartphones and tablets. While most previous work focuses on inherent characteristics of Android apps to detect malware, this study analyses indirect features and meta-data to identify patterns in malware applications. Our experiments show that: (1) the permissions used by an application offer only moderate performance results; (2) other features publicly available at Android Markets are more relevant in detecting malware, such as the application developer and certificate issuer, and (3) compact and efficient classifiers can be constructed for the early detection of malware applications prior to code inspection or sandboxing.
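A hedged sketch of this kind of metadata-based classification, with a fabricated toy dataset, might look as follows; the feature names are illustrative stand-ins for market metadata such as developer and certificate issuer.

```python
# Sketch of a metadata-based malware classifier: categorical market metadata
# (developer, certificate issuer) plus permission flags, one-hot encoded and
# fed to a compact model. The tiny dataset below is fabricated.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

apps = [
    {"developer": "acme-soft", "cert_issuer": "verisign", "perm_SEND_SMS": 0},
    {"developer": "unknown-dev", "cert_issuer": "self-signed", "perm_SEND_SMS": 1},
    {"developer": "acme-soft", "cert_issuer": "verisign", "perm_SEND_SMS": 0},
    {"developer": "shady-dev", "cert_issuer": "self-signed", "perm_SEND_SMS": 1},
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = malware (toy ground truth)

model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(apps, labels)
print(model.predict([{"developer": "unknown-dev",
                      "cert_issuer": "self-signed", "perm_SEND_SMS": 1}]))
```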
Bayesian Landmark Learning for Mobile Robot Localization
To operate successfully in indoor environments, mobile robots must be able to localize themselves. Most current localization algorithms lack flexibility, autonomy, and often optimality, since they rely on a human to determine what aspects of the sensor data to use in localization (e.g., what landmarks to use). This paper describes a learning algorithm, called BaLL, that enables mobile robots to learn what features/landmarks are best suited for localization, and also to train artificial neural networks for extracting them from the sensor data. A rigorous Bayesian analysis of probabilistic localization is presented, which produces a rational argument for evaluating features, for selecting them optimally, and for training the networks that approximate the optimal solution. In a systematic experimental study, BaLL outperforms two other recent approaches to mobile robot localization.
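The recursive Bayesian update that underlies such landmark-based localization can be sketched on a discrete grid as below. The likelihood values are stand-ins for what a learned landmark detector (such as BaLL's neural networks) would output; this is an illustration of the general filter, not BaLL itself.

```python
import numpy as np

# Discrete-grid sketch of recursive Bayesian localization: the belief over
# poses is multiplied by the likelihood of the observed landmark, then
# convolved with a motion model.

def bayes_update(belief, likelihood):
    posterior = belief * likelihood           # measurement update
    return posterior / posterior.sum()        # normalize

def motion_update(belief, kernel):
    return np.convolve(belief, kernel, mode="same")  # prediction step

belief = np.ones(10) / 10                     # uniform prior over 10 cells
likelihood = np.array([.1, .1, .1, .8, .8, .1, .1, .1, .1, .1])  # "landmark seen"
belief = bayes_update(belief, likelihood)
belief = motion_update(belief, np.array([.1, .8, .1]))  # noisy one-step motion
print(np.round(belief, 3))
```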
Bilateral subthalamic stimulation effects on oral force control in Parkinson's disease
Dysarthria in Parkinson's disease (PD) consists of articulatory, phonatory and respiratory impairment. Bilateral subthalamic nucleus (STN) stimulation greatly improves motor disability, but its long-term effect on speech within a large group of patients has not been precisely evaluated. The aim of this study was to determine the effect of bilateral STN stimulation on oral force control in PD. We measured forces of the upper lip, lower lip and tongue in twenty-six PD patients treated with bilateral STN stimulation. Measurements of the articulatory organ force, as well as a motor evaluation using the Unified Parkinson's Disease Rating Scale (UPDRS), were made with and without STN stimulation. Maximal voluntary force (MVF), reaction time (RT), movement time (MT), imprecision of the peak force (PF) and the hold phase (HP) were all improved with STN stimulation during the articulatory force task, as well as the motor examination scores of the UPDRS. It seems that the beneficial STN stimulation-induced effect on articulatory forces persisted whatever the duration of post-surgical follow-up. However, dysarthria evaluated by the UPDRS was worse in two subgroups of patients with a one to two year and three to five year post-surgical follow-up, in comparison with a subgroup of patients with a three month follow-up. STN stimulation has a beneficial long-term effect on the articulatory organs involved in speech production, and this indicates that parkinsonian dysarthria is associated, at least in part, with an alteration in STN neuronal activity. Nevertheless, to confirm the persistence of the beneficial effect of STN stimulation on parkinsonian dysarthria, a longitudinal evaluation is still needed.
The Spanish Adaptation of the Sport Motivation Scale-II in Adolescent Athletes.
The aim of this study was to adapt and validate the Spanish version of the Sport Motivation Scale-II (S-SMS-II) in adolescent athletes. The sample included 766 Spanish adolescents (263 females and 503 males; average age = 13.71 ± 1.30 years old). The methodological steps established by the International Test Commission were followed. Four measurement models were compared employing the maximum likelihood estimation (with six, five, three, and two factors). Then, factorial invariance analyses were conducted and the effect sizes were calculated. Finally, the reliability was calculated using Cronbach's alpha, omega, and average variance extracted coefficients. The five-factor S-SMS-II showed the best indices of fit (Cronbach's alpha .64 to .74; goodness of fit index .971, root mean square error of approximation .044, comparative fit index .966). Factorial invariance was also verified across gender and between sport-federated athletes and non-federated athletes. The proposed S-SMS-II is discussed according to previous validated versions (English, Portuguese, and Chinese).
Mining Twitter for Adverse Drug Reaction Mentions : A Corpus and Classification Benchmark
With many adults using social media to discuss health information, researchers have begun diving into this resource to monitor or detect health conditions on a population level. Twitter, specifically, has grown to several hundred million users and could present a rich information source for the detection of serious medical conditions, like adverse drug reactions (ADRs). However, Twitter also presents unique challenges due to brevity, lack of structure, and informal language. We present a freely available, manually annotated corpus of 10,822 tweets, which can be used to train automated tools to mine Twitter for ADRs. We collected tweets using drug names as keywords, expanding coverage by applying an algorithm to generate misspelled versions of the drug names. We annotated each tweet for the presence of a mention of an ADR and, for those that had one, annotated the mention (including span and UMLS IDs of the ADRs). Our inter-annotator agreement for the binary classification had a Kappa value of 0.69, which may be considered substantial (Viera & Garrett, 2005). We evaluated the utility of the corpus by training two classes of machine learning algorithms: Naïve Bayes and Support Vector Machines. The results we present validate the usefulness of the corpus for automated mining tasks. The classification corpus is available from http://diego.asu.edu/downloads.
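A minimal sketch of the two classifier families evaluated (Naïve Bayes and SVM), applied to fabricated example tweets, is given below; the real corpus is far larger and the published benchmark uses richer features.

```python
# Sketch of the two classifier families the abstract evaluates, applied to
# ADR tweet classification. The four example tweets are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

tweets = [
    "this med gave me a terrible headache all day",   # mentions an ADR
    "started my new prescription today fingers crossed",  # no ADR mention
    "can't sleep at all since the new dose",           # mentions an ADR
    "picked up my refill this morning",                # no ADR mention
]
has_adr = [1, 0, 1, 0]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(tweets, has_adr)
    print(type(clf).__name__, model.predict(["awful nausea from this drug"]))
```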
Serum vitamin D and the metabolic syndrome among osteoporotic postmenopausal female patients of a family practice clinic in Jordan.
BACKGROUND Vitamin D deficiency and insufficiency and the metabolic syndrome are two common health issues worldwide. The association between these two health problems is subject to debate. OBJECTIVES This study aims to investigate the association between vitamin D deficiency or insufficiency and the metabolic syndrome in a sample of osteoporotic postmenopausal women attending a family practice clinic in Amman, Jordan. MATERIAL AND METHODS This was an observational cross-sectional study. It was carried out in the family practice clinic of Jordan University Hospital. The study included all postmenopausal osteoporotic women attending the clinic between June 2011 and May 2012, yielding a total of 326 subjects. The association between metabolic syndrome and serum vitamin D levels was investigated. RESULTS Waist circumference, body mass index, triglycerides and fasting blood sugar were significantly higher among postmenopausal women with metabolic syndrome, but HDL cholesterol was significantly lower (p<0.05). The prevalence of metabolic syndrome among all study participants was 42.9%. Triglycerides and LDL cholesterol were significantly higher among women with vitamin D deficiency or insufficiency (p<0.05). The prevalence of vitamin D deficiency or insufficiency was 45.7%. Among patients with metabolic syndrome, the prevalence of vitamin D deficiency or insufficiency was 50.7%. CONCLUSIONS Findings of the current study suggest a lack of relationship between serum vitamin D and the metabolic syndrome. However, a significant inverse relationship was found between serum vitamin D levels and both serum triglycerides and LDL levels.
Performance Comparison and Optimization of Channel Coding for Acoustic Communication in Shallow Waters
Coded communications using non-coherent orthogonal modulation and capacity-approaching binary channel codes, namely the low-density parity-check (LDPC) code and the turbo code, are investigated in this paper, focusing on the three main characteristic effects of an underwater channel: multi-path propagation, Doppler spread, and ambient noise. Additionally, a new method was implemented and tested successfully to identify and eliminate high-amplitude noise from the received dataset.
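The abstract does not detail the noise-elimination method; one common generic approach, shown as an assumption-laden sketch below, is an amplitude blanker that zeroes samples exceeding a robust threshold before decoding. This is an illustration of the general idea, not necessarily the paper's specific method.

```python
import numpy as np

# Generic impulsive-noise blanker: samples whose magnitude exceeds k robust
# standard deviations (estimated via the median absolute deviation) are
# zeroed before decoding. Thresholds and signals are illustrative.

def blank_impulsive_noise(x, k=4.0):
    sigma = 1.4826 * np.median(np.abs(x - np.median(x)))  # robust std (MAD)
    cleaned = x.copy()
    cleaned[np.abs(x) > k * sigma] = 0.0                  # blank outliers
    return cleaned

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 0.05 * np.arange(400))        # toy received signal
noisy = signal + 0.1 * rng.standard_normal(400)
noisy[rng.integers(0, 400, 8)] += 10.0                    # impulsive bursts
print(np.max(np.abs(noisy)), np.max(np.abs(blank_impulsive_noise(noisy))))
```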
Overview of Electric Motor Technologies Used for More Electric Aircraft (MEA)
This paper presents an overview of motor drive technologies used for safety-critical aerospace applications, with a particular focus placed on the choice of candidate machines and their drive topologies. Aircraft applications demand high reliability, high availability, and high power density while aiming to reduce weight, complexity, fuel consumption, operational costs, and environmental impact. New electric driven systems can meet these requirements and also provide significant technical and economic improvements over conventional mechanical, hydraulic, or pneumatic systems. Fault-tolerant motor drives can be achieved by partitioning and redundancy through the use of multichannel three-phase systems or multiple single-phase modules. Analytical methods are adopted to compare caged induction, reluctance, and PM motor technologies and their relative merits. The analysis suggests that the dual (or triple) three-phase PMAC motor drive may be a favored choice for general aerospace applications, striking a balance between necessary redundancy and undue complexity, while maintaining a balanced operation following a failure. The modular single-phase approach offers a good compromise between size and complexity but suffers from high total harmonic distortion of the supply and high torque ripple when faulted. For each specific aircraft application, a parametrical optimization of the suitable motor configuration is needed through a coupled electromagnetic and thermal analysis, and should be verified by finite-element analysis.
Ripple: Communicating through Physical Vibration
This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.
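As a toy illustration of the multicarrier idea (with invented frequencies, rates, and thresholds, not Ripple's actual parameters), each bit can gate one vibration carrier on or off and be recovered from FFT magnitudes on the receiver side:

```python
import numpy as np

# Toy multicarrier on-off keying in the spirit of the techniques listed:
# each bit gates one vibration carrier for a symbol; the receiver
# (accelerometer) recovers bits from FFT magnitudes. All parameters invented.
fs, sym_len = 1000, 1000              # 1 kHz sampling, one 1-second symbol
carriers = [100.0, 150.0, 200.0]      # one sub-carrier per bit
t = np.arange(sym_len) / fs

def modulate(bits):
    return sum(b * np.sin(2 * np.pi * f * t) for b, f in zip(bits, carriers))

def demodulate(sig):
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sym_len, 1 / fs)
    return [int(spec[np.argmin(np.abs(freqs - f))] > sym_len / 4)
            for f in carriers]

tx = modulate([1, 0, 1])
rx = tx + 0.2 * np.random.default_rng(1).standard_normal(sym_len)
print(demodulate(rx))                 # -> [1, 0, 1]
```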
G2 inferior vena cava filter: retrievability and safety.
PURPOSE To assess the retrievability of the G2 inferior vena cava (IVC) filter and factors influencing the safety and technical success of retrieval. MATERIALS AND METHODS From October 2006 through June 2008, G2 IVC filters were placed in 140 consecutive patients who needed prophylaxis against pulmonary embolism (PE). General indications for filter placement included history of thromboembolic disease (n = 98) and high risk for PE (n = 42); specific indications included contraindication to anticoagulation (n = 120), prophylaxis in addition to anticoagulation (n = 16), and failure of anticoagulation (n = 4). Filter dwell time, technical success of filter retrieval, and complications related to placement or retrieval were retrospectively evaluated in patients who underwent filter removal. RESULTS Twenty-seven attempts at G2 filter removal were made in 26 patients (12 men; age range, 24-88 years; mean age, 55.4 y) after a mean period of 122 days (range, 11-260 d). Data were collected retrospectively with institutional review board approval. Filter removal was successful in all 27 attempts (100%). Tilting of the filter (≥15°) occurred in five cases (18.5%), with probable filter incorporation into the right lateral wall of the IVC in one. Other complications of retrieval such as filter thrombosis, significant filter migration, filter fracture, and caval occlusion were not observed. CONCLUSIONS G2 IVC filter retrieval has a high technical success rate and a low complication rate. Technical success appears to be unaffected by the dwell time within the reported range.
Low-Power Near-Threshold 10T SRAM Bit Cells With Enhanced Data-Independent Read Port Leakage for Array Augmentation in 32-nm CMOS
The conventional six-transistor static random access memory (SRAM) cell allows high density and fast differential sensing but suffers from half-select and read-disturb issues. Although the conventional eight-transistor SRAM cell solves the read-disturb issue, it still suffers from low array efficiency due to deterioration of read bit-line (RBL) swing and $I_{\mathrm{on}}/I_{\mathrm{off}}$ ratio with increase in the number of cells per column. Previous approaches to solve these issues have been afflicted by low performance, data-dependent leakage, large area, and high energy per access. Therefore, in this paper, we present three iterations of SRAM bit cells with nMOS-only based read ports aimed to greatly reduce data-dependent read port leakage to enable 1k cells/RBL, improve read performance, and reduce area and power over conventional and 10T cell-based works. We compare the proposed work with other works by recording metrics from the simulation of a 128-kb SRAM constructed with divided-wordline-decoding architecture and a 32-bit word size. Apart from large improvements observed over conventional cells, up to 100-mV improvement in read-access performance, up to 19.8% saving in energy per access, and up to 19.5% saving in the area are also observed over other 10T cells, thereby enlarging the design and application gamut for memory designers in low-power sensors and battery-enabled devices.
A communication strategy and brochure for relatives of patients dying in the ICU.
BACKGROUND There is a need for close communication with relatives of patients dying in the intensive care unit (ICU). We evaluated a format that included a proactive end-of-life conference and a brochure to see whether it could lessen the effects of bereavement. METHODS Family members of 126 patients dying in 22 ICUs in France were randomly assigned to the intervention format or to the customary end-of-life conference. Participants were interviewed by telephone 90 days after the death with the use of the Impact of Event Scale (IES; scores range from 0, indicating no symptoms, to 75, indicating severe symptoms related to post-traumatic stress disorder [PTSD]) and the Hospital Anxiety and Depression Scale (HADS; subscale scores range from 0, indicating no distress, to 21, indicating maximum distress). RESULTS Participants in the intervention group had longer conferences than those in the control group (median, 30 minutes [interquartile range, 19 to 45] vs. 20 minutes [interquartile range, 15 to 30]; P<0.001) and spent more of the time talking (median, 14 minutes [interquartile range, 8 to 20] vs. 5 minutes [interquartile range, 5 to 10]). On day 90, the 56 participants in the intervention group who responded to the telephone interview had a significantly lower median IES score than the 52 participants in the control group (27 vs. 39, P=0.02) and a lower prevalence of PTSD-related symptoms (45% vs. 69%, P=0.01). The median HADS score was also lower in the intervention group (11, vs. 17 in the control group; P=0.004), and symptoms of both anxiety and depression were less prevalent (anxiety, 45% vs. 67%; P=0.02; depression, 29% vs. 56%; P=0.003). CONCLUSIONS Providing relatives of patients who are dying in the ICU with a brochure on bereavement and using a proactive communication strategy that includes longer conferences and more time for family members to talk may lessen the burden of bereavement. (ClinicalTrials.gov number, NCT00331877.)
Electrophoretic and spectroscopic characterization of the protein patterns formed in different surfactant solutions.
The complexations between catalase and the sodium perfluorooctanoate/sodium octanoate and sodium perfluorooctanoate/sodium dodecanoate systems have been studied by a combination of electrophoresis and spectroscopy measurements. The numbers of adsorption sites on the protein were determined from the observed increases of the zeta-potential as a function of surfactant concentration in the regions where the adsorption was a consequence of the hydrophobic effect. The Gibbs energies of adsorption of the surfactants onto the protein were evaluated; the results show that for all systems the Gibbs energies are negative and larger, in absolute value, at low surfactant concentrations, where binding to the high-energy sites takes place, and become less negative as more surfactant molecules bind, suggesting a saturation process. The role of hydrophobic interactions in these systems has been demonstrated to be predominant. Spectroscopy measurements suggest conformational changes in catalase depending on the surfactant mixture as well as the mixing ratio. No isosbestic point or shifts have been found, showing that catalase has, spectrophotometrically, one kind of binding site for these surfactant mixtures.
Enhancing semantic search using case-based modular ontology
In this paper, we present a semantic search approach based on a case-based modular ontology. Our work aims to improve ontology-based information retrieval by integrating traditional information retrieval, the use of ontologies, and case-based reasoning (CBR). In fact, our recommender approach uses CBR with an ontology for case representation and indexing. Ontology-based similarity is used to retrieve similar cases and to provide end users with alternative recommendations. The main contribution of this work is the use of a CBR mechanism and an ontological representation for two purposes: resource retrieval from the Web and ontology enrichment from cases.
Understanding the determinants of cloud computing adoption
Purpose – The purpose of this paper is to investigate the factors that affect the adoption of cloud computing by firms belonging to the high-tech industry. The eight factors examined in this study are relative advantage, complexity, compatibility, top management support, firm size, technology readiness, competitive pressure, and trading partner pressure. Design/methodology/approach – A questionnaire-based survey was used to collect data from 111 firms belonging to the high-tech industry in Taiwan. Relevant hypotheses were derived and tested by logistic regression analysis. Findings – The findings revealed that relative advantage, top management support, firm size, competitive pressure, and trading partner pressure characteristics have a significant effect on the adoption of cloud computing. Research limitations/implications – The research was conducted in the high-tech industry, which may limit the generalisability of the findings. Practical implications – The findings offer cloud computing service providers with a better understanding of what affects cloud computing adoption characteristics, with relevant insight on current promotions. Originality/value – The research contributes to the application of new technology cloud computing adoption in the high-tech industry through the use of a wide range of variables. The findings also help firms consider their information technologies investments when implementing cloud computing.
Designing high-energy lithium-sulfur batteries.
Due to their high energy density and low material cost, lithium-sulfur batteries represent a promising energy storage system for a multitude of emerging applications, ranging from stationary grid storage to mobile electric vehicles. This review aims to summarize major developments in the field of lithium-sulfur batteries, starting from an overview of their electrochemistry, technical challenges and potential solutions, along with some theoretical calculation results to advance our understanding of the material interactions involved. Next, we examine the most extensively-used design strategy: encapsulation of sulfur cathodes in carbon host materials. Other emerging host materials, such as polymeric and inorganic materials, are discussed as well. This is followed by a survey of novel battery configurations, including the use of lithium sulfide cathodes and lithium polysulfide catholytes, as well as recent burgeoning efforts in the modification of separators and protection of lithium metal anodes. Finally, we conclude with an outlook section to offer some insight on the future directions and prospects of lithium-sulfur batteries.
Color transfer in correlated color space
In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported pioneering work on color transfer. Their technique can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we propose an approach which can directly deal with color transfer in any 3D color space. From the view of statistics, we consider a pixel's value as a three-dimensional stochastic variable and an image as a set of samples, so the correlations between the three components can be measured by covariance. Our method uses the covariance between the three components of the pixel values in addition to the mean along each of the three axes. We then decompose the covariance matrix using the SVD algorithm to obtain a rotation matrix. Finally, we scale, rotate, and shift the pixel data of the target image to fit the data-point cluster of the source image in the current color space, obtaining a result image which takes on the source image's look and feel. Besides this global processing, a swatch-based method is introduced in order to manipulate the image's colors more elaborately. Experimental results confirm the validity and usefulness of our method.
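The described pipeline is compact enough to sketch directly in NumPy. The implementation below follows the abstract's steps (means, covariances, SVD-derived rotations, per-axis scaling) but is our own reconstruction rather than the authors' code.

```python
import numpy as np

# Sketch of the described pipeline: treat pixels as 3-D samples and match
# the target image's mean and covariance to the source image's, using
# rotations and scalings obtained from SVD of the covariance matrices.

def color_transfer(source, target):
    src = source.reshape(-1, 3).astype(np.float64)
    tgt = target.reshape(-1, 3).astype(np.float64)
    mu_s, mu_t = src.mean(0), tgt.mean(0)
    cov_s, cov_t = np.cov(src.T), np.cov(tgt.T)

    # SVD of each covariance gives a rotation (U) and per-axis variances (S).
    U_s, S_s, _ = np.linalg.svd(cov_s)
    U_t, S_t, _ = np.linalg.svd(cov_t)

    # Rotate target pixels into their decorrelated frame, rescale each axis
    # to the source variances, rotate into the source frame, shift to mean.
    A = U_s @ np.diag(np.sqrt(S_s)) @ np.diag(1 / np.sqrt(S_t)) @ U_t.T
    out = (tgt - mu_t) @ A.T + mu_s
    return np.clip(out, 0, 255).reshape(target.shape).astype(np.uint8)

# Usage: result = color_transfer(source_rgb, target_rgb) on uint8 RGB arrays.
```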
Vagal withdrawal to a sad film predicts subsequent recovery from depression.
Cardiac vagal tone, as indexed by abnormalities in the level and/or reactivity of respiratory sinus arrhythmia (RSA), has been related to psychiatric impairment, including risk for depression. Longitudinal studies of depression have focused on RSA levels and have found mixed support for the hypothesis that low RSA levels predict a more pernicious course of depression. The current investigation focuses on the relation between RSA reactivity and the course of depression. We measured depressed persons' RSA reactivity to sadness-, fear-, and amusement-inducing emotion films and reassessed participants' diagnostic status 6 months later. Depressed persons who exhibited a higher degree of vagal withdrawal to the sad film were more likely to recover from depression. Implications for the study of RSA in depression are discussed.
A New Control Strategy of an Electric-Power-Assisted Steering System
The control of electric-power-assisted steering (EPAS) systems is a challenging problem due to multiple objectives and the need for several pieces of information to implement the control. The control objectives are to generate assist torque with fast responses to driver's torque commands, ensure system stability, attenuate vibrations, transmit the road information to the driver, and improve the steering wheel returnability and free-control performance. The control must also be robust to modeling errors and parameter uncertainties. To achieve these objectives, a new control strategy is introduced in this paper. A reference model is used to generate an ideal motor angle that can guarantee the desired performance, and then, a sliding-mode control is used to track the desired motor angle. This reference model is built using a dynamic mechanical EPAS model, which is driven by the driver torque, the road reaction torque, and the desired assist torque. To implement the reference model with a minimum of sensors, a sliding-mode observer with unknown inputs and robust differentiators are employed to estimate the driver torque, the road reaction torque, and the system's states. With the proposed control strategy, there is no need for different algorithms, rules for switching between these algorithms, or fine-tuning of several parameters. In addition, our strategy improves system performance and robustness and reduces costs. The simulation results show that the proposed control structure can satisfy the desired performance.
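A generic sliding-mode tracking loop of the kind described, with an illustrative second-order motor model and invented gains, can be sketched as follows. It is not the paper's controller, which additionally includes the reference model, the unknown-input observer, and robust differentiators.

```python
import numpy as np

# Generic sliding-mode angle tracking: given a reference motor angle
# theta_ref, drive the error to the surface s = de + lam*e and apply a
# smoothed switching control. Motor model and gains are illustrative.
J, b = 0.01, 0.1          # toy inertia and damping
lam, K = 20.0, 5.0        # sliding-surface slope and switching gain
dt = 1e-3

theta, omega = 0.0, 0.0
for k in range(2000):
    t = k * dt
    theta_ref = 0.5 * np.sin(2 * np.pi * t)             # reference angle
    dtheta_ref = 0.5 * 2 * np.pi * np.cos(2 * np.pi * t)
    e = theta - theta_ref
    de = omega - dtheta_ref
    s = de + lam * e                                    # sliding variable
    u = -K * np.tanh(s / 0.05)                          # smoothed sign(s)
    omega += dt * (u - b * omega) / J                   # toy motor dynamics
    theta += dt * omega

print(f"final tracking error: {theta - theta_ref:.4f} rad")
```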
Transition-based Dependency DAG Parsing Using Dynamic Oracles
In most dependency parsing studies, dependency relations within a sentence are represented as a tree structure. Whilst the tree structure is sufficient to represent surface relations, deep dependencies, which may result in multi-headed relations, require more general dependency structures, namely Directed Acyclic Graphs (DAGs). This study proposes a new dependency DAG parsing approach which uses a dynamic oracle within a shift-reduce transition-based parsing framework. Although there is still room for improvement in performance with more feature engineering, we already obtain competitive performance compared to static oracles in our initial experiments conducted on the ITU-METU-Sabancı Turkish Treebank (IMST).
A single layer wideband differential-fed patch antenna array with SIW feeding networks
A single-layer wideband SIW-fed differential patch array is proposed in this paper. A SIW-CPS-CMS (substrate integrated waveguide - coupled lines - coupled microstrip line) transition is designed and has a bandwidth of about 50%, covering the E-band and W-band. The differential phase deviation between the coupled microstrip lines is less than 7.5° within the operating band. A 1×4 array and a 4×4 array are designed. The antenna is composed of a SIW parallel power divider network, the SIW-CPS-CMS transition, and a series differential-fed patch array. Simulated results show that the bandwidths of the 1×4 array and the 4×4 array are 37% and 12%, and the realized gains are 10.5-12 dB and 17.2-20.2 dB within the corresponding operating bands, respectively. The single-layer structure and the wide impedance and gain bandwidths make the proposed SIW-fed differential patch array a good candidate for automotive radar and other millimeter-wave applications.
Identity recognition with palm vein feature using local binary pattern rotation Invariant
Biometrics are used as one of many alternatives in recognition systems, and the palm vein is one of many features that can be used. Advantages of the palm vein are that it is not easily damaged and is rather hard to duplicate, because it is located deep inside the skin layers; it also cannot be seen with the naked eye or ordinary cameras. This research on a biometric recognition system uses the Local Binary Pattern Rotation Invariant (LBPROT) as the feature extraction method on palm vein images. The matching process itself is done using cosine distance to determine which enrolled image is nearest. Test results show that the optimal configuration of the method uses 16 regions, a radius of 15, 8 neighborhood pixels, and a 256×256 pixel image resolution. Another result, obtained by setting the threshold level at 0.085, is a FAR of 0.11698 and an FRR of 0.1175, with an EER of 11.7% and a recognition accuracy rate of 96%.
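A simplified sketch of rotation-invariant LBP plus cosine-distance matching is shown below. For brevity it uses the radius-1, 8-neighbor grid variant; the paper's configuration (16 regions, radius 15, interpolated sampling) follows the same idea.

```python
import numpy as np

# Simplified rotation-invariant LBP: compute the 8-bit neighbor code per
# pixel, take the minimum over all circular bit rotations, and histogram.
# Matching is by cosine distance between histograms.

def lbp_rot_invariant(img):
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    # 8 neighbors in circular order around each interior pixel
    offs = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    bits = [(img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx] >= c)
            for dy, dx in offs]
    code = sum(b.astype(np.uint16) << i for i, b in enumerate(bits))
    rot, best = code.copy(), code.copy()
    for _ in range(7):                     # minimum over 8 bit rotations
        rot = ((rot >> 1) | ((rot & 1) << 7)) & 0xFF
        best = np.minimum(best, rot)
    return np.bincount(best.ravel(), minlength=256).astype(float)

def cosine_distance(h1, h2):
    return 1 - (h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
h = lbp_rot_invariant(img)
print(cosine_distance(h, h))               # 0.0 for identical images
```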
Road Damage Detection Using Deep Neural Networks with Images Captured Through a Smartphone
Research on damage detection of road surfaces using image processing techniques has been actively conducted, achieving considerably high detection accuracies. Many studies only focus on the detection of the presence or absence of damage. However, in a real-world scenario, when the road managers from a governing body need to repair such damage, they need to clearly understand the type of damage in order to take effective action. In addition, in many of these previous studies, the researchers acquire their own data using different methods. Hence, there is no uniform road damage dataset available openly, leading to the absence of a benchmark for road damage detection. This study makes three contributions to address these issues. First, to the best of our knowledge, for the first time, a large-scale road damage dataset is prepared. This dataset is composed of 9,053 road damage images captured with a smartphone installed on a car, with 15,435 instances of road surface damage included in these road images. In order to generate this dataset, we cooperated with 7 municipalities in Japan and acquired road images for more than 40 hours. These images were captured in a wide variety of weather and illuminance conditions. In each image, we annotated the bounding box representing the location and type of damage. Next, we used a state-of-the-art object detection method using convolutional neural networks to train the damage detection model with our dataset, and compared the accuracy and runtime speed on both, using a GPU server and a smartphone. Finally, we demonstrate that the type of damage can be classified into eight types with high accuracy by applying the proposed object detection method. The road damage dataset, our experimental results, and the developed smartphone application used in this study are publicly available (https://github.com/sekilab/RoadDamageDetector/).
Digital pulse width modulator architectures
This paper presents a survey and classification of architectures for integrated circuit implementation of digital pulse-width modulators (DPWM) targeting digital control of high-frequency switching DC-DC power converters. Previously presented designs are identified as particular cases of the proposed classification. In order to optimize circuit resources in terms of occupied area and power consumption, a general architecture based on tapped delay lines is proposed, which includes segmentation of the input digital code to drive binary weighted delay cells and thermometer-decoded unary delay cells. Integrated circuit design of a particular example of the segmented DPWM is described. The segmented DPWM prototype chip operates at 1 MHz switching frequency and has low power consumption and very small silicon area (0.07 mm² in a standard 0.5 µm CMOS process). Experimental results validate the functionality of the proposed segmented DPWM.
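The segmentation idea is easy to model numerically: the upper bits of the duty command count coarse clock cycles, while the lower bits select a tap in a delay line that subdivides one clock period. The toy timing model below uses the prototype's 1 MHz switching frequency but an assumed 8-bit resolution with a 4/4 coarse/fine split.

```python
# Toy timing model of a segmented/hybrid DPWM. Bit widths are assumed for
# illustration; only the 1 MHz switching frequency matches the prototype.
F_SW = 1e6                                  # switching frequency
COARSE_BITS, FINE_BITS = 4, 4               # total resolution: 8 bits
T_SW = 1 / F_SW
t_clk = T_SW / (1 << COARSE_BITS)           # coarse counter clock period
t_tap = t_clk / (1 << FINE_BITS)            # delay-line tap resolution

def pulse_width(duty_code):
    coarse = duty_code >> FINE_BITS             # counted in clock cycles
    fine = duty_code & ((1 << FINE_BITS) - 1)   # selected delay tap
    return coarse * t_clk + fine * t_tap

for code in (0, 1, 16, 128, 255):
    print(f"code {code:3d}: {pulse_width(code) * 1e9:7.1f} ns")
```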
MIRPLib - A library of maritime inventory routing problem instances: Survey, core model, and benchmark results
This paper presents a detailed description of a particular class of deterministic single product maritime inventory routing problems (MIRPs), which we call deep-sea MIRPs with inventory tracking at every port. This class involves vessel travel times between ports that are significantly longer than the time spent in port and require inventory levels at all ports to be monitored throughout the planning horizon. After providing a comprehensive literature survey of this class, we introduce a core model for it cast as a mixed-integer linear program. This formulation is quite general and incorporates assumptions and families of constraints that are most prevalent in practice. We also discuss other modeling features commonly found in the literature and how they can be incorporated into the core model. We then offer a unified discussion of some of the most common advanced techniques used for improving the bounds of these problems. Finally, we present a library, called MIRPLib, of publicly available test problem instances for MIRPs with inventory tracking at every port. Despite a growing interest in combined routing and inventory management problems in a maritime setting, no data sets are publicly available, which represents a significant “barrier to entry” for those interested in related research. Our main goal for MIRPLib is to help maritime inventory routing gain maturity as an important and interesting class of planning problems. As a means to this end, we (1) make available benchmark instances for this particular class of MIRPs; (2) provide the mixed-integer linear programming community with a set of optimization problem instances from the maritime transportation domain in LP and MPS format; and (3) provide a template for other researchers when specifying characteristics of MIRPs arising in other settings. Best known computational results are reported for each instance.
Learning through inquiry: student difficulties with online course-based Material
This study investigates the case-based learning experience of 133 undergraduate veterinarian science students. Using qualitative methodologies from relational Student Learning Research, variation in the quality of the learning experience was identified, ranging from coherent, deep, quality experiences of the cases, to experiences that separated significant aspects, such as the online case histories, laboratory test results, and annotated images emphasizing symptoms, from the meaning of the experience. A key outcome of this study was that a significant percentage of the students surveyed adopted a poor approach to learning with online resources in a blended experience even when their overall learning experience was related to cohesive conceptions of veterinary science, and that the difference was even more marked for less successful students. The outcomes from the study suggest that many students are unsure of how to approach the use of online resources in ways that are likely to maximise benefits for learning in blended experiences, and that the benefits from case-based learning such as authenticity and active learning can be threatened if issues closely associated with qualitative variation arising from incoherence in the experience are not addressed.
A Low-Power 26-GHz Transformer-Based Regulated Cascode SiGe BiCMOS Transimpedance Amplifier
Low-power high-speed optical receivers are required to meet the explosive growth in data communication systems. This paper presents a 26 GHz transimpedance amplifier (TIA) that employs a transformer-based regulated cascode (RGC) input stage which provides passive negative-feedback gain that enhances the effective transconductance of the TIA's input common-base transistor, reducing the input resistance and isolating the parasitic photodiode capacitance. This allows for considerable bandwidth extension without significant noise degradation or power consumption. Further bandwidth extension is achieved through series inductive peaking to isolate the photodetector capacitance from the TIA input. The optimum choice of series inductive peaking value and key transformer parameters for bandwidth extension and jitter minimization is analyzed. Fabricated in a 0.25-µm SiGe BiCMOS technology and tested with an on-chip 150 fF capacitor to emulate a photodiode, the TIA achieves a 53 dBΩ single-ended transimpedance gain with a 26 GHz bandwidth and 21.3 pA/$\sqrt{\mathrm{Hz}}$ average input-referred noise current spectral density. Total chip power including output buffering is 28.2 mW from a 2.5 V supply, with the core TIA consuming 8.2 mW, and the chip area including pads is 960 µm × 780 µm.
Accountability in algorithmic decision making
A view from computational journalism.
A Compact UWB Band-Notched Printed Monopole Antenna With Defected Ground Structure
A simple and compact UWB printed monopole antenna with filtering characteristics is presented. The proposed antenna consists of a defected ground structure (DGS) and a radiating patch with an arc-shaped step that is notched by removing two squares at the bottom. By using a modified shovel-shaped defected ground structure, a band-notched characteristic is obtained that involves both the operating frequency band of Dedicated Short-Range Communication (DSRC) systems and the WLAN band. Omnidirectional H-plane radiation patterns and appropriate impedance characteristics are the main features of the proposed antenna, achieved by designing the lower edges of the radiating patch in the form of an arc-shaped step. The designed antenna has a small size of 15 × 18 mm² and provides an impedance bandwidth of more than 128% between 3.1 and 14 GHz for VSWR < 2, with a notch frequency band at 5.13-6.1 GHz.
LibGuides: Geography 283 - Carroll: Books
This guide is for John Carroll's Geography 283 - Introduction to Spatial Data class. It covers how to find books.
A PSO based integrated functional link net and interval type-2 fuzzy logic system for predicting stock market indices
This paper presents an integrated functional link interval type-2 fuzzy neural system (FLIT2FNS) for predicting stock market indices. The hybrid model uses a TSK (Takagi–Sugeno–Kang) type fuzzy rule base that employs type-2 fuzzy sets in the antecedent parts and the outputs from the Functional Link Artificial Neural Network (FLANN) in the consequent parts. Two other approaches, namely the integrated FLANN and type-1 fuzzy logic system and the Local Linear Wavelet Neural Network (LLWNN), are also presented for a comparative study. Backpropagation and particle swarm optimization (PSO) learning algorithms have been used independently to optimize the parameters of all the forecasting models. To test model performance, three well-known stock market indices, the Standard & Poor's 500 (S&P 500), the Bombay Stock Exchange (BSE), and the Dow Jones Industrial Average (DJIA), are used. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to assess the performance of all three models. Finally, it is observed that, of the three methods, FLIT2FNS performs the best irrespective of the time horizon, spanning from 1 day to 1 month.
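The two error measures used for the comparison are standard and can be stated concretely; the sketch below computes MAPE and RMSE on toy index values standing in for actual and predicted closes.

```python
import numpy as np

# The two error measures used to compare the forecasting models.

def mape(actual, predicted):
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))

actual = np.array([1500.0, 1512.0, 1498.0, 1520.0])      # toy index closes
predicted = np.array([1495.0, 1510.0, 1505.0, 1515.0])
print(f"MAPE = {mape(actual, predicted):.3f}%, RMSE = {rmse(actual, predicted):.3f}")
```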
Profiling mobile malware behaviour through hybrid malware analysis approach
Nowadays, the usage of mobile devices in communities worldwide has increased tremendously. With this proliferation of mobile devices, more users are able to access the internet for a variety of online applications and services. As the use of mobile devices and applications grows, the rate of vulnerability exploitation and the sophistication of attacks targeting mobile users are increasing as well. To date, Google's Android Operating System (OS) is among the most widely used OSs for mobile devices; its open design and ease of use have made it popular among developers and users. Despite the advantages Android-based mobile devices have, they have also invited malware authors to exploit mobile applications on the market. In light of this, this research focuses on investigating the behaviour of mobile malware through a hybrid approach. The hybrid approach correlates and reconstructs the results from static and dynamic malware analysis to produce a trace of malicious events. Based on the findings, this research proposes a general mobile malware behaviour model that can help identify the key features for detecting mobile malware on an Android platform device.
Reading order independent grouping proof for RFID tags
Using RFID to verify the simultaneous presence of more than one tag has generated interest since the Yoking Proof mechanism was proposed by A. Juels (2004). Various protocols have been proposed since. However, all of these protocols require that the tags be read in a certain order. This article proposes an order-independent protocol which improves efficiency and reduces failure rates. Privacy is also protected by not sending tag identities in plain text.
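To see why order independence is achievable at all, consider the toy aggregation below: if each tag computes a MAC over a shared reader nonce, XOR-combining the responses yields the same proof value whatever order the tags reply in. This illustrates only the order-independence property and is emphatically not the proposed protocol, which must also address replay resistance and privacy.

```python
import hashlib
import hmac
import os
from functools import reduce

# Toy order-independent aggregation: XOR of per-tag MACs over a shared
# nonce is the same in any reading order. Illustrative only.

def tag_response(tag_key, nonce):
    return hmac.new(tag_key, nonce, hashlib.sha256).digest()

def aggregate(responses):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), responses)

keys = [os.urandom(16) for _ in range(3)]     # one secret key per tag
nonce = os.urandom(16)                        # fresh challenge per session
proof_abc = aggregate([tag_response(k, nonce) for k in keys])
proof_cba = aggregate([tag_response(k, nonce) for k in reversed(keys)])
assert proof_abc == proof_cba                 # same proof, any reading order
```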
Automated Attendance Monitoring System using Android Platform
In today's world, a paper-based approach is followed for marking attendance, where students sign attendance sheets. This data is then manually entered into the system. Managing student attendance during lectures is a difficult task, and it becomes more difficult during the report generation phase. This is because the process of marking attendance and maintaining the data is not fully automated, and manual computation produces errors and wastes a lot of time. For this reason, the development of an Attendance Monitoring System (AMS) using the Android platform is proposed.
EXERCISE TRAINING IMPROVES ENDOTHELIAL FUNCTION IN RESISTANCE ARTERIES OF YOUNG PREHYPERTENSIVES
Prehypertension is associated with reduced conduit artery endothelial function and perturbation of oxidant/antioxidant status. It is unknown whether endothelial dysfunction extends to resistance arteries and whether exercise training affects oxidant/antioxidant balance in young prehypertensives. We examined resistance artery function using venous occlusion plethysmography measurement of forearm (FBF) and calf blood flow (CBF) at rest and during reactive hyperaemia (RH), as well as lipid peroxidation (8-iso-PGF2α) and antioxidant capacity (Trolox-equivalent antioxidant capacity; TEAC) before and after exercise intervention or time control. Forty-three unmedicated prehypertensive and 15 matched normotensive time controls met screening requirements and participated in the study (age: 21.1±0.8 years). Prehypertensive subjects were randomly assigned to resistance exercise training (PHRT; n=15), endurance exercise training (PHET; n=13) or time-control groups (PHTC; n=15). Treatment groups exercised 3 days per week for 8 weeks. Peak and total FBF were lower in prehypertensives than normotensives (12.7±1.2 ml min−1 per 100 ml tissue and 89.1±7.7 ml min−1 per 100 ml tissue vs 16.3±1.0 ml min−1 per 100 ml tissue and 123.3±6.4 ml min−1 per 100 ml tissue, respectively; P<0.05). Peak and total CBF were lower in prehypertensives than normotensives (15.3±1.2 ml min−1 per 100 ml tissue and 74±8.3 ml min−1 per 100 ml tissue vs 20.9±1.4 ml min−1 per 100 ml tissue and 107±9.2 ml min−1 per 100 ml tissue, respectively; P<0.05). PHRT and PHET improved humoral measures of TEAC (+24 and +30%) and 8-iso-PGF2α (−43 and −40%, respectively; P⩽0.05). This study provides evidence that young prehypertensives exhibit reduced resistance artery endothelial function, and that short-term (8 weeks) resistance or endurance training is effective in improving resistance artery endothelial function and oxidant/antioxidant balance in young prehypertensives.
An extension of conditional nonlinear optimal perturbation approach and its applications
The approach of conditional nonlinear optimal perturbation (CNOP) was previously proposed to find the optimal initial perturbation (CNOP-I) within a given constraint. In this paper, we extend the CNOP approach to search for the optimal combined mode of initial perturbations and model parameter perturbations. This optimal combined mode, also named CNOP, has two special cases: one is CNOP-I, which only involves initial perturbations and has the largest nonlinear evolution at a prediction time; the other is related only to parameter perturbations and is called CNOP-P, which causes the largest departure from a given reference state at a prediction time. The CNOP approach allows us to explore not only the first kind of predictability, related to initial errors, but also the second kind, associated with model parameter errors, and moreover, predictability problems involving the coexistence of initial errors and parameter errors. With the CNOP approach, we study ENSO predictability using a theoretical ENSO model. The results demonstrate that the prediction errors caused by the CNOP errors are only slightly larger than those yielded by the CNOP-I errors, so model parameter errors may play a minor role in producing significant uncertainties for ENSO predictions. Thus, it is clear that the CNOP errors and their resultant prediction errors illustrate the combined effect of initial errors and model parameter errors on predictability, and can be used to explore the relative importance of initial errors and parameter errors in yielding considerable prediction errors, which helps identify the dominant source of the errors that cause prediction uncertainties. It is finally expected that more realistic models will be adopted to investigate this use of CNOP.
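Reconstructed from the description above, in notation assumed here rather than taken from the paper ($M_t$ the nonlinear propagation operator, $U_0$ the reference initial state, $P$ the reference parameters, and $u_0$, $p$ the perturbations), the CNOP is the solution of a constrained maximization:

```latex
% CNOP as a constrained maximization (our notation, reconstructed from the
% abstract): the combined perturbation with the largest nonlinear departure.
J(u_0^{*}, p^{*}) = \max_{\|u_0\| \le \delta,\ \|p\| \le \sigma}
  \bigl\| M_t(U_0 + u_0;\, P + p) - M_t(U_0;\, P) \bigr\|
```

CNOP-I is then the special case with $p = 0$, and CNOP-P the case with $u_0 = 0$.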
Examining Technology Uses in the Classroom: Developing Fraction Sense Using Virtual Manipulative Concept Tutorials
This paper describes a classroom teaching experiment conducted in three fifth-grade mathematics classrooms with students of different achievement levels. Virtual fraction manipulative concept tutorials were used in three one-hour class sessions to investigate the learning characteristics afforded by these technology tools. The virtual fraction manipulative concept tutorials exhibited the following learning characteristics that supported students during their learning of equivalence and fraction addition: (1) allowed discovery learning through experimentation and hypothesis testing; (2) encouraged students to see mathematical relationships; (3) connected iconic and symbolic modes of representation explicitly; and (4) prevented common error patterns in fraction addition. Technology can play a major role in making sense of mathematics, and has been used in classrooms to enhance mathematics instruction. Students who use appropriate technology persist longer, enjoy learning more, and demonstrate gains in mathematics performance (Goldman & Pellegrino, 1987; Okolo, Bahr, & Reith, 1993). Recent developments in computer technology have created innovative technology tools, available at no cost on the World Wide Web, called virtual manipulatives. A virtual manipulative is "an interactive, Web-based visual representation of a dynamic object that presents opportunities for constructing mathematical knowledge" (Moyer, Bolyard, & Spikell, 2002). In this project, we used two virtual manipulative applets from the National Library of Virtual Manipulatives (http://matti.usu.edu/nlvm/index.html) and one applet from the NCTM electronic standards (http://nctm.org) to reinforce fraction concepts in three fifth-grade classes with students of different ability levels. Our goal was to examine learning characteristics that supported students during their learning of equivalence and fraction addition. The fraction applets used in this three-day project are concept tutorials with instructions on using the manipulatives and activities that accompany the applets. They provide links to the NCTM Standards (National Council of Teachers of Mathematics, 2000) and are beneficial for visual learners. Advantages of these sites are that they are interactive, give the user control and the ability to manipulate objects, and provide
Movie recommender system for profit maximization
Traditional recommender systems minimize prediction error with respect to users' choices. Recent studies have shown that recommender systems have a positive effect on the provider's revenue. In this paper we show that by providing a set of recommendations different from the one perceived as best according to user acceptance rate, the recommendation system can further increase the business' utility (e.g. revenue) without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to reveal, and the business, which wants to promote specific content. We performed a large body of experiments comparing a commercial state-of-the-art recommendation engine with a modified recommendation list, which takes into account the utility (or revenue) which the business obtains from each suggestion that is accepted by the user. We show that the modified recommendation list is more desirable for the business, as the end result gives the business a higher utility (or revenue). To study a possible reduction in satisfaction caused by providing the user with worse suggestions, we asked the users how they perceive the list of recommendations that they received. Differences in user satisfaction between the lists are negligible and not statistically significant. We also uncover a phenomenon whereby movie consumers prefer watching, and even paying for, movies that they have already seen in the past rather than movies that are new to them.
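The re-ranking idea can be stated as a one-line scoring rule: rank by expected business utility, optionally blended with raw acceptance probability to limit the satisfaction cost. The blending weight and the candidate list below are illustrative, not taken from the paper.

```python
# Sketch of utility-aware re-ranking: score each candidate by a blend of
# expected revenue p(accept) * revenue and raw acceptance probability.

def rerank(candidates, alpha=0.7):
    """candidates: list of (item, p_accept, revenue). alpha trades off
    expected revenue (alpha) against acceptance probability (1 - alpha)."""
    def score(c):
        _, p, r = c
        return alpha * p * r + (1 - alpha) * p
    return sorted(candidates, key=score, reverse=True)

movies = [("Movie A", 0.50, 1.0),   # likely accepted, low-margin catalog title
          ("Movie B", 0.35, 3.0),   # promoted title with higher revenue
          ("Movie C", 0.45, 0.5)]
print([m for m, _, _ in rerank(movies)])   # -> ['Movie B', 'Movie A', 'Movie C']
```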
RN-BSN education: 21st century barriers and incentives.
DESIGN Qualitative study using phenomenological inquiry. METHODS Purposive sample of six RN-BSN students participated in focus group interviews. Data were analysed using Colaizzi's phenomenological method. FINDINGS Incentives included: (1) being at the right time in life; (2) working with options; (3) achieving a personal goal; (4) the BSN provides a credible professional identity; (5) encouragement from contemporaries; and (6) user-friendly RN-BSN programmes. Barriers included: (1) time; (2) fear; (3) lack of recognition for past educational and life accomplishments; (4) equal treatment of BSN, ASN and diploma RNs; and (5) negative ASN or diploma school experience. CONCLUSIONS RN-BSN educational mobility is imperative as: (a) 70% of practicing RNs (USA) are educated at the ASN or diploma level; (b) nurse academicians and leaders are retiring in large numbers; and (c) research links BSN-educated RNs with improved patient outcomes. IMPLICATIONS FOR NURSING MANAGEMENT RN-BSN educational mobility is imperative to nurse managers and nurse administrators because: (a) research links BSN-educated RNs with improved patient outcomes; (b) nurse leaders and academicians are retiring in large numbers; and (c) approximately 70% of practicing RNs (USA) are educated at the associate degree or diploma level, with only 15% moving on to achieve a degree past the associate level. Measures to foster incentives and inhibit barriers (caring curricula and recognition of different educational levels) should be implemented at all levels of nursing practice, management and academia.
A Convolutional Encoder Model for Neural Machine Translation
The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. This allows encoding the entire source sentence simultaneously, compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art, and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than two times at the same or higher accuracy as a strong bi-directional LSTM baseline.
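A minimal convolutional encoder in this spirit, sketched in PyTorch, encodes all source positions in parallel. This illustrates the idea only; the paper's encoder additionally uses position embeddings, gated units, and residual structure tuned for translation.

```python
import torch
import torch.nn as nn

# Minimal convolutional encoder: a stack of 1-D convolutions over embedded
# source tokens, so all positions are processed in parallel rather than
# sequentially as in an LSTM.

class ConvEncoder(nn.Module):
    def __init__(self, vocab_size, d_model=256, layers=4, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, d_model, kernel, padding=kernel // 2)
            for _ in range(layers))

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)  # -> (batch, d_model, seq_len)
        for conv in self.convs:
            x = torch.relu(conv(x)) + x         # conv block with residual
        return x.transpose(1, 2)                # -> (batch, seq_len, d_model)

enc = ConvEncoder(vocab_size=10000)
out = enc(torch.randint(0, 10000, (2, 17)))    # encode a toy batch
print(out.shape)                                # torch.Size([2, 17, 256])
```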
Less is More: Financial Constraints and Innovative Efficiency
We show that financial constraints may benefit innovation by improving the efficiency of innovative activities. We measure firm-level innovative efficiency by patents (or patent citations) scaled by R&D (research and development) investment or the number of employees, and find that financial constraints are positively associated with innovative efficiency. Tests using the 1989 junk bond crisis as an exogenous shock to financial constraints suggest a causal interpretation for the link. Consistent with agency problems, the positive effect of financial constraints on innovative efficiency is stronger among firms with high excess cash holdings and low investment opportunities, and among firms in less competitive industries. Financial constraints appear to mitigate free cash flow problems that induce firms to make unproductive R&D investment in fields out of their direct expertise. Our findings point to a bright side of the role of financial constraints for corporate investment in intangible assets. JEL Classification: G32, G34, O32
Characterization of freshwater natural dissolved organic matter (DOM): mechanistic explanations for protective effects against metal toxicity and direct effects on organisms.
Dissolved organic matter (DOM) exerts direct and indirect influences on aquatic organisms. In order to better understand how DOM causes these effects, potentiometric titration was carried out for a wide range of autochthonous and terrigenous freshwater DOM isolates. The isolates were previously characterized by absorbance and fluorescence spectroscopy. Proton binding constants (pKa) were grouped into three classes: acidic (pKa ≤ 5), intermediate (5 < pKa ≤ 8.5) and basic (pKa > 8.5). Generally, the proton site densities (LT) showed maximum peaks at the acidic and basic ends, around pKa values of 3.5 and 10, respectively. More variably positioned peaks occurred in the intermediate pKa range. The acid-base titrations revealed the dominance of carboxylic and phenolic ligands, with a trend for more autochthonous sources to have higher total LT. A summary parameter, referred to as the Proton Binding Index (PBI), was introduced to summarize the chemical reactivity of DOMs based on the pKa and LT data. Then, the already published spectroscopic data were explored and the specific absorbance coefficient at 340 nm (SAC340), an index of DOM aromaticity, was found to exhibit a strong correlation with PBI. Thus, the tendencies observed in the literature that darker organic matter is more protective against metal toxicity and more effective in altering physiological processes in aquatic organisms can now be rationalized on the basis of chemical reactivity to protons.
Scale Space Meshing of Raw Data Point Sets
This paper develops a scale space strategy for orienting and meshing a raw point set exactly and completely. The scale space is based on the intrinsic heat equation, also called mean curvature motion (MCM). A simple iterative scheme implementing MCM directly on the raw point set is described, and a mathematical proof of its consistency with MCM is given. Points evolved by this MCM implementation can be trivially backtracked to their initial raw positions. Therefore, both the orientation and the mesh of the data point set obtained at a smooth scale can be transported back onto the original. The gain in visual accuracy is demonstrated on archaeological objects by comparison with several state-of-the-art meshing methods.
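A crude single-iteration sketch of this idea, assuming a Laplacian (move-toward-neighbour-barycenter) step as a simple stand-in for the paper's actual MCM projection scheme; `k` and `dt` are illustrative parameters. Returning the displacements makes the backtracking property concrete: each smoothed point minus its displacement is the original raw point.

```python
import numpy as np
from scipy.spatial import cKDTree

def mcm_step(points, k=16, dt=0.5):
    """One smoothing iteration in the spirit of mean curvature motion:
    each point moves toward the barycenter of its k nearest neighbours.
    Returns the smoothed points plus the displacements, so every smoothed
    point can be trivially backtracked (point - disp = raw position)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # (N, k) neighbour indices
    barycenters = points[idx].mean(axis=1)    # (N, 3)
    disp = dt * (barycenters - points)        # Laplacian step ~ MCM
    return points + disp, disp
```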
Multi-dimensional visualization of large-scale marine hydrological environmental data
With the continual deepening of research on marine environment simulation and information expression, the requirements on the realism of ocean data visualization results and on real-time interaction during the visualization process keep rising. This paper tackles the key technologies of three-dimensional interaction and GPU-based volume rendering, develops visualization software oriented to large-scale marine hydrological environmental data, and realizes oceanographic planar graphs, contour line rendering, isosurface rendering, factor field volume rendering and dynamic simulation of the current field. To better express the spatial characteristics and support real-time updates of massive marine hydrological environmental data, this study establishes nodes in the scene for the management of geometric objects, realizing high-performance dynamic rendering. The system employs CUDA (Compute Unified Device Architecture) parallel computing to improve computation rates, uses the NetCDF (Network Common Data Form) file format for data access, and applies GPU programming to realize fast volume rendering of marine water environmental factors. The marine hydrological environment visualization software developed here can simulate and display the properties and change processes of marine water environmental factors efficiently and intuitively.
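The NetCDF data-access step is easy to make concrete. A minimal sketch using the common netCDF4 Python bindings; the file name, variable name and dimension order below are hypothetical, not taken from the paper.

```python
from netCDF4 import Dataset  # common Python bindings for NetCDF files
import numpy as np

# Hypothetical file/variable names, for illustration only.
with Dataset("ocean_fields.nc") as nc:
    temp = nc.variables["temperature"][:]   # e.g. (time, depth, lat, lon)
    # Slice one time step and depth level for a planar or contour view;
    # a GPU volume renderer would instead upload the full 3-D block.
    surface = np.asarray(temp[0, 0])
```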
Dynamically Adaptive Multipath Routing based on AODV
Mobile ad hoc networks are typically characterized by high mobility and frequent link failures that result in low throughput and high end-to-end delay. To reduce the number of route discoveries caused by such broken paths, multipath routing can be utilized so that alternate paths are available. Current approaches to multipath routing make use of pre-computed routes determined during route discovery. These solutions, however, suffer during high mobility because the alternate paths are not actively maintained; hence, precisely when needed, the routes are often broken. To overcome this problem, we present an adaptive multipath solution. In this approach, multiple paths are formed during the route discovery process. All the paths are maintained by means of periodic update packets unicast along each path. These update packets measure the signal strength of each hop along the alternate paths. At any point in time, only the path with the strongest signal strength is used for data transmission. In this paper, we present two variations of our protocol and evaluate both with respect to two previously published multipath routing protocols. Simulation results show that the proposed solutions yield significant performance improvements.
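The path-selection rule can be sketched in a few lines. One natural reading of "the path with the strongest signal strength" is a max-min rule over per-hop measurements; that interpretation, and all names below, are our assumptions rather than the paper's specification.

```python
def best_path(paths):
    """Pick the path whose weakest hop has the strongest measured signal
    (a max-min rule). `paths` maps a path id to the per-hop signal
    strengths (dBm) gathered by the periodic update packets."""
    return max(paths, key=lambda p: min(paths[p]))

# Example: path "B" wins because its weakest hop (-71 dBm) is stronger
# than path "A"'s weakest hop (-85 dBm).
paths = {"A": [-62, -85, -70], "B": [-66, -71, -68]}
assert best_path(paths) == "B"
```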
Accuracy Assessment of Land Use / Land Cover Classification Using Remote Sensing and GIS
Remote sensing is an important tool for producing land-use and land-cover maps through a process called image classification. For the image classification process to be successful, several factors should be considered, including the availability of quality Landsat imagery and secondary data, a precise classification procedure, and the user's experience and expertise with the procedures. The objective of this research was to classify and map the land use/land cover of the study area using remote sensing and Geographic Information System (GIS) techniques. This research includes two parts: (1) land use/land cover (LULC) classification and (2) accuracy assessment. In this study, supervised classification was performed using a non-parametric rule. The major LULC classes were agriculture (65.0%), water body (4.0%), built-up areas (18.3%), mixed forest (5.2%), shrubs (7.0%), and barren/bare land (0.5%). The study had an overall classification accuracy of 81.7% and a kappa coefficient (K) of 0.722. The kappa coefficient is rated as substantial, and hence the classified image was found to be fit for further research. This study presents an essential source of information that planners and decision makers can use to sustainably plan the environment.
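For readers reproducing such an accuracy assessment, both reported figures can be computed directly from paired reference and predicted labels; the class codes below are toy data for illustration, not the study's samples.

```python
from sklearn.metrics import cohen_kappa_score, accuracy_score

# Reference labels come from ground-truth points; predicted labels come
# from the classified image (class codes here are hypothetical).
reference = [1, 1, 2, 3, 3, 4, 1, 2, 5, 6]
predicted = [1, 1, 2, 3, 4, 4, 1, 2, 5, 1]

print("overall accuracy:", accuracy_score(reference, predicted))   # 0.8
print("kappa:", round(cohen_kappa_score(reference, predicted), 3))
```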
Mechanics of Precurved-Tube Continuum Robots
This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on “guiding” environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.
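As a simplified, torsion-free special case of such beam-mechanics models, fully overlapping concentric tubes with aligned precurvatures bend to a stiffness-weighted average of the individual precurvatures (standard notation, not taken verbatim from the paper):

$$\kappa_{\mathrm{eq}} \;=\; \frac{\sum_i E_i I_i \,\kappa_i}{\sum_i E_i I_i},$$

where $E_i$ is the Young's modulus, $I_i$ the second moment of area of the cross-section, and $\kappa_i$ the precurvature of tube $i$. The paper's contribution goes beyond this special case, including the torsional effects that the experiments show are essential.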
Imaging Breathing Rate in the CO2 Absorption Band
Following up on our previous work, we have developed one more non-contact method to measure human breathing rate. We have retrofitted our mid-wave infra-red (MWIR) imaging system with a narrow band-pass filter in the CO2 absorption band (4.3 μm). This improves the contrast between the foreground (i.e., expired air) and the background (e.g., a wall). Based on the radiation information within the breath flow region, we obtain the mean dynamic thermal signal. This signal is quasi-periodic due to the interleaving of high and low intensities corresponding to expirations and inspirations, respectively. We sample the signal at a constant rate and then determine the breathing frequency through Fourier analysis. We have performed experiments on 9 subjects at distances ranging from 6 to 8 ft. We compared the breathing rate computed by our novel method with ground-truth measurements obtained via a traditional contact device (PowerLab/4SP from ADInstruments with an abdominal transducer). The results show high correlation between the two modalities. For the first time, we report a Fourier-based breathing rate computation method on a MWIR signal in the CO2 absorption band. The method opens the way for desktop, unobtrusive monitoring of an important vital sign, that is, breathing rate. It may find widespread applications in preventive medicine as well as sustained physiological monitoring of subjects suffering from chronic ailments.
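A minimal sketch of the Fourier step, assuming only a constant sampling rate; the search band and all names are illustrative choices, not the authors' parameters.

```python
import numpy as np

def breathing_rate(signal, fs):
    """Estimate breaths per minute from the mean thermal signal sampled
    at a constant rate fs (Hz), via the dominant Fourier component."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs > 0.1) & (freqs < 1.0)      # plausible breathing band
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak                      # cycles/s -> breaths/min

# Example: a 0.25 Hz quasi-periodic signal -> ~15 breaths per minute.
t = np.arange(0, 60, 1 / 30.0)
print(round(breathing_rate(np.sin(2 * np.pi * 0.25 * t), fs=30.0)))
```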
STN-OCR: A single Neural Network for Text Detection and Text Recognition
Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.
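The spatial-transformer step at the heart of such a model can be sketched as affine grid generation plus bilinear sampling. The single-channel NumPy version below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def affine_sample(img, theta, out_h, out_w):
    """Resample a crop from `img`: `theta` is a 2x3 affine matrix (as a
    localization network would predict) mapping normalized output
    coordinates to normalized input coordinates; the crop is produced
    by bilinear interpolation."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    grid = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ theta.T
    px = (grid[..., 0] + 1) * (W - 1) / 2     # back to pixel coordinates
    py = (grid[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(px).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, H - 2)
    wx, wy = px - x0, py - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0]
            + wx * (1 - wy) * img[y0, x0 + 1]
            + (1 - wx) * wy * img[y0 + 1, x0]
            + wx * wy * img[y0 + 1, x0 + 1])
```

With `theta = np.array([[1, 0, 0], [0, 1, 0]])` this returns a plain resampled copy; in the full model, a localization network predicts `theta` so that the crop lands on a text region, which the recognition network then reads.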
Cervical Cancer Diagnosis Using Random Forest Classifier With SMOTE and Feature Reduction Techniques
Cervical cancer is the fourth most common malignant disease in women worldwide. In most cases, cervical cancer symptoms are not noticeable at its early stages. There are many factors that increase the risk of developing cervical cancer, such as human papilloma virus, sexually transmitted diseases, and smoking. Identifying those factors and building a classification model to classify whether cases are cervical cancer or not is a challenging task. This study aims at using cervical cancer risk factors to build a classification model using the Random Forest (RF) classification technique with the synthetic minority oversampling technique (SMOTE) and two feature reduction techniques, recursive feature elimination (RFE) and principal component analysis (PCA). Most medical data sets are imbalanced because the number of patients is much smaller than the number of non-patients; SMOTE is used to address this imbalance in the data set used here. The data set consists of 32 risk factors and four target variables: Hinselmann, Schiller, Cytology, and Biopsy. After comparing the results, we find that combining the random forest classification technique with SMOTE improves classification performance.
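A compact sketch of the described pipeline (SMOTE to balance, PCA to reduce, random forest to classify), using a synthetic stand-in for the 32-feature risk-factor data; the hyperparameters are illustrative, not the study's.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data with 32 features, mimicking the shape of
# the risk-factor data set (values are invented).
X, y = make_classification(n_samples=800, n_features=32,
                           weights=[0.93], random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# SMOTE on the training split only, so no synthetic points leak into the test set.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
pca = PCA(n_components=10).fit(X_bal)                       # feature reduction
clf = RandomForestClassifier(random_state=0).fit(pca.transform(X_bal), y_bal)
print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```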
Transfer Learning for Named-Entity Recognition with Neural Networks
Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performance, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.
Alcohol's role in gastrointestinal tract disorders.
When alcohol is consumed, the alcoholic beverages first pass through the various segments of the gastrointestinal (GI) tract. Accordingly, alcohol may interfere with the structure as well as the function of GI-tract segments. For example, alcohol can impair the function of the muscles separating the esophagus from the stomach, thereby favoring the occurrence of heartburn. Alcohol-induced damage to the mucosal lining of the esophagus also increases the risk of esophageal cancer. In the stomach, alcohol interferes with gastric acid secretion and with the activity of the muscles surrounding the stomach. Similarly, alcohol may impair the muscle movement in the small and large intestines, contributing to the diarrhea frequently observed in alcoholics. Moreover, alcohol inhibits the absorption of nutrients in the small intestine and increases the transport of toxins across the intestinal walls, effects that may contribute to the development of alcohol-related damage to the liver and other organs.
Elementary Siphons of Petri Nets and Deadlock Control in FMS
Within the framework of Petri net theory, deadlock prevention policies based on elementary siphon control are often used to deal with deadlocks caused by the sharing of resources in flexible manufacturing systems (FMS), building on the theory of strict minimal siphons of an S3PR. Petri net modelling, P-invariant analysis and deadlock control are presented as tools for the modelling, structural efficiency analysis, control and investigation of FMSs when different deadlock prevention policies are implemented. We show an effective deadlock prevention policy for a special class of Petri nets based on elementary siphons. Both structural analysis and reachability graph analysis, together with simulation, are used for the analysis and control of the Petri nets. This work successfully applies Petri nets to deadlock analysis using the concept of elementary siphons, to the design of supervisors for supervisory control problems in FMS, and to simulation with a Petri net tool in MATLAB.
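The structural object at the center of this line of work is easy to state in code: a set of places S is a siphon when every transition that outputs into S also consumes from S (preset(S) ⊆ postset(S)). A minimal check on hypothetical data, not tied to any particular S3PR model:

```python
def is_siphon(places, pre, post):
    """Check the defining siphon property for a set of places.
    `pre[t]` / `post[t]` are the input / output place sets of transition t."""
    preset = {t for t in post if post[t] & places}   # transitions feeding S
    postset = {t for t in pre if pre[t] & places}    # transitions fed by S
    return bool(places) and preset <= postset

# Tiny two-place cycle: {p1, p2} is a siphon (and also a trap here).
pre = {"t1": {"p1"}, "t2": {"p2"}}
post = {"t1": {"p2"}, "t2": {"p1"}}
assert is_siphon({"p1", "p2"}, pre, post)
```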
Reactive stream processing for data-centric publish/subscribe
The Internet of Things (IoT) paradigm has given rise to a new class of applications wherein complex data analytics must be performed in real-time on large volumes of fast-moving and heterogeneous sensor-generated data. Such data streams are often unbounded and must be processed in a distributed and parallel manner to ensure timely processing and delivery to interested subscribers. Dataflow architectures based on event-based design have served well in such applications because events support asynchrony and loose coupling, and help build resilient, responsive and scalable applications. However, a unified programming model for event processing and distribution that can naturally compose the processing stages in a dataflow while exploiting the inherent parallelism available in the environment and computation is still lacking. To that end, we investigate the benefits of blending Reactive Programming with data distribution frameworks for building distributed, reactive, and high-performance stream-processing applications. Specifically, we present insights from our study integrating and evaluating Microsoft .NET Reactive Extensions (Rx) with the OMG Data Distribution Service (DDS), a standards-based publish/subscribe middleware suitable for demanding industrial IoT applications. Several key insights from both qualitative and quantitative evaluation of our approach are presented.
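The compositional style being advocated can be illustrated independently of Rx and DDS. Below is a plain-Python generator pipeline with the same window/aggregate/filter shape an Rx dataflow would have; the sensor values and stage names are invented for illustration.

```python
import statistics

def windows(stream, size):
    """Group an unbounded event stream into fixed-size batches."""
    buf = []
    for event in stream:
        buf.append(event)
        if len(buf) == size:
            yield list(buf)
            buf.clear()

# Stages compose as a dataflow: source -> window -> aggregate -> filter.
readings = iter([21.0, 21.4, 22.1, 35.0, 21.2, 21.3])   # fake sensor source
means = (statistics.mean(w) for w in windows(readings, 3))
alerts = (m for m in means if m > 25.0)                  # threshold filter
for a in alerts:
    print("alert: window mean", a)
```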
The Gulf Coast Onshore Offshore Experiment: Some preliminary results
The purpose of the Gulf Coast Onshore Offshore Experiment, which was carried out in cooperation with the United States Coast and Geodetic Survey, was to determine crustal structure of a section across the Gulf Coast Geosyncline where the sediments are near maximum thickness. There is a longstanding difficulty in conceiving of a mechanism by which the deep sedimentary basins are formed. There appear to be two possibilities: (1) deposition of sediments on a continental crust which was downbuckled to accommodate them by some unknown means; or (2) that wherever the sediments were formed initially, they were finally deposited in deep water. In the latter case one would expect that they would be underlain by a thin oceanic crust.
Corner Block List: An Effective and Efficient Topological Representation of Non-Slicing Floorplan
In this paper, a corner block list -- a new efficient topological representation for non-slicing floorplans -- is proposed, with applications to VLSI floorplanning and building block placement. Given a corner block list, it takes only linear time to construct the floorplan. Unlike the O-tree structure, which determines the exact floorplan based on given block sizes, the corner block list defines the floorplan independently of the block sizes. Thus, the structure is better suited for floorplan optimization with various size configurations of each block. Based on this new structure and the simulated annealing technique, an efficient floorplan algorithm is given. Soft blocks and the aspect ratio of the chip are taken into account in the simulated annealing process. The experimental results demonstrate that the algorithm is quite promising.
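A sketch of the representation as a data structure, with the semantics deliberately simplified; the field descriptions paraphrase the general idea and should not be read as the paper's exact definitions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CornerBlockList:
    """Simplified sketch of a corner block list's three components:
    S - the sequence of blocks, in the order they are packed from a corner;
    L - one orientation bit per inserted block (horizontal vs. vertical
        attachment at the current corner);
    T - a bit string recording the T-junction information of each insertion.
    Note that block sizes are deliberately NOT stored: the same list can be
    re-evaluated while an annealer resizes soft blocks."""
    S: List[str] = field(default_factory=list)
    L: List[int] = field(default_factory=list)
    T: List[int] = field(default_factory=list)
```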
Elastic Allocation of Docker Containers in Cloud Environments
Docker containers wrap up a piece of software together with everything it needs for execution and make it easy to run on any machine. For their execution in the Cloud, we need to identify an elastic set of virtual machines that can accommodate those containers while considering the diversity of their requirements. In this paper, we briefly describe our formulation of the Elastic provisioning of Virtual machines for Container Deployment (EVCD), which explicitly takes into account the heterogeneity of container requirements and virtual machine resources. Afterwards, we evaluate the EVCD formulation with the aim of demonstrating its flexibility in optimizing multiple QoS metrics.
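To make the problem shape concrete, here is a greedy first-fit heuristic for the same container-to-VM placement task. EVCD itself is an exact formulation, so this baseline is only an illustrative assumption, with invented resource figures.

```python
def first_fit(containers, vm_capacity):
    """Place each container on the first VM with enough residual (cpu, mem),
    opening a new VM only when none fits -- elasticity in its simplest form."""
    vms = []                                  # residual (cpu, mem) per VM
    placement = {}
    for name, (cpu, mem) in containers.items():
        for i, (c, m) in enumerate(vms):
            if cpu <= c and mem <= m:
                vms[i] = (c - cpu, m - mem)
                placement[name] = i
                break
        else:                                 # no running VM fits: scale out
            vms.append((vm_capacity[0] - cpu, vm_capacity[1] - mem))
            placement[name] = len(vms) - 1
    return placement, len(vms)

# Three containers with heterogeneous demands fit on two 4-CPU/4-GB VMs.
placement, n_vms = first_fit({"web": (2, 2), "db": (2, 4), "cache": (1, 1)},
                             vm_capacity=(4, 4))
```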
Automatic Performance Diagnosis and Tuning in Oracle
Performance tuning in modern database systems requires a great deal of expertise, is very time-consuming, and is often misdirected. Tuning attempts often lack a methodology that takes a holistic view of the database. The absence of historical diagnostic information for investigating performance issues at first occurrence exacerbates the whole tuning process, often requiring that problems be reproduced before they can be correctly diagnosed. In this paper we describe how Oracle overcomes these challenges and provides a way to perform automatic performance diagnosis and tuning. We define a new measure called ‘Database Time’ that provides a common currency to gauge the performance impact of any resource or activity in the database. We explain how the Automatic Database Diagnostic Monitor (ADDM) automatically diagnoses the bottlenecks affecting total database throughput and provides actionable recommendations to alleviate them. We also describe the types of performance measurements that are required to perform an ADDM analysis. Finally we show how ADDM plays a central role within Oracle 10g’s manageability framework to self-manage a database and provide a comprehensive tuning solution.
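The 'Database Time' idea can be illustrated with a toy aggregation: whatever the resource (CPU, I/O, locks), time spent in database calls is summed in one currency, and an ADDM-style diagnosis ranks components by their share of it. The sample data below is invented, not Oracle's instrumentation format.

```python
# Hypothetical activity samples: (component, seconds of database time).
samples = [("CPU", 42.0), ("IO wait", 130.5), ("lock wait", 27.5),
           ("CPU", 58.0)]

db_time = sum(s for _, s in samples)          # total Database Time
impact = {c: 0.0 for c, _ in samples}
for component, seconds in samples:
    impact[component] += seconds

# The biggest consumer of Database Time is the top-ranked bottleneck.
for component, seconds in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{component}: {100 * seconds / db_time:.1f}% of DB time")
```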