Sustainability of lifestyle changes following an intensive lifestyle intervention in insulin resistant adults: Follow-up at 2-years.
The objective of this study was to determine whether overweight, insulin resistant individuals who lost weight and improved cardiovascular risk factors during a 4-month lifestyle intervention could sustain these lifestyle changes in the long term. Seventy-nine insulin resistant adults were randomised to a control group or to either a modest or an intensive lifestyle intervention group for 4 months. Thereafter the two intervention groups were combined and all participants were followed up at 8, 12 and 24 months. Anthropometry, blood pressure, fasting glucose, lipids, insulin and aerobic fitness were measured and dietary intake was assessed. An interview was conducted to determine factors that participants perceived facilitated or hindered maintenance of healthy lifestyle habits. Seventy-two (91.1%), sixty-nine (87.3%) and sixty-two (78.5%) participants were retained at 8, 12 and 24 months, respectively. At 4 months the adjusted difference in weight between the modest and control groups was -3.4 kg (95% CI -5.4, -1.3; p=0.002) and between the intensive and control groups was -4.7 kg (-6.9, -2.4; p=0.0001). At 2 years there were no significant differences in weight when the initial 3 groups were compared or when the combined intervention group was compared with the control group. At 2 years, 64% of participants reported that more frequent follow-up would have helped them to maintain healthy lifestyle habits. Even intensive counselling for 4 months, with 4-monthly and then yearly monitoring, was not enough to maintain lifestyle changes sufficient to sustain weight loss. More frequent monitoring for an indefinite period was perceived by two-thirds of participants as necessary for them to maintain their initial lifestyle changes.
Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection
Facial landmark detection has received much attention in recent years, with two detection paradigms emerging: local approaches, where each facial landmark is modeled individually with the help of a shape model; and holistic approaches, where the face appearance and shape are modeled jointly. In recent years both of these approaches have shown great performance gains for facial landmark detection even under "in-the-wild" conditions of varying illumination, occlusion and image quality. However, their accuracy and robustness are very often reduced for profile faces, where face alignment is more challenging (e.g., no facial symmetry, less well-defined features and more variable background). In this paper, we present a new model, named Holistically Constrained Local Model (HCLM), which unifies local and holistic facial landmark detection by integrating head pose estimation, sparse-holistic landmark detection and dense-local landmark detection. We evaluate our new model on two publicly available datasets, 300-W and AFLW, as well as a newly introduced dataset, IJB-FL, which includes a larger proportion of profile face poses. Our HCLM model shows state-of-the-art performance, especially with extreme head poses.
AIVAT: A New Variance Reduction Technique for Agent Evaluation in Imperfect Information Games
Evaluating agent performance when outcomes are stochastic and agents use randomized strategies can be challenging when there is limited data available. The variance of sampled outcomes may make the simple approach of Monte Carlo sampling inadequate. This is the case for agents playing heads-up no-limit Texas hold'em poker, where man-machine competitions typically involve multiple days of consistent play by multiple players, but still can (and sometimes did) result in statistically insignificant conclusions. In this paper, we introduce AIVAT, a low-variance, provably unbiased value assessment tool that exploits an arbitrary heuristic estimate of state value, as well as the explicit strategy of a subset of the agents. Unlike existing techniques, which reduce the variance only from chance events or only consider game-ending actions, AIVAT reduces the variance both from choices by nature and from choices by players with a known strategy. The resulting estimator significantly outperforms previous state-of-the-art techniques. It was able to reduce the standard deviation of a Texas hold'em poker man-machine match by 85% and consequently requires 44 times fewer games to draw the same statistical conclusion. AIVAT enabled the first statistically significant AI victory against professional poker players in no-limit hold'em. Furthermore, the technique was powerful enough to produce statistically significant results versus individual players, not just an aggregate pool of players. We also used AIVAT to analyze a short series of AI-versus-human poker tournaments, producing statistically significant results with as few as 28 matches.
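AIVAT's details are in the paper, but its core ingredient, subtracting a correlated quantity whose expectation is known, is the classical control-variates idea. The sketch below is a hedged, generic illustration of that idea on a toy game (the game and all names are invented; this is not the paper's estimator): the corrected estimate stays unbiased while its variance drops sharply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a "game" outcome is a chance card c ~ Uniform{0..9}
# plus a noisy skill term. We want E[outcome].
def play(rng):
    c = rng.integers(0, 10)          # chance event with known distribution
    outcome = c + rng.normal(0, 1)   # payoff depends on chance plus noise
    return c, outcome

# Heuristic value of the chance event: here simply c itself.
# Its exact expectation under the known chance distribution is 4.5.
BASELINE_MEAN = 4.5

naive, corrected = [], []
for _ in range(10_000):
    c, outcome = play(rng)
    naive.append(outcome)
    # Control variate: subtract the heuristic value, add back its known mean.
    corrected.append(outcome - c + BASELINE_MEAN)

print("naive mean    :", np.mean(naive),     " std:", np.std(naive))
print("corrected mean:", np.mean(corrected), " std:", np.std(corrected))
```

Both estimators converge to the same mean, but the corrected one has far smaller standard deviation, which is the same reason AIVAT needs many fewer games for significance.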
Design and Analysis of a Totally Decoupled Flexure-Based XY Parallel Micromanipulator
In this paper, a concept of total decoupling is proposed for the design of a flexure parallel micromanipulator with both input and output decoupling. Based on flexure hinges, the design procedure for an XY totally decoupled parallel stage (TDPS) is presented, which features both decoupled actuation and decoupled output motion. By employing (double) compound parallelogram flexures and a compact displacement amplifier, a class of novel XY TDPS with simple and symmetric structures is enumerated, and one example is chosen for further analysis. Kinematic and dynamic modeling of the manipulator is conducted by resorting to compliance and stiffness analysis based on the matrix method, and the models are validated by finite-element analysis (FEA). In view of predefined performance constraints, dimension optimization is carried out by means of particle swarm optimization, and a prototype of the optimized stage is fabricated for performance tests. Both FEA and experimental studies validate the decoupling property of the XY stage, which is expected to be adopted in micro-/nanoscale manipulation.
Experimental personality designs: analyzing categorical by continuous variable interactions.
Theories hypothesizing interactions between a categorical and one or more continuous variables are common in personality research. Traditionally, such hypotheses have been tested using nonoptimal adaptations of analysis of variance (ANOVA). This article describes an alternative multiple regression-based approach that has greater power and protects against spurious conclusions concerning the impact of individual predictors on the outcome in the presence of interactions. We discuss the structuring of the regression equation, the selection of a coding system for the categorical variable, and the importance of centering the continuous variable. We present in detail the interpretation of the effects of both individual predictors and their interactions as a function of the coding system selected for the categorical variable. We illustrate two- and three-dimensional graphical displays of the results and present methods for conducting post hoc tests following a significant interaction. The application of multiple regression techniques is illustrated through the analysis of two data sets. We show how multiple regression can produce all of the information provided by traditional but less optimal ANOVA procedures.
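As a minimal illustration of the approach described above (hedged: the variable names and data are invented, and this is generic statsmodels usage rather than the article's own analysis), the sketch below fits a regression with a dummy-coded categorical predictor, a centered continuous predictor, and their interaction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# Hypothetical data: a two-level categorical factor and a continuous trait.
df = pd.DataFrame({
    "group": rng.choice(["control", "treatment"], size=n),
    "trait": rng.normal(50, 10, size=n),
})
# Simulate an outcome with a group-by-trait interaction.
slope = np.where(df["group"] == "treatment", 0.8, 0.2)
df["outcome"] = 10 + slope * df["trait"] + rng.normal(0, 5, size=n)

# Center the continuous predictor so lower-order terms are interpretable
# as effects at the mean of the trait, as the article recommends.
df["trait_c"] = df["trait"] - df["trait"].mean()

# Dummy coding for the categorical variable; the interaction term tests
# whether the trait's slope differs between groups.
model = smf.ols("outcome ~ C(group) * trait_c", data=df).fit()
print(model.summary())
```

With centering, the `C(group)` coefficient is the group difference at the mean trait level, and the interaction coefficient is the between-group difference in slopes, which is exactly the quantity such theories hypothesize about.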
Characteristics of online and offline health information seekers and factors that discriminate between them.
An increasing number of individuals are using the internet to meet their health information needs; however, little is known about the characteristics of online health information seekers and whether they differ from individuals who search for health information from offline sources. Researchers must examine the primary characteristics of online and offline health information seekers in order to better recognize their needs, highlight improvements that may be made in the arena of internet health information quality and availability, and understand the factors that discriminate between those who seek online vs. offline health information. This study examines factors that differentiate between online and offline health information seekers in the United States. Data for this study are from a subsample (n = 385) of individuals from the 2000 General Social Survey; the subsample includes those respondents who were asked the Internet and health-seeking module questions. Similar to prior research, the results of this study show that the majority of both online and offline health information seekers report reliance upon health care professionals as a source of health information. This study is unique in that the results illustrate that several key factors (age, income, and education) discriminate between US online and offline health information seekers; this suggests that general "digital divide" characteristics influence where health information is sought. In addition to traditional digital divide factors, those who are healthier and happier are less likely to look exclusively offline for health information. Implications of these findings are discussed in terms of the digital divide and the patient-provider relationship.
Tutorial on Probabilistic Topic Modeling: Additive Regularization for Stochastic Matrix Factorization
Probabilistic topic modeling of text collections is a powerful tool for statistical text analysis. In this tutorial we introduce a novel non-Bayesian approach called Additive Regularization of Topic Models (ARTM). ARTM is free of redundant probabilistic assumptions and provides simple inference for many combined and multi-objective topic models.
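To make the additive-regularization idea concrete, here is a hedged, heavily simplified rendering (illustrative only, not the tutorial's reference implementation): a PLSA-style EM loop whose M-step adds the regularizer's gradient term to the expected topic-word counts and clips at zero, shown with a simple smoothing regularizer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy term-document count matrix: |W| words x |D| documents.
n_wd = rng.integers(0, 5, size=(30, 10)).astype(float)
W, D, T = n_wd.shape[0], n_wd.shape[1], 3

phi = rng.dirichlet(np.ones(W), size=T).T      # p(w|t), shape W x T
theta = rng.dirichlet(np.ones(T), size=D).T    # p(t|d), shape T x D
beta = 0.1  # coefficient of a smoothing regularizer R = beta * sum(log phi)

for _ in range(50):
    # E-step: posterior p(t|d,w) proportional to phi_wt * theta_td.
    p_wdt = phi[:, :, None] * theta[None, :, :]          # W x T x D
    p_wdt /= p_wdt.sum(axis=1, keepdims=True) + 1e-12
    n_wt = (n_wd[:, None, :] * p_wdt).sum(axis=2)        # expected counts, W x T
    n_td = (n_wd[:, None, :] * p_wdt).sum(axis=0)        # expected counts, T x D
    # M-step with additive regularization: add phi * dR/dphi to the counts
    # and clip at zero. For R = beta * sum(log phi), phi * dR/dphi = beta.
    phi = np.maximum(n_wt + beta, 0)
    phi /= phi.sum(axis=0, keepdims=True)
    theta = np.maximum(n_td, 0)
    theta /= theta.sum(axis=0, keepdims=True)

print("top words of topic 0:", np.argsort(-phi[:, 0])[:5])
```

Swapping in a different regularizer R only changes the gradient term added in the M-step, which is what makes combining multiple objectives straightforward in this framework.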
Inhaled mannitol improves lung function in cystic fibrosis.
BACKGROUND The airways in patients with cystic fibrosis (CF) are characterized by the accumulation of tenacious, dehydrated mucus that is a precursor for chronic infection, inflammation, and tissue destruction. The clearance of mucus is an integral component of daily therapy. Inhaled mannitol is an osmotic agent that increases the water content of the airway surface liquid, and improves the clearance of mucus with the potential to improve lung function and respiratory health. To this end, this study examined the efficacy and safety of therapy with inhaled mannitol over a 2-week period. METHODS This was a randomized, double-blind, placebo-controlled, crossover study. Thirty-nine subjects with mild-to-moderate CF lung disease inhaled 420 mg of mannitol or placebo twice daily for 2 weeks. Following a 2-week washout period, subjects were entered into the reciprocal treatment arm. Lung function, respiratory symptoms, quality of life, and safety were assessed. RESULTS Mannitol treatment increased FEV1 from baseline by a mean of 7.0% (95% confidence interval [CI], 3.3 to 10.7), compared to 0.3% with placebo (95% CI, -3.4 to 4.0; p < 0.001). The absolute improvement with mannitol therapy was 121 mL (95% CI, 56.3 to 185.7), significantly more than with placebo (0 mL; 95% CI, -64.7 to 64.7). The forced expiratory flow in the middle half of the FVC increased by 15.5% (95% CI, -6.5 to 24.6), compared to an increase of 0.7% with placebo (95% CI, -8.3 to 9.7; p < 0.02). The safety profile of mannitol was adequate, and no serious adverse events related to treatment were observed. CONCLUSIONS Inhaled mannitol treatment over a period of 2 weeks significantly improved lung function in patients with CF. Mannitol therapy was safe and well tolerated. TRIAL REGISTRATION (ClinicalTrials.gov) Identifier: NCT00455130.
Wisdom and the Senses: The Way of Creativity
Erikson explores the crucial role played by the physical senses at every stage of psychological growth, from birth to old age, and draws parallels between the creation of art as we usually define it and the creation of the self.
Shock-Based Causal Inference in Corporate Finance and Accounting Research
We study shock-based methods for credible causal inference in corporate finance research. We focus on corporate governance research, survey 13,461 papers published between 2001 and 2011 in 22 major accounting, economics, finance, law, and management journals, and identify 863 empirical studies in which corporate governance is associated with firm value or other characteristics. We classify the methods used in these studies and assess whether they support a causal link between corporate governance and firm value or another outcome. Only a small minority of studies have convincing causal inference strategies. The convincing strategies largely rely on external shocks, usually from legal rules, often called "natural experiments". We examine the 74 shock-based papers and provide a guide to shock-based research design, which stresses the common features across different designs and the value of using combined designs.
Effect of Homocysteine-Lowering Nutrients on Blood Lipids: Results from Four Randomised, Placebo-Controlled Studies in Healthy Humans
BACKGROUND Betaine (trimethylglycine) lowers plasma homocysteine, a possible risk factor for cardiovascular disease. However, studies in renal patients and in obese individuals who are on a weight-loss diet suggest that betaine supplementation raises blood cholesterol; data in healthy individuals are lacking. Such an effect on cholesterol would counteract any favourable effect on homocysteine. We therefore investigated the effect of betaine, of its precursor choline in the form of phosphatidylcholine, and of the classical homocysteine-lowering vitamin folic acid on blood lipid concentrations in healthy humans. METHODS AND FINDINGS We measured blood lipids in four placebo-controlled, randomised intervention studies that examined the effect of betaine (three studies, n = 151), folic acid (two studies, n = 75), and phosphatidylcholine (one study, n = 26) on plasma homocysteine concentrations. We combined blood lipid data from the individual studies and calculated a weighted mean change in blood lipid concentrations relative to placebo. Betaine supplementation (6 g/d) for 6 wk increased blood LDL cholesterol concentrations by 0.36 mmol/l (95% confidence interval: 0.25-0.46), and triacylglycerol concentrations by 0.14 mmol/l (0.04-0.23) relative to placebo. The ratio of total to HDL cholesterol increased by 0.23 (0.14-0.32). Concentrations of HDL cholesterol were not affected. Doses of betaine lower than 6 g/d also raised LDL cholesterol, but these changes were not statistically significant. Further, the effect of betaine on LDL cholesterol was already evident after 2 wk of intervention. Phosphatidylcholine supplementation (providing approximately 2.6 g/d of choline) for 2 wk increased triacylglycerol concentrations by 0.14 mmol/l (0.06-0.21), but did not affect cholesterol concentrations. Folic acid supplementation (0.8 mg/d) had no effect on lipid concentrations. CONCLUSIONS Betaine supplementation increased blood LDL cholesterol and triacylglycerol concentrations in healthy humans, which agrees with the limited previous data. The adverse effects on blood lipids may undo the potential benefits for cardiovascular health of betaine supplementation through homocysteine lowering. In our study phosphatidylcholine supplementation slightly increased triacylglycerol concentrations in healthy humans. Previous studies of phosphatidylcholine and blood lipids showed no clear effect. Thus the effect of phosphatidylcholine supplementation on blood lipids remains inconclusive, but is probably not large. Folic acid supplementation does not seem to affect blood lipids and therefore remains the preferred treatment for lowering of blood homocysteine concentrations.
A practical traffic management system for integrated LTE-WiFi networks
Mobile operators are leveraging WiFi to relieve the pressure posed on their networks by the surging bandwidth demand of applications. However, operators often lack intelligent mechanisms to control the way users access their WiFi networks. This lack of sophisticated control creates poor network utilization, which in turn degrades the quality of experience (QoE). To meet user traffic demands, it is evident that operators need solutions that optimally balance user traffic across cellular and WiFi networks. Motivated by the lack of practical solutions in this space, we design and implement ATOM, an end-to-end system for adaptive traffic offloading for WiFi-LTE deployments. ATOM has two novel components: (i) a network interface selection algorithm that maps user traffic across WiFi and LTE to optimize user QoE, and (ii) an interface switching service that seamlessly redirects ongoing user sessions in a cost-effective and standards-compatible manner. Our evaluations on a real LTE-WiFi testbed using YouTube traffic reveal that ATOM reduces video stalls by 3-4 times compared to naive solutions.
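As a rough illustration of the kind of interface-selection logic the first component performs (a toy sketch under invented assumptions; ATOM's actual algorithm, QoE model, and inputs are specified in the paper, not here), consider a greedy assignment of flows to whichever network currently offers the better estimated quality:

```python
from dataclasses import dataclass

@dataclass
class Network:
    name: str
    capacity_mbps: float
    load_mbps: float = 0.0

    def est_quality(self, flow_mbps: float) -> float:
        # Crude QoE proxy (assumption): fraction of the flow's demand the
        # network could still serve; 1.0 means no expected degradation.
        free = max(self.capacity_mbps - self.load_mbps, 0.0)
        return min(free / flow_mbps, 1.0) if flow_mbps > 0 else 1.0

def assign(flows_mbps, networks):
    """Greedily map each flow to the interface with the best QoE estimate."""
    mapping = []
    for f in sorted(flows_mbps, reverse=True):   # place big flows first
        best = max(networks, key=lambda n: n.est_quality(f))
        best.load_mbps += f
        mapping.append((f, best.name))
    return mapping

nets = [Network("LTE", capacity_mbps=40), Network("WiFi", capacity_mbps=25)]
print(assign([8, 5, 5, 3, 12, 2], nets))
```

A real system must additionally handle session continuity during switches, which is exactly what ATOM's second component addresses.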
The Use of Attack and Protection Trees to Analyze Security for an Online Banking System
Online banking has become increasingly important to the profitability of financial institutions as well as adding convenience for their customers. As the number of customers using online banking increases, online banking systems are becoming more desirable targets for criminals to attack. To maintain their customers' trust and confidence in the security of their online bank accounts, financial institutions must identify how attackers compromise accounts and develop methods to protect them. Attack trees and protection trees are a cost-effective way to do this. Attack trees highlight the weaknesses in a system, and protection trees provide a methodical means of mitigating these weaknesses. In this paper, a notional online banking system is analyzed and protection solutions are proposed for varying budgets.
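To sketch what analyzing an attack tree can mean computationally (a minimal illustration with an invented tree, not the paper's notional banking model), the snippet below computes the cheapest attack over an AND/OR tree: an OR node costs the minimum of its children, an AND node the sum.

```python
# Each node is ("leaf", cost) or (op, [children]) with op in {"AND", "OR"}.
def min_attack_cost(node):
    if node[0] == "leaf":
        return node[1]
    op, children = node
    costs = [min_attack_cost(c) for c in children]
    return sum(costs) if op == "AND" else min(costs)

# Hypothetical fragment of an online-banking attack tree.
tree = ("OR", [
    ("AND", [("leaf", 50),     # phish the customer's credentials
             ("leaf", 200)]),  # bypass the one-time password
    ("leaf", 5000),            # compromise the bank's server directly
])
print("cheapest attack costs:", min_attack_cost(tree))  # -> 250
```

Protection trees invert the perspective: given a defense budget, the analogous traversal selects the cheapest set of mitigations that covers the exposed leaves.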
Effects of a Circuit Training Program on Muscular and Cardiovascular Endurance and their Maintenance in Schoolchildren
The purpose of this study was to evaluate the effects of a circuit training program, along with a maintenance program, on muscular and cardiovascular endurance in children in a physical education setting. Seventy-two children aged 10-12 years from four different classes were randomly grouped into either an experimental group (n = 35) or a control group (n = 37) (two classes per group). After an eight-week development program carried out twice a week and a four-week detraining period, the experimental group performed a four-week maintenance program once a week. The program included one circuit of eight stations of 15/45 to 35/25 seconds of work/rest, performed twice. Abdominal muscular endurance (sit-ups in 30 seconds test), upper-limb muscular endurance (bent-arm hang test), and cardiovascular endurance (20-m endurance shuttle run test) were measured at the beginning and end of the development program, and at the end of the maintenance program. After the development program, muscular and cardiovascular endurance increased significantly in the experimental group (p < 0.05). The gains obtained remained after the maintenance program. The respective values did not change in the control group (p > 0.05). The results showed that the circuit training program was effective in increasing and maintaining both muscular and cardiovascular endurance among schoolchildren. This could help physical education teachers design programs that permit students to maintain adequate muscular and cardiovascular endurance levels.
A 1-pJ/bit, 10-Gb/s/ch Forwarded-Clock Transmitter Using a Resistive Feedback Inverter-Based Driver in 65-nm CMOS
An energy-efficient forwarded-clock transmitter that offers scalable pre-emphasis equalization and output voltage swing is presented. A resistive-feedback inverter-based driver is used to overcome the drawbacks of conventional drivers. Moreover, a half-rate clocking structure is employed in order to minimize power consumption. The proposed transmitter consists of two data lanes, a shared clock lane, and a global impedance regulator. The prototype chip is fabricated in 65-nm CMOS technology and occupies an active area of 0.15 mm². The proposed transmitter achieves a 100-250 mV single-ended swing and exhibits an energy efficiency of 1 pJ/bit at a per-pin data rate of 10 Gb/s.
Supermarket commodity identification using convolutional neural networks
In recent years, with the rapid development of deep learning, great success has been achieved in the field of image recognition. In this paper, we applied convolutional neural networks (CNNs) to supermarket commodity identification, contributing to the study of this problem. Unlike QR-code-based identification of supermarket commodities, our approach applies a CNN directly to collected images of the commodities, which makes it fast and contact-free. In this paper, we mainly did the following work: 1. collected a small dataset of supermarket goods; 2. built different convolutional neural network frameworks in Caffe and trained them on the dataset; 3. improved training by fine-tuning the pre-trained model.
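For readers unfamiliar with fine-tuning, the hedged sketch below shows the general recipe in PyTorch (the paper itself used Caffe; the dataset path, class count, and hyperparameters here are invented): load a network pre-trained on a large corpus, replace its classifier head for the commodity classes, and train with a small learning rate.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 20  # hypothetical number of supermarket commodity categories

# Standard ImageNet preprocessing for the pre-trained backbone.
tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("commodity_images/train", tfm)  # assumed layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

# Small learning rate: nudge the pre-trained weights rather than retrain them.
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

Fine-tuning works well for small datasets like the one collected here because the pre-trained features already encode generic visual structure, so only the task-specific head needs substantial adaptation.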
ULA-OP: an advanced open platform for ultrasound research
The experimental testing of novel ultrasound (US) investigation methods can be made difficult by the lack of flexibility of commercial US machines. At best, these machines only provide beamformed radiofrequency or demodulated echo-signals for acquisition by an external PC. More flexibility is achieved in high-level research platforms, but these are typically characterized by high cost and large size. This paper presents a powerful but portable US system, specifically developed for research purposes. The system design is based on high-level commercial integrated circuits to obtain maximum flexibility and wide data access with a minimum of electronics. Preliminary applications involving nonstandard imaging transmit/receive strategies and simultaneous B-mode and multigate spectral Doppler mode are discussed.
The coming paradigm shift in forensic identification science.
Converging legal and scientific forces are pushing the traditional forensic identification sciences toward fundamental change. The assumption of discernible uniqueness that resides at the core of these fields is weakened by evidence of errors in proficiency testing and in actual cases. Changes in the law pertaining to the admissibility of expert evidence in court, together with the emergence of DNA typing as a model for a scientifically defensible approach to questions of shared identity, are driving the older forensic sciences toward a new scientific paradigm.
Three-year results from a randomised controlled trial comparing prostheses supported by 5-mm long implants or by longer implants in augmented bone in posterior atrophic edentulous jaws.
PURPOSE To evaluate whether 5-mm short dental implants could be an alternative to augmentation with anorganic bovine bone and placement of at least 10-mm long implants in posterior atrophic jaws. MATERIALS AND METHODS Fifteen patients with bilateral atrophic mandibles (5 mm to 7 mm bone height above the mandibular canal) and 15 patients with bilateral atrophic maxillae (4 mm to 6 mm bone height below the maxillary sinus), and bone thickness of at least 8 mm, were randomised according to a split-mouth design to receive one to three 5-mm short implants or at least 10-mm long implants in augmented bone. Mandibles were vertically augmented with interpositional bone blocks and maxillary sinuses with particulated bone via a lateral window. Implants were placed after 4 months, submerged, and loaded after another 4 months with provisional prostheses. Four months later, definitive provisionally cemented prostheses were delivered. Outcome measures were: prosthesis and implant failures, any complications, and peri-implant marginal bone level changes. RESULTS In five augmented mandibles, the planned 10-mm long implants could not be placed and shorter implants (7 mm and 8.5 mm) had to be used instead. Three years after loading, two patients, one treated in the mandible and one in the maxilla, dropped out. Three prostheses (1 mandibular and 2 maxillary) failed in the short implant group versus none in the long implant group. In mandibles, one long implant failed versus two short implants in 1 patient. In maxillae, one long implant failed versus three short implants in 2 patients. There were no statistically significant differences in failures. Eight patients had 13 complications at short implants (1 patient accounted for 6 complications) and 11 patients had 13 complications at long implants. There were no statistically significant differences in complications (P = 0.63; difference = 0.10; 95% CI -0.22 to 0.42). Three years after loading, patients with mandibular implants lost on average 1.44 mm of peri-implant marginal bone at short implants and 1.63 mm at long implants. This difference was not statistically significant (difference = 0.24 mm; 95% CI -0.01 to 0.49; P = 0.059). In maxillae, patients lost on average 1.02 mm at short implants and 1.54 mm at long implants. This difference was statistically significant (difference = 0.41 mm; 95% CI 0.21 to 0.60; P = 0.001). CONCLUSIONS Three years after loading, 5-mm short implants achieved results similar to longer implants in augmented bone. Short implants might be a preferable choice to vertical bone augmentation, especially in mandibles, since the treatment is faster and cheaper; however, there are still insufficient data on the long-term prognosis of short implants.
Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin
In the Bitcoin system, participants are rewarded for solving cryptographic puzzles. In order to receive more consistent rewards over time, some participants organize mining pools and split the rewards from the pool in proportion to each participant's contribution. However, several attacks threaten the ability to participate in pools. The block withholding (BWH) attack makes the pool reward system unfair by letting malicious participants receive unearned wages while only pretending to contribute work. When two pools launch BWH attacks against each other, they encounter the miner's dilemma: in a Nash equilibrium, the revenue of both pools is diminished. In another attack called selfish mining, an attacker can unfairly earn extra rewards by deliberately generating forks. In this paper, we propose a novel attack called a fork after withholding (FAW) attack. FAW is not just another attack. The reward for an FAW attacker is always equal to or greater than that for a BWH attacker, and the attack is usable up to four times more often per pool than a BWH attack. When considering multiple pools, the current state of the Bitcoin network, the extra reward for an FAW attack is about 56% more than that for a BWH attack. Furthermore, when two pools execute FAW attacks on each other, the miner's dilemma may not hold: under certain circumstances, the larger pool can consistently win. More importantly, an FAW attack, while using intentional forks, does not suffer from the practicality issues of selfish mining. We also discuss partial countermeasures against the FAW attack, but finding a cheap and efficient countermeasure remains an open problem. As a result, we expect to see FAW attacks among mining pools.
Automatic feature localisation with constrained local models
We present an efficient and robust method of locating a set of feature points in an object of interest. From a training set we construct a joint model of the appearance of each feature together with their relative positions. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images, and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in Active Appearance Models (AAMs) [T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, in: Proceedings of the 5th European Conference on Computer Vision 1998, vol. 2, Freiburg, Germany, 1998]. However, in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to a wide range of data sets, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on photographs of human faces, magnetic resonance (MR) images of the brain and a set of dental panoramic tomograms. We also show improved tracking performance on a challenging set of in-car video sequences.
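The iterative fitting loop described above can be summarized in simplified Python (a heavily hedged sketch: the `shape_model` interface below is an invented placeholder, real CLMs use trained joint shape-appearance models, and plain cross-correlation stands in for the learned response computation):

```python
import numpy as np
from scipy.signal import correlate2d

def fit_clm(image, shape_model, n_iters=10, search=8):
    """Toy CLM loop. `shape_model` is an assumed interface providing:
    points(params), templates(params), and fit(points) -> params."""
    params = np.zeros(shape_model.n_params)
    for _ in range(n_iters):
        pts = shape_model.points(params)        # current landmark estimates
        tmpls = shape_model.templates(params)   # one template per landmark
        peaks = []
        for (x, y), t in zip(pts, tmpls):
            th, tw = t.shape
            # Cut a search window around the current estimate.
            y0, x0 = int(y) - search - th // 2, int(x) - search - tw // 2
            win = image[y0:y0 + 2 * search + th, x0:x0 + 2 * search + tw]
            # Response image: correlate the template over the window.
            resp = correlate2d(win, t, mode="valid")
            dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
            peaks.append((x0 + dx + tw // 2, y0 + dy + th // 2))
        # Constrain the response maxima with the shape model: project them
        # back onto the space of plausible shapes.
        params = shape_model.fit(np.array(peaks))
    return shape_model.points(params)
```

The key design point is the last step: individual detections are never trusted on their own, but are jointly regularised by the learned shape model, which is what makes the method robust to locally ambiguous features.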
Use Cases, Requirements, and Design Considerations for 5G V2X
The ultimate goal of next-generation vehicle-to-everything (V2X) communication systems is enabling accident-free cooperative automated driving that uses the available roadway efficiently. To achieve this goal, the communication system will need to enable a diverse set of use cases, each with a specific set of requirements. We discuss the main use case categories, analyze their requirements, and compare them against the capabilities of currently available communication technologies. Based on the analysis, we identify a gap and point toward a possible system design for 5G V2X that could close it. Furthermore, we discuss an architecture for the 5G V2X radio access network that incorporates diverse communication technologies, including cellular systems in centimeter and millimeter wave bands, IEEE 802.11p, and vehicular visible light communications. Finally, we discuss the role of future 5G V2X systems in enabling more efficient vehicular transportation: from improved traffic flow through reduced inter-vehicle spacing on highways and coordinated intersections in cities (the cheapest way to increase road capacity), to automated smart parking (no more hunting for a parking space!), ultimately enabling seamless end-to-end personal mobility.
Web Personalization Techniques for E-commerce
With the advent of the Internet, there has been a dramatic growth in the data available on the World Wide Web. To reduce information overload and create customer loyalty, E-commerce businesses use Web personalization, a significant tool that provides them with important competitive advantages. Despite the growing interest in personalized systems, implementing such a system is difficult, because many business-critical issues must be considered before the appropriate personalization techniques can be identified. In this study, online businesses are classified into a number of categories. Then, the personalization techniques currently used in E-commerce businesses are described. Finally, guidelines for selecting suitable personalization techniques for applications in each E-commerce business domain are proposed. The results of the study suggest that both customer-driven and business-driven personalization systems should be promoted on a site in order to increase customer satisfaction.
SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints
This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present SoPhie, an interpretable framework based on Generative Adversarial Networks (GANs) that leverages two sources of information: the path history of all the agents in a scene, and the scene context, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged; previous work has not succeeded in jointly modeling physical and social interactions. Our approach blends a social attention mechanism with a physical attention mechanism: the physical attention helps the model learn where to look in a large scene and extract the parts of the image most salient to the path, while the social attention component aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of GAN training to generate more realistic samples and to capture the uncertain nature of future paths by modeling their distribution. Together, these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several trajectory forecasting benchmarks.
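At their core, both attention mechanisms mentioned above reduce to a learned softmax weighting over candidate features. Below is a minimal, hedged sketch of such soft attention (generic, not SoPhie's exact architecture; all dimensions and names are invented):

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Score each candidate feature against a query, then softmax-pool."""
    def __init__(self, query_dim, feat_dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(query_dim + feat_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, query, feats):
        # query: (B, Dq); feats: (B, N, Df), e.g. N neighbors or N image regions
        q = query.unsqueeze(1).expand(-1, feats.size(1), -1)
        logits = self.score(torch.cat([q, feats], dim=-1)).squeeze(-1)  # (B, N)
        weights = torch.softmax(logits, dim=-1)                         # attention
        pooled = (weights.unsqueeze(-1) * feats).sum(dim=1)             # (B, Df)
        return pooled, weights

attn = SoftAttention(query_dim=32, feat_dim=16)
agent_state = torch.randn(4, 32)        # hypothetical agent encoding
neighbor_feats = torch.randn(4, 6, 16)  # hypothetical neighbor encodings
context, w = attn(agent_state, neighbor_feats)
print(context.shape, w.shape)  # torch.Size([4, 16]) torch.Size([4, 6])
```

The same pooling applies whether the candidates are neighboring agents (social attention) or spatial image features (physical attention); the learned weights also make the model's focus inspectable, which is what supports the "interpretable" claim.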
Androgenetic alopecia: an evidence-based treatment update.
BACKGROUND Androgenetic alopecia (AGA) is one of the most common chronic problems seen by dermatologists worldwide. It is characterized by progressive hair loss, especially of scalp hair, and has distinctive patterns of loss in women versus men, but in both genders the central scalp is most severely affected. It often begins around puberty and is known to affect self-esteem and the individual's quality of life. In contrast to the high prevalence of AGA, approved therapeutic options are limited. In addition to the scarce pharmacologic treatments, there are numerous nonprescription products claimed to be effective in restoring hair in androgenetic alopecia. OBJECTIVES The purpose of this paper is to review published medical and non-medical treatments for male and female AGA using the American College of Physicians evidence assessment methods. MEDLINE, EMBASE and the Cochrane Library were searched for systematic reviews, randomized controlled trials, open studies, case reports and relevant studies of the treatment of male and female AGA. The relevant articles were classified according to grade and level of evidence. RESULTS The medical treatments with the best level of evidence for efficacy and safety for male AGA are oral finasteride and topical minoxidil solution. For female AGA, topical minoxidil solution appears to be the most effective and safe treatment. The medical treatments corresponding to the next level of evidence quality are some commonly used therapeutic non-FDA-approved options, including oral and topical anti-hormonal treatments. Surgical treatment by follicular unit hair transplantation is an option in cases that have failed medical treatment, although outcomes vary widely. LIMITATIONS Some articles, especially those concerning traditional herbs claimed to promote hair regrowth, were published in non-English, local journals. CONCLUSIONS An assessment of the evidence quality of current publications indicates that oral finasteride (for men only) and topical minoxidil (for men and women) are the best treatments for AGA.
Optimal Choice for Number of Strands in a Litz-Wire Transformer Winding
The number of strands to minimize loss in a litz-wire transformer winding is determined. With fine stranding, the ac resistance factor decreases, but dc resistance increases because insulation occupies more of the window area. A power law to model insulation thickness is combined with standard analysis of proximity-effect losses.
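The trade-off described, an AC resistance factor that falls with finer stranding while DC resistance rises as insulation consumes window area, can be explored numerically. The sketch below is illustrative only: the power-law insulation model follows the abstract, but every constant and the simplified proximity-loss form are stand-ins, not the paper's derivation.

```python
import numpy as np

# All constants below are illustrative stand-ins, not values from the paper.
WINDOW = 1.0e-5            # window area available to the bundle, m^2
RHO = 1.7e-8               # copper resistivity, ohm*m
LENGTH = 10.0              # winding length, m
K_PROX = 1.0e16            # lumped proximity-loss constant (assumed)
C_INS, X_INS = 0.08, 0.85  # insulation power law: t = C_INS * d**X_INS

def strand_diameter(n):
    """Solve n * pi/4 * (d + 2*t(d))^2 = WINDOW for d by bisection."""
    lo, hi = 1e-7, 1e-2
    for _ in range(60):
        d = 0.5 * (lo + hi)
        outer = d + 2 * C_INS * d**X_INS
        if n * np.pi / 4 * outer**2 > WINDOW:
            hi = d
        else:
            lo = d
    return 0.5 * (lo + hi)

def total_resistance(n):
    d = strand_diameter(n)
    r_dc = RHO * LENGTH / (n * np.pi / 4 * d**2)   # rises with finer stranding
    f_ac = 1 + K_PROX * (n * d**3) ** 2            # toy proximity-effect factor
    return r_dc * f_ac

counts = np.arange(1, 2001)
best = min(counts, key=total_resistance)
print("loss-minimizing strand count (toy model):", best)
```

The qualitative behavior matches the abstract: an interior minimum exists because proximity losses dominate at coarse stranding while the insulation's share of the window dominates at very fine stranding.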
Hierarchical Clustering: Objective Functions and Algorithms
Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a ‘good’ hierarchical clustering is one that minimizes a particular cost function [21]. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining ‘good’ objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a ‘natural’ ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem, and design algorithms for this scenario.
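For concreteness, Dasgupta's cost for a similarity-based hierarchy can be computed directly from its definition: each pair of leaves contributes its similarity weighted by the number of leaves under the pair's least common ancestor. A small hedged sketch (the binary-tuple tree encoding is invented for illustration):

```python
def leaves(tree):
    """A tree is either a leaf label or a pair (left, right)."""
    if not isinstance(tree, tuple):
        return [tree]
    return leaves(tree[0]) + leaves(tree[1])

def dasgupta_cost(tree, sim):
    """Cost = sum over pairs {i,j} of sim(i,j) * |leaves(lca(i,j))|,
    computed by charging each pair at the node where it is split."""
    if not isinstance(tree, tuple):
        return 0.0
    left, right = leaves(tree[0]), leaves(tree[1])
    n_here = len(left) + len(right)
    split_sim = sum(sim.get((min(i, j), max(i, j)), 0.0)
                    for i in left for j in right)
    return (n_here * split_sim
            + dasgupta_cost(tree[0], sim) + dasgupta_cost(tree[1], sim))

# Toy instance: points 0,1 are similar, 2,3 are similar, cross pairs are not.
sim = {(0, 1): 0.9, (2, 3): 0.8, (0, 2): 0.1, (0, 3): 0.1, (1, 2): 0.1, (1, 3): 0.1}
good = ((0, 1), (2, 3))   # separates the dissimilar pairs at the top
bad = (((0, 2), 1), 3)    # splits a similar pair high in the tree
print(dasgupta_cost(good, sim), "<", dasgupta_cost(bad, sim))  # 5.0 < 7.2
```

This toy example exhibits the desirable property mentioned above: the hierarchy matching the natural ground-truth grouping attains the lower cost.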
Two Axes Solar Tracker Based on Solar Maps, Controlled by a Low-Power Microcontroller
There are currently several solutions for two-axis solar tracking systems using electromechanical devices, in which a controller detects the Sun's apparent position and steers the structure supporting the panels toward the Sun by driving motors. This work studies a two-axis solar tracking solution based on solar maps, which can predict the exact apparent position of the Sun from the location's latitude, thereby avoiding the need for sensors or guidance systems. To accomplish this, a low-power microcontroller, suitably programmed, is used to control two electric motors so that the structure supporting the panels is always oriented towards the Sun.
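A solar-map approach boils down to computing the Sun's elevation and azimuth from the date, time, and latitude using standard astronomical approximations. The hedged sketch below uses textbook formulas with simplifications (no equation-of-time or longitude correction), rather than the authors' firmware:

```python
import math

def sun_position(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation/azimuth (degrees) from latitude, date, time.

    Uses Cooper's declination formula and ignores the equation of time,
    refraction, and longitude offsets -- adequate for pointing a tracker.
    """
    phi = math.radians(latitude_deg)
    # Solar declination (Cooper's approximation).
    decl = math.radians(23.45) * math.sin(math.radians(360 / 365 * (284 + day_of_year)))
    # Hour angle: 15 degrees per hour from solar noon.
    h = math.radians(15 * (solar_hour - 12))
    # Elevation above the horizon.
    sin_alt = (math.sin(phi) * math.sin(decl)
               + math.cos(phi) * math.cos(decl) * math.cos(h))
    alt = math.asin(sin_alt)
    # Azimuth measured from north, increasing eastward.
    cos_az = (math.sin(decl) - sin_alt * math.sin(phi)) / (math.cos(alt) * math.cos(phi))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if solar_hour > 12:
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# Example: latitude 40 N, summer solstice (day 172), 10:00 solar time.
print(sun_position(40.0, 172, 10.0))
```

Because these angles depend only on latitude and time, a microcontroller can precompute or evaluate them on a schedule and drive the two motors open-loop, which is exactly what lets the design dispense with light sensors.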
Suture length to wound length ratio and healing of midline laparotomy incisions.
The effect of suture length to wound length ratio on the healing of midline laparotomy wounds closed with a continuous suture was evaluated in a prospective clinical trial. All patients undergoing abdominal procedures through a midline incision were included except those with an incisional hernia after previous midline operation. The total incidence of wound infection was 36 of 454 patients (7.9 per cent) and wound dehiscence requiring reoperation occurred in three patients (0.7 per cent). Incisional hernia was found in 18.7 per cent of 363 patients alive 12 months after surgery. Multivariate analysis identified the suture length to wound length ratio, age and major wound infection as independent risk factors for the development of hernia, which occurred in 9.0 per cent of patients when the suture length to wound length ratio was > or = 4 and in 23.7 per cent (P = 0.001) when it was < 4. The suture length to wound length ratio is an important parameter for healing of midline incisions closed with a continuous suture technique. The incidence of incisional hernia is lower when such wounds are sutured with a ratio > or = 4.
'We desperately need some help here'--The experience of legal experts with sexual assault and evidence collection in rural communities.
INTRODUCTION Approximately 30% of people in rural communities report a sexual assault within their lifetime. The medico-legal response to a report of sexual assault may leave a significant impact on the victim. The purpose of this article is to examine the experiences of legal providers from rural communities, who assist victims of sexual assault. METHODS A sample of expert participants were interviewed and included seven commonwealth attorneys (the state prosecuting attorneys in Virginia), six sheriffs or police investigators, and five victim-witness advocates, all from rural areas of Virginia. Qualitative data were collected by in-person interviews with a hermeneutic-phenomenological format. RESULTS The experts interviewed described prosecution difficulties related to evidence collection and unrealistic jury expectations. These legal experts also shared frustrations with limitations in local services and limitations in the experiences of local sexual assault nurse examiners. CONCLUSIONS This study provides a context for understanding the rural medico-legal response to sexual assault and for the importance of the role of the sexual assault nurse examiner to rural populations. Interdisciplinary collaboration is key to improving prosecution outcomes as well as victim support after reporting.
Reddit Temporal N-gram Corpus and its Applications on Paraphrase and Semantic Similarity in Social Media using a Topic-based Latent Semantic Analysis
This paper introduces a new large-scale n-gram corpus created specifically from social media text. Two distinguishing characteristics of this corpus are its monthly temporal attribute and its construction from 1.65 billion comments of user-generated text on Reddit. The usefulness of this corpus is exemplified and evaluated by a novel Topic-based Latent Semantic Analysis (TLSA) algorithm. The experimental results show that unsupervised TLSA outperforms all the state-of-the-art unsupervised and semi-supervised methods on the SemEval 2015 Paraphrase and Semantic Similarity in Twitter task.
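As background for the LSA component, here is a generic latent semantic analysis sketch via truncated SVD (hedged: this is the classical baseline, not the paper's topic-based TLSA variant, and the toy corpus is invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for social-media sentences.
sentences = [
    "the game last night was amazing",
    "amazing game last night",
    "my phone battery dies too fast",
    "phone battery drains very fast",
]

# Classic LSA: tf-idf matrix reduced by truncated SVD to a latent space.
tfidf = TfidfVectorizer().fit_transform(sentences)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Semantic similarity = cosine similarity in the latent space.
sims = cosine_similarity(lsa)
print("paraphrase pair score :", round(sims[0, 1], 3))
print("unrelated pair score  :", round(sims[0, 2], 3))
```

TLSA builds on this pipeline by incorporating topic information, and the temporal n-gram corpus supplies the statistics that plain tf-idf over short, noisy social-media text lacks.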
The Human Hippocampus and Spatial and Episodic Memory
Finding one's way around an environment and remembering the events that occur within it are crucial cognitive abilities that have been linked to the hippocampus and medial temporal lobes. Our review of neuropsychological, behavioral, and neuroimaging studies of human hippocampal involvement in spatial memory concentrates on three important concepts in this field: spatial frameworks, dimensionality, and orientation and self-motion. We also compare variation in hippocampal structure and function across and within species. We discuss how its spatial role relates to its accepted role in episodic memory. Five related studies use virtual reality to examine these two types of memory in ecologically valid situations. While processing of spatial scenes involves the parahippocampus, the right hippocampus appears particularly involved in memory for locations within an environment, with the left hippocampus more involved in context-dependent episodic or autobiographical memory.
Droplet-trace-based array partitioning and a pin assignment algorithm for the automated design of digital microfluidic biochips
Microfluidics-based biochips combine electronics with biology to open new application areas such as point-of-care medical diagnostics, on-chip DNA analysis, and automated drug discovery. Bioassays are mapped to microfluidic arrays using synthesis tools, and they are executed through the manipulation of sample and reagent droplets by electrical means. Most prior work on CAD for biochips has assumed independent control of electrodes using a large number of (electrical) input pins. Such solutions are not feasible for low-cost disposable biochips that are envisaged for many field applications. A more promising design strategy is to divide the microfluidic array into smaller partitions and use a small number of pins to control the electrodes in each partition. We propose a partitioning algorithm based on the concept of a "droplet trace", which is extracted from the scheduling and droplet routing results produced by a synthesis tool. An efficient pin assignment method, referred to as the "Connect-5 algorithm", is combined with the droplet-trace-based array partitioning technique. The array partitioning and pin assignment methods are evaluated using a set of multiplexed bioassays.
3D modelling and simulation of a crawler robot in ROS/Gazebo
Modelling and animation of a crawler UGV's tracks is a complicated task that has not been completely resolved in ROS/Gazebo simulators. In this paper, we propose an approximation of the track-terrain interaction of a crawler UGV, perform modelling and simulation of the Russian crawler robot "Engineer" within ROS/Gazebo, and visualize its motion in ROS/RViz. Finally, we test the proposed model in a heterogeneous robot group navigation scenario within an uncertain Gazebo environment.
Types of Hair Dye and Their Mechanisms of Action
Hair color change by dye application is a common procedure among women. Hair dyes are classified, according to color resistance, into temporary, semipermanent, demipermanent and permanent. The first two are based on molecules which are already colored. Temporary dyes act through dye deposition on the cuticle, but semipermanent dyes may penetrate slightly into the cortex, so the color withstands up to six washes. Demipermanent and permanent dyes are based on color precursors, called oxidation dyes, and the final shade is developed by their interaction with an oxidizing agent; the two classes differ in the alkalizing agent used. In oxidation systems, there is an intense diffusion of the molecules into the cortex, which promotes longer-lasting color. Dyes and color precursors present differences related to chromophore groups, hair fiber affinity, water solubility, and photostability. The aim of this review is to discuss the differences among hair dye products available on the market and their mechanisms of action, molecular structures, application methods, and some aspects of their formulations.
The evolution of institutions for collective action
In 1985, the National Academy of Sciences sponsored a conference in Annapolis, Maryland, to discuss common property resource management. This conference was a watershed in the development of the theoretical underpinning of institutional design for successful common pool resource (CPR) management. Since then, an international network of over 2,000 researchers has developed, and the International Association for the Study of Common Property (IASCP), formed in 1989, has held two successful international conferences. Dominating the intellectual evolution of the field has been the work of Elinor Ostrom, co-director of the Workshop in Political Theory and Policy Analysis at Indiana University. Her book, Governing the Commons, presents a lucid exposition of the current state of institutional analysis of common property problems. Part of the Cambridge series on Political Economy of Institutions and Decisions, the book addresses how common pool resources may be managed successfully without falling prey to the "tragedy of the commons." Common pool resources are characterized by subtractability (i.e., withdrawal by one user reduces the amount of the resource left for other users) and joint use by a group of appropriators. Thus, a common village grazing field has forage for a limited number of beasts, and all the villagers are entitled to pasture their animals on the field. Community rules of access and management are required to sustain the field from season to season. Problems in managing CPRs arise when the rational individual determines that he will still have access to the resource even if he does not fully contribute to its maintenance (the "free rider" problem). An extensive literature discusses the effect of free riders, concluding that common pool resources will inevitably fall into ruin. One of two solutions is usually offered to avoid this problem: centralized governmental regulation or privatization. Noting the numerous occasions in which common pool resources are managed successfully with neither centralized governmental control nor privatization, Ostrom argues for a third approach to resolving the problem of the commons: the design of durable cooperative institutions that are organized and governed by the resource users. In Governing the Commons she examines small-scale common-pool resources. Resource user groups examined range in size from 50-15,000 people who rely substantially on the common pool resource for their economic well-being. She has further
Church-based Work with the Homeless
This article explores the theological underpinnings of "hospitality" in one London Churches' Cold Weather Shelter [CCWS]. This ecumenical initiative consists of seven churches in a local area offering shelter to homeless people for one night a week during the winter months. Interview transcripts from seven team leaders are reflected upon using a "theology in four voices" approach. Despite the different theologies of mission in operation, volunteers and guests are unified in their identification of hospitality as triggering four types of human transformation, which can be rooted in a living Christian tradition: transformation through seeing, conversing, eating together, and discovering joys and abundance. This operant theology is brought into conversation with a significant formal voice: Karl Barth's surprisingly practical "theo-anthropology." This theological methodology, used with and for homeless people, can benefit other church shelters, church practitioners and the theological academy.
Probiotics in shrimp aquaculture: avenues and challenges.
As an alternative strategy to antibiotic use in aquatic disease management, probiotics have recently attracted extensive attention in aquaculture. However, the use of terrestrial bacterial species as probiotics for aquaculture has had limited success, as bacterial strain characteristics depend upon the environment in which they thrive. Therefore, isolating potential probiotic bacteria from the marine environment in which they grow optimally is a better approach. Bacteria that have been used successfully as probiotics belong to the genera Vibrio and Bacillus, and the species Thalassobacter utilis. Most researchers have isolated these probiotic strains from shrimp culture water or from the intestine of different penaeid species. The use of probiotic bacteria, based on the principle of competitive exclusion, and the use of immunostimulants are two of the most promising preventive methods developed in the fight against diseases in recent years. It has also been observed that probiotic bacteria can produce digestive enzymes, which may improve the digestion of shrimp, thus enhancing their stress resistance and health. However, probiotics in the aquatic environment remain a controversial concept, as there have been no authentic real-environment demonstrations of the successful use of probiotics and their mechanisms of action in vivo. The present review highlights the potential sources of probiotics, mechanisms of action, the diversity of probiotic microbes, and the challenges of probiotic usage in shrimp aquaculture.
A New Method for Optimization of Dynamic Ride Sharing System
Dynamic ridesharing is a profitable way to reduce traffic and carbon emissions by providing a flexible and affordable service that utilizes vehicle seating space. Matching ride-seeker requests with rides distributed over the road network is tedious work, and fulfilling the requests of all passengers may increase the total travel distance of a trip. Therefore, this article proposes an optimal dynamic ridesharing system that matches rides and requests in real time while satisfying multiple participant constraints (e.g., time bounds, availability of an empty seat, maximum allowed deviation distance and minimized route) so as to minimize the total travel distance. To efficiently match ride givers and riders, we propose a novel dynamic ride-matching algorithm, MRB (minimal route bisearching), which considers all of the above constraints. We demonstrate the working of our algorithm by developing a prototype and evaluate our system on GPS (Global Positioning System) trajectories from a Lahore city dataset. The results are compared with existing algorithms and show that our system significantly reduces travel distance and computation cost in comparison with other recent ride-searching methods.
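To illustrate the general shape of constraint-based ride matching (a toy sketch with invented data structures and a plain detour check, not the MRB algorithm itself, whose bisearching strategy is described in the article):

```python
from dataclasses import dataclass

@dataclass
class Ride:
    driver: str
    free_seats: int
    route_km: float          # current planned route length
    latest_arrival: float    # hours from now

@dataclass
class Request:
    rider: str
    needed_by: float         # rider's deadline, hours from now

MAX_DETOUR_KM = 3.0
AVG_SPEED_KMH = 30.0

def match(request, candidates):
    """candidates: list of (ride, detour_km) pairs, with detours precomputed
    by a routing engine (assumed). Pick the feasible minimum-detour ride."""
    feasible = []
    for ride, detour in candidates:
        detour_time = detour / AVG_SPEED_KMH
        if (ride.free_seats > 0                           # empty seat
                and detour <= MAX_DETOUR_KM               # deviation bound
                and ride.latest_arrival + detour_time <= request.needed_by):
            feasible.append((ride, detour))
    if not feasible:
        return None
    return min(feasible, key=lambda rd: rd[1])[0]         # least added distance

rides = [(Ride("d1", 1, 12.0, 0.5), 2.0),
         (Ride("d2", 0, 8.0, 0.3), 1.0),
         (Ride("d3", 2, 9.5, 0.4), 1.5)]
best = match(Request("alice", needed_by=1.0), rides)
print(best.driver if best else "no match")
```

The challenge the article targets is doing this filtering and minimization efficiently at scale and in real time, where recomputing detours for every ride-request pair is the dominant cost.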
Neurological manifestations of gastrointestinal disorders, with particular reference to the differential diagnosis of multiple sclerosis
Neurological manifestations of gastrointestinal disorders are described, with particular reference to those resembling multiple sclerosis (MS) on clinical or MRI grounds. Patients with celiac disease can present with cerebellar ataxia, progressive myoclonic ataxia, myelopathy, or cerebral, brainstem and peripheral nerve involvement. Antigliadin antibodies can be found in subjects with neurological dysfunction of unknown cause, particularly in sporadic cerebellar ataxia ("gluten ataxia"). Patients with Whipple's disease can develop mental and psychiatric changes, supranuclear gaze palsy, upper motoneuron signs, hypothalamic dysfunction, cranial nerve abnormalities, seizures, ataxia, myorhythmia and sensory deficits. Neurological manifestations can complicate inflammatory bowel disease (e.g. ulcerative colitis and Crohn's disease) through vascular or vasculitic mechanisms. Cases with both Crohn's disease and MS or cerebral vasculitis are described. Epilepsy, chronic inflammatory polyneuropathy, muscle involvement and myasthenia gravis are also reported. The central nervous system can be affected in patients with hepatitis C virus (HCV) infection because of vasculitis associated with HCV-related cryoglobulinemia. Mitochondrial neurogastrointestinal encephalopathy (MNGIE) is a disease caused by multiple deletions of mitochondrial DNA. It is characterized by peripheral neuropathy, ophthalmoplegia, deafness, leukoencephalopathy, and gastrointestinal symptoms due to visceral neuropathy. Neurological manifestations can also result from vitamin B1, nicotinamide, vitamin B12, vitamin D, or vitamin E deficiency, and from nutritional deficiency states following gastric surgery.
Robust control and H∞-optimization - Tutorial paper
The paper presents a tutorial exposition of H∞-optimal regulation theory, emphasizing the relevance of the mixed sensitivity problem for robust control system design.
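For orientation, the mixed sensitivity problem referred to here is conventionally posed as follows (a standard textbook formulation, stated for context rather than quoted from the paper):

```latex
% Mixed sensitivity H-infinity design: with plant G and controller K, define
% the sensitivity S = (I + GK)^{-1} and complementary sensitivity
% T = GK(I + GK)^{-1}, then choose K to minimize the weighted closed-loop norm
\min_{K \ \text{stabilizing}} \;
\left\lVert \begin{pmatrix} W_1 S \\ W_2 K S \\ W_3 T \end{pmatrix} \right\rVert_{\infty}
% where W_1, W_2, W_3 are frequency-dependent weights shaping performance,
% control effort, and robustness to model uncertainty, respectively.
```

Shaping S at low frequencies governs tracking and disturbance rejection, while shaping T at high frequencies governs robustness, which is why this single criterion captures the robust-design trade-off the tutorial emphasizes.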
Vulvar verruciform xanthoma: ten cases associated with lichen sclerosus, lichen planus, or other conditions.
BACKGROUND Verruciform xanthoma (VX) is a rare benign tumor that usually involves the oral cavity. Since the first report of this tumor in 1971, only 9 cases have been reported on the vulva, and 3 of these were associated with another vulvar condition. We describe the clinicopathologic features of 10 patients with vulvar VX and focus on their associated conditions. OBSERVATION The mean age of the patients was 68 years (range, 51-80 years). The VX lesions were asymptomatic, yellowish-orange verrucous plaques. The diagnosis was clinically suspected in 2 cases; other suggested diagnoses were condyloma or squamous cell carcinoma. All of the patients had an associated vulvar condition: lichen sclerosus (6 patients), lichen planus (2 patients), Paget disease, or radiodermatitis. Under microscopy, the VX lesions displayed parakeratosis, acanthosis without atypia, and elongated rete ridges. Xanthomatous cells were aggregated in the papillary dermis. CONCLUSIONS Vulvar VX is a benign tumor with misleading clinical features. All 10 cases were associated with a vulvar condition, mainly a lichen sclerosus. Therefore, VX might represent a reaction pattern induced by different conditions, mainly characterized by damage to the dermoepidermal junction. When confronted with the diagnosis of vulvar VX, clinicians may look for an associated vulvar condition.
A Semi-Physiological Population Model to Quantify the Effect of Hematocrit on Everolimus Pharmacokinetics and Pharmacodynamics in Cancer Patients
INTRODUCTION AND OBJECTIVE Everolimus (a drug from the class of mammalian target of rapamycin [mTOR] inhibitors) is associated with frequent toxicity-related dose reductions. Everolimus accumulates in erythrocytes, but the extent to which hematocrit affects everolimus plasma pharmacokinetics and pharmacodynamics is unknown. We aimed to investigate everolimus pharmacokinetics/pharmacodynamics and the influence of hematocrit in cancer patients. METHODS A semi-physiological pharmacokinetic model for everolimus was developed from pharmacokinetic data from 73 patients by non-linear mixed-effects modeling. Using a simulation study with a known pharmacodynamic model describing S6K1 (a downstream mTOR effector) inhibition, we investigated the impact of hematocrit. RESULTS The apparent volumes of distribution of the central and peripheral compartments were estimated to be 207 L, with a relative standard error (RSE) of 5.0%, and 485 L (RSE 4.2%), respectively, with an inter-compartmental clearance of 72.1 L/h (RSE 3.2%). The apparent intrinsic clearance was 198 L/h (RSE 4.3%). A decrease in hematocrit from 45% to 20% resulted in a predicted reduction in whole-blood exposure of ~50%, but everolimus plasma pharmacokinetics and pharmacodynamics were not affected. The predicted S6K1 inhibition was at a plateau level at the approved dose of 10 mg once daily. CONCLUSIONS A population pharmacokinetic model was developed for everolimus in cancer patients. Hematocrit influenced whole-blood pharmacokinetics, but not plasma pharmacokinetics or pharmacodynamics. Everolimus whole-blood concentrations should always be corrected for hematocrit. Since predicted mTOR inhibition was at a plateau level at the approved dose, dose reductions may have only a limited impact on mTOR inhibition.
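To make the compartmental structure concrete, below is an illustrative simulation of a standard two-compartment model using the point estimates quoted above (a hedged sketch: absorption, covariates, inter-individual variability, and the semi-physiological hematocrit binding that make up the actual model are all omitted):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Point estimates from the abstract (apparent, i.e., scaled by bioavailability).
V_C, V_P = 207.0, 485.0   # central / peripheral volumes, L
Q, CL = 72.1, 198.0       # inter-compartmental and intrinsic clearance, L/h
DOSE_MG = 10.0            # approved everolimus dose, once daily

def two_cmt(t, y):
    a_c, a_p = y  # drug amounts (mg) in central and peripheral compartments
    dc = -(CL / V_C) * a_c - (Q / V_C) * a_c + (Q / V_P) * a_p
    dp = (Q / V_C) * a_c - (Q / V_P) * a_p
    return [dc, dp]

# Single bolus-like dose into the central compartment (simplification).
sol = solve_ivp(two_cmt, [0, 24], [DOSE_MG, 0.0], dense_output=True)
t = np.linspace(0, 24, 97)
conc_ng_ml = sol.sol(t)[0] / V_C * 1000  # mg/L -> ng/mL
print(f"simulated plasma concentration at 24 h: {conc_ng_ml[-1]:.2f} ng/mL")
```

The point of the full semi-physiological model is that only the whole-blood concentration, not this plasma profile, shifts with hematocrit, which is why the authors recommend hematocrit correction of whole-blood measurements.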
SoK: Systematic Classification of Side-Channel Attacks on Mobile Devices
Side-channel attacks on mobile devices have gained increasing attention since their introduction in 2007. While traditional side-channel attacks, such as power analysis attacks and electromagnetic analysis attacks, required the physical presence of the attacker as well as expensive equipment, an (unprivileged) application is all it takes to exploit the leaking information on modern mobile devices. Given the vast amount of sensitive information that is stored on smartphones, the ramifications of side-channel attacks affect both the security and privacy of users and their devices. In this paper, we propose a new categorization system for side-channel attacks on mobile devices, which is necessary since side-channel attacks have evolved significantly since their introduction during the smartcard era. Our proposed classification system allows side-channel attacks to be analyzed systematically, and facilitates the development of novel countermeasures. Besides this new categorization system, the extensive overview of existing attacks and attack strategies provides valuable insights into the evolving field of side-channel attacks on mobile devices. We conclude by discussing open issues and challenges in this context and outline possible future research directions.
Bevacizumab and Combination Chemotherapy in rectal cancer Until Surgery (BACCHUS): a phase II, multicentre, open-label, randomised study of neoadjuvant chemotherapy alone in patients with high-risk cancer of the rectum
In locally advanced rectal cancer (LARC) preoperative chemoradiation (CRT) is the standard of care; the risk of local recurrence is low with good quality total mesorectal excision (TME), although many patients still develop metastatic disease. Current challenges in treating rectal cancer include the development of effective organ-preserving approaches and the prevention of subsequent metastatic disease. Neoadjuvant systemic chemotherapy (NACT) alone may reduce local and systemic recurrences, and may be more effective than postoperative treatments, which often have poor compliance. Investigation of intensified NACT is warranted to improve outcomes for patients with LARC. The objective is to evaluate the feasibility and efficacy of a four-drug regimen containing bevacizumab prior to surgical resection. This is a multi-centre, randomized phase II trial. Eligible patients must have histologically confirmed LARC with the distal part of the tumour 4–12 cm from the anal verge, no metastases, and poor prognostic features on pelvic MRI. Sixty patients will be randomly assigned in a 1:1 ratio to receive folinic acid + fluorouracil + oxaliplatin (FOLFOX) + bevacizumab (BVZ) or FOLFOX + irinotecan (FOLFOXIRI) + BVZ, given in 2-weekly cycles for up to 6 cycles prior to TME. Patients stop treatment if they fail to respond after 3 cycles (response defined as a ≥ 30 % decrease in Standardised Uptake Value (SUV) compared to baseline PET/CT). The primary endpoint is pathological complete response rate. Secondary endpoints include objective response rate, MRI tumour regression grade, involved circumferential resection margin rate, T and N stage downstaging, progression-free survival, disease-free survival, overall survival, local control, 1-year colostomy rate, acute toxicity, and compliance with chemotherapy. In LARC, a neoadjuvant chemotherapy regimen - if feasible, effective and tolerable - would be suitable for testing as the novel arm against the current standards of short course preoperative radiotherapy (SCPRT) and/or fluorouracil (5FU)-based CRT in a future randomised phase III trial. Clinical trial identifier BACCHUS: NCT01650428
Observable dynamics and coordinate systems for automotive target tracking
We investigate several coordinate systems and dynamical vector fields for target tracking to be used in driver assistance systems. We show how to express the discrete dynamics of maneuvering target vehicles in arbitrary coordinates, starting from the assumed dynamical models of the target and the ego vehicle in global coordinates. We clarify the notion of “ego compensation” and show how non-inertial effects are to be included when using a body-fixed coordinate system for target tracking. We finally compare the tracking error of different combinations of target tracking coordinates and dynamical vector fields for simulated data.
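To make the coordinate-transformation idea concrete, here is a small sketch, under assumed planar kinematics, of mapping a target's global position and velocity into the ego vehicle's body-fixed frame, including the rotational transport term that "ego compensation" must account for in a non-inertial frame. The pose and yaw-rate symbols are illustrative, not the paper's notation.

```python
# A minimal sketch: global target state -> ego body-fixed coordinates.
import numpy as np

def to_ego_frame(p_tgt, v_tgt, p_ego, v_ego, yaw_ego, yawrate_ego):
    """Map global target position/velocity into ego body-fixed coordinates."""
    c, s = np.cos(yaw_ego), np.sin(yaw_ego)
    R = np.array([[c, s], [-s, c]])          # global -> body rotation
    p_rel = R @ (p_tgt - p_ego)              # relative position in body frame
    # Velocity seen from a rotating frame: subtract ego velocity and the
    # rotational transport term omega x r (the non-inertial effect).
    omega_cross_r = yawrate_ego * np.array([-p_rel[1], p_rel[0]])
    v_rel = R @ (v_tgt - v_ego) - omega_cross_r
    return p_rel, v_rel

p, v = to_ego_frame(np.array([30.0, 4.0]), np.array([20.0, 0.0]),
                    np.array([0.0, 0.0]), np.array([25.0, 0.0]),
                    yaw_ego=0.05, yawrate_ego=0.1)
print(p, v)
```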
Prioritizing test cases for regression testing
Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: (1) can prioritization techniques be effective when aimed at specific modified versions; (2) what tradeoffs exist between fine granularity and coarse granularity prioritization techniques; (3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? This paper reports the results of new experiments addressing these questions.
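Rate of fault detection in this line of work is commonly quantified with the APFD metric (Average Percentage of Faults Detected); a higher APFD means faults are exposed earlier in the schedule. The sketch below computes APFD from a test ordering and a fault matrix; the fault matrix itself is made-up illustration data.

```python
# APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_i is the
# position of the first test that exposes fault i, n = #tests, m = #faults.
def apfd(ordering, faults_detected_by):
    """ordering: test ids in execution order.
    faults_detected_by: dict fault -> set of tests that detect it."""
    n, m = len(ordering), len(faults_detected_by)
    position = {test: i + 1 for i, test in enumerate(ordering)}
    tf = [min(position[t] for t in tests)
          for tests in faults_detected_by.values()]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)

faults = {"f1": {"t3"}, "f2": {"t1", "t4"}, "f3": {"t2"}}
print(apfd(["t1", "t2", "t3", "t4"], faults))  # earlier detection -> higher APFD
```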
Dynamic Security Design and Corporate Financing In preparation for the Handbook of Economics and Finance , Volume 2
This essay considers dynamic security design and corporate financing, with particular emphasis on informational micro-foundations. The central idea is that firm insiders must retain an appropriate share of firm risk, either to align their incentives with those of outside investors (moral hazard) or to signal favorable information about the quality of the firm's assets. Informational problems lead to inevitable inefficiencies: imperfect risk sharing, the possibility of bankruptcy, investment distortions, etc. The design of contracts that minimize these inefficiencies is a central question. This essay explores the implications of dynamic security design for firm operations and asset prices.
Data-to-Text Generation with Content Selection and Planning
Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order, and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines, improving the state of the art on the recently released ROTOWIRE dataset.
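As a toy illustration of the two-stage decomposition (and emphatically not the neural model itself), the sketch below separates a hand-written content-selection-and-ordering step from a template-based realization step over a few invented data records.

```python
# Toy stand-ins for the learned "plan" and "realize" stages; the scoring
# rule, templates, and records are all invented for illustration.
records = [
    {"entity": "Raptors", "type": "PTS", "value": 122},
    {"entity": "Raptors", "type": "REB", "value": 41},
    {"entity": "Hawks", "type": "PTS", "value": 118},
]

def content_plan(recs):
    # stand-in for learned content selection + ordering: keep salient
    # record types and sort by value, descending
    salient = [r for r in recs if r["type"] == "PTS"]
    return sorted(salient, key=lambda r: -r["value"])

def realize(plan):
    # stand-in for the decoder conditioned on the content plan
    return " ".join(f'The {r["entity"]} scored {r["value"]} points.' for r in plan)

print(realize(content_plan(records)))
```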
Combining Reward Shaping and Curriculum Learning for Training Agents with High Dimensional Continuous Action Spaces
The need to train agents with high-dimensional continuous action spaces will increase as robot hardware, such as robotic arms and humanoid robots, becomes more and more sophisticated. However, such training is a difficult and time-consuming task. To tackle the problem, we combine reward shaping and curriculum learning. More specifically, rewards are provided to the agent for every step it takes, and the difficulty of the problem gradually increases as the agent learns. Both the reward function and the curriculum are designed to help the agent achieve its objective. The simulation results demonstrate that the proposed scheme outperforms the baseline approaches.
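A minimal sketch of how the two ingredients can be combined is shown below: a dense, progress-based shaping reward is granted at every step, and a success-gated curriculum raises task difficulty once the agent masters the current level. The toy 1-D reaching task, the stand-in policy, and the 80% threshold are all illustrative assumptions, not the paper's setup.

```python
# Reward shaping (dense per-step progress reward) + curriculum (difficulty
# increases when the success rate passes a threshold), on a toy task.
import random

def episode(policy_noise, target_dist, shaped=True, max_steps=50):
    pos, total_r = 0.0, 0.0
    for _ in range(max_steps):
        step = 0.2 + random.gauss(0, policy_noise)   # crude stand-in policy
        new_pos = pos + step
        if shaped:  # dense reward: progress toward the target at every step
            total_r += abs(target_dist - pos) - abs(target_dist - new_pos)
        pos = new_pos
        if abs(pos - target_dist) < 0.1:
            return total_r + 10.0, True              # sparse terminal bonus
    return total_r, False

target = 1.0                                         # curriculum starts easy
for stage in range(5):
    successes = sum(episode(0.1, target)[1] for _ in range(100))
    if successes / 100 > 0.8:                        # mastered this level...
        target += 1.0                                # ...so raise difficulty
    print(f"stage {stage}: target={target:.1f}, successes={successes}/100")
```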
Children’s Eye Movements during Listening: Developmental Evidence for a Constraint-Based Theory of Sentence Processing
Many comprehension studies of grammatical development have focused on the ultimate interpretation that children assign to sentences and phrases, yielding somewhat static snapshots of children's emerging grammatical knowledge. Studies of the dynamic processes underlying children's language comprehension have to date been rare, owing in part to the lack of online sentence processing techniques suitable for use with children. In this chapter, we describe recent work from our research group, which examines the moment-by-moment interpretation decisions of children (age 4 to 6 years) while they listen to spoken sentences. These real-time measures were obtained by recording the children's eye movements as they visually interrogated and manipulated objects in response to spoken instructions. The first of these studies established some striking developmental differences in processing ability, with the youngest children showing an inability to use relevant properties of the referential scene to resolve temporary grammatical ambiguities (Trueswell, Sekerina, Hill, & Logrip, 1999). This finding could be interpreted as support for an early encapsulated syntactic processor that has difficulty using non-syntactic information to revise parsing commitments. However, we will review evidence from a series of follow-up experiments which suggest that this pattern arises from a developing interactive parsing system. Under this account, adult and child sentence comprehension is a "perceptual guessing game" in which multiple statistical cues are used to recover detailed linguistic structure. These cues, which include lexical-distribution evidence, verb semantic biases, and referential scene information, come "online" (become automated) at different points in the course of development. The developmental timing of these effects is related to their differential reliability and ease of detection in the input.
Hyperparameter estimation for satellite image restoration using a MCMC maximum-likelihood method
The satellite image deconvolution problem is ill-posed and must be regularized. Herein, we use an edge-preserving regularization model based on a ϕ-function, involving two hyperparameters. Our goal is to estimate the optimal parameters in order to automatically reconstruct images. We propose to use the Maximum Likelihood Estimator (MLE), applied to the observed image. This requires sampling from the prior and posterior distributions. Since the convolution prevents the use of standard samplers, we have developed a modified Geman-Yang algorithm, using an auxiliary variable and a cosine transform. We present a Markov Chain Monte Carlo Maximum Likelihood (MCMCML) technique which is able to simultaneously achieve the estimation and the reconstruction.
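The MCMC ingredient can be illustrated with a generic Metropolis sampler. Note this is a placeholder for intuition only, with a simple Gaussian target standing in for the paper's modified Geman-Yang sampler with auxiliary variable and cosine transform.

```python
# A generic Metropolis sampler sketch; the target density and proposal
# scale below are illustrative placeholders, not the paper's sampler.
import math, random

def log_density(x):               # placeholder target (standard normal)
    return -0.5 * x * x

def metropolis(n_samples, step=0.5):
    x, chain = 0.0, []
    for _ in range(n_samples):
        prop = x + random.gauss(0, step)
        # accept with probability min(1, pi(prop)/pi(x))
        if math.log(random.random()) < log_density(prop) - log_density(x):
            x = prop
        chain.append(x)
    return chain

chain = metropolis(5000)
print(sum(chain) / len(chain))    # sample mean should be near 0
```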
Sensorless Brushless DC Motor Drive Based on the Zero-Crossing Detection of Back Electromotive Force (EMF) From the Line Voltage Difference
This paper describes position sensorless operation of a permanent magnet brushless direct current (BLDC) motor. The position sensorless BLDC drive proposed in this paper is based on detection of the back electromotive force (back EMF) zero crossing from the terminal voltages. The proposed method relies on a difference of line voltages measured at the terminals of the motor. It is shown that this difference of line voltages provides an amplified version of an appropriate back EMF at its zero crossings. The commutation signals are obtained without the motor neutral voltage. The effectiveness of the proposed method is demonstrated through simulation and experimental results.
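The detection principle lends itself to a short numerical sketch: synthesize trapezoidal back-EMFs, form a line-voltage difference, and locate its zero crossings as candidate commutation instants. The waveform shape and amplitude below are illustrative, not measured motor data.

```python
# Zero-crossing detection on a line-voltage difference, on synthetic
# trapezoidal back-EMF waveforms (illustrative shapes only).
import numpy as np

theta = np.linspace(0, 4 * np.pi, 2000)            # electrical angle

def trapezoid(th):
    # crude trapezoidal back-EMF profile via a clipped sinusoid
    return np.clip(1.5 * np.sin(th), -1.0, 1.0)

e_a = trapezoid(theta)
e_b = trapezoid(theta - 2 * np.pi / 3)
e_c = trapezoid(theta + 2 * np.pi / 3)

v_ab, v_bc = e_a - e_b, e_b - e_c
diff = v_ab - v_bc                                  # line-voltage difference
crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
print("candidate commutation angles (rad):", theta[crossings])
```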
Bioconductor: open software development for computational biology and bioinformatics
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
Analysis and Management Measures of Water Pollution in Laboratory
Owing to weak environmental protection awareness and limited treatment technology in laboratories, laboratory water pollution has become a major environmental issue that needs to be resolved. If laboratory sewage is discharged directly into the sewer without treatment, it seriously endangers the environment and human health. This article explores laboratory water pollution problems and approaches to improving laboratory environmental awareness and skills.
The cone-forming eruptive magmatic evolutionary series of the Changbaishan Tianchi volcano and its stratigraphic division
On the basis of the lithological features of the Tianchi volcanic rocks, their chemical compositions and mode of occurrence, the cone-forming eruptions of the volcano can be divided into four stages, and the eruptive characteristics and evolutionary regularity of each stage are described. The magmatic evolution can be divided into two cycles, and the characteristics and lithological features of each cycle are described; finally, we made a stratigraphic division and a stratigraphic correlation with the adjacent strata.
Style in the Age of Instagram: Predicting Success within the Fashion Industry using Social Media
Fashion is a multi-billion dollar industry with social and economic implications worldwide. To gain popularity, brands want to be represented by the top popular models. As new faces are selected using stringent (and often criticized) aesthetic criteria, a priori predictions are made difficult by information cascades and other fundamental trend-setting mechanisms. However, the increasing usage of social media within and without the industry may be affecting this traditional system. We therefore seek to understand the ingredients of success of fashion models in the age of Instagram. Combining data from a comprehensive online fashion database and the popular mobile image-sharing platform, we apply a machine learning framework to predict the tenure of a cohort of new faces for the 2015 Spring/Summer season throughout the subsequent 2015-16 Fall/Winter season. Our framework successfully predicts most of the new popular models who appeared in 2015. In particular, we find that a strong social media presence may be more important than being under contract with a top agency, or than the aesthetic standards sought after by the industry.
Seq-NMS for Video Object Detection
Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation, (ii) object classification, and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques. Our method placed 3rd in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).
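A greatly simplified sketch of the rescoring idea appears below: each detection is linked to its best-overlapping detection in the previous frame, and weak scores are lifted toward the linked score. The actual Seq-NMS selects maximal-score box sequences across the whole clip; this greedy frame-to-frame variant only illustrates the score propagation.

```python
# Greedy frame-to-frame score boosting via IoU links (simplified variant,
# not the full sequence selection of Seq-NMS).
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def boost_scores(frames, iou_thresh=0.5):
    """frames: list of lists of dicts {'box': (x1,y1,x2,y2), 'score': float}."""
    for prev, cur in zip(frames, frames[1:]):
        for det in cur:
            linked = [p for p in prev if iou(p['box'], det['box']) >= iou_thresh]
            if linked:  # lift a weak score toward its best-linked neighbor
                best = max(p['score'] for p in linked)
                det['score'] = max(det['score'], (det['score'] + best) / 2)
    return frames
```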
Analysis of eye-tracking experiments performed on a Tobii T60
Commercial eye-gaze trackers have the potential to be an important tool for quantifying the benefits of new visualization techniques. The expense of such trackers has made their use relatively infrequent in visualization studies. As such, it is difficult for researchers to compare multiple devices – obtaining several demonstration models is impractical in cost and time, and quantitative measures from real-world use are not readily available. In this paper, we present a sample protocol to determine the accuracy of a gaze-tracking device.
Predicting Future Lane Changes of Other Highway Vehicles using RNN-based Deep Models
In the event of sensor failure, it is necessary for autonomous vehicles to safely execute emergency maneuvers while avoiding other vehicles on the road. In order to accomplish this, the sensor-failed vehicle must predict the future semantic behaviors of other drivers, such as lane changes, as well as their future trajectories, given a small window of past sensor observations. We address the first issue of semantic behavior prediction in this paper, by introducing a prediction framework that leverages the power of recurrent neural networks (RNNs) and graphical models. Our goal is to predict the future categorical driving intent, for lane changes, of neighboring vehicles up to three seconds into the future, given as little as a one-second window of past LIDAR, GPS, inertial, and map data. We collect real-world data containing over 500,000 samples of highway driving using an autonomous Toyota vehicle. We propose a pair of models that leverage RNNs: first, a monolithic RNN model that tries to directly map inputs to future behavior through a long short-term memory network. Second, we propose a composite RNN model by adopting the methodology of Structural Recurrent Neural Networks (S-RNNs) to learn factor functions and take advantage of both the high-level structure of graphical models and the sequence modeling power of RNNs, which we expect to afford more transparent modeling than the monolithic RNN. To demonstrate our approach, we validate our models using authentic interstate highway driving to predict the future lane change maneuvers of other vehicles neighboring our autonomous vehicle. We find that both RNN models outperform baselines, and they outperform each other in certain conditions.
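A minimal PyTorch sketch of the monolithic variant is shown below: an LSTM consumes a short window of fused sensor features and emits logits over three intent classes (left lane change, lane keep, right lane change). The feature dimension, hidden size, and random batch are illustrative choices, not the paper's configuration.

```python
# Monolithic LSTM intent classifier sketch: window of features -> logits.
import torch
import torch.nn as nn

class IntentLSTM(nn.Module):
    def __init__(self, n_features=32, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)      # final hidden state summarizes window
        return self.head(h_n[-1])       # logits over {left, keep, right}

model = IntentLSTM()
window = torch.randn(8, 10, 32)         # 8 tracks, 10 timesteps, 32 features
logits = model(window)
print(logits.shape)                     # torch.Size([8, 3])
```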
Warehousing and Protecting Big Data: State-Of-The-Art-Analysis, Methodologies, Future Challenges
This paper proposes a comprehensive critical survey on the issues of warehousing and protecting big data, which are recognized as critical challenges of emerging big data research. Indeed, both are critical aspects to be considered in order to build truly high-performance and highly flexible big data management systems. We report on state-of-the-art approaches, methodologies and trends, and finally conclude by providing open problems and challenging research directions to be considered by future efforts.
From Seed Discovery to Deep Reconstruction: Predicting Saliency in Crowd via Deep Networks
Although saliency prediction in crowd has been recently recognized as an essential task for video analysis, it is not comprehensively explored yet. The challenges lie in that eye fixations in crowded scenes are inherently "distinct" and "multi-modal", which differs from those in regular scenes. To this end, the existing saliency prediction schemes typically rely on hand-designed features with a shallow learning paradigm, which neglects the underlying characteristics of crowded scenes. In this paper, we propose a saliency prediction model dedicated to crowd videos with two novelties: 1) distinct units are discovered using a deep representation learned by a Stacked Denoising Auto-Encoder (SDAE), considering perceptual properties of crowd saliency; 2) contrast-based saliency is measured through deep reconstruction errors in a second SDAE trained on all units excluding distinct units. A unified model is integrated for online processing of crowd saliency. Extensive evaluations on two crowd video benchmark datasets demonstrate that our approach can effectively explore the crowd saliency mechanism in two-stage SDAEs and achieve significantly better results than state-of-the-art methods, with robustness to parameters.
A framework for HDR stereo matching using multi-exposed images
Real world scenes that contain high dynamic range illumination present a special challenge for stereo matching algorithms due to a lack of texture in over- or under-exposed image regions. In this paper, we discuss possibilities for combining state-of-the-art stereo matching algorithms with High Dynamic Range (HDR) imaging techniques, in order to exploit a set of multi-exposed input images of both the left and right stereo view for high-quality stereo reconstruction. We sketch the overall concept of our HDR stereo matching framework and demonstrate some first steps of its implementation, including the acquisition of HDR stereo test data, stereo matching experiments on tone-mapped images, and ideas for combining disparity maps derived from different exposures.
Evaluation of Local Spatio-temporal Features for Action Recognition
Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature, and promising recognition results were demonstrated for different action datasets. The comparison of those methods, however, is limited given the different experimental settings and various recognition methods used. The purpose of this paper is first to define a common evaluation setup to compare local space-time detectors and descriptors. All experiments are reported for the same bag-of-features SVM recognition framework. Second, we provide a systematic evaluation of different spatio-temporal features. We evaluate the performance of several space-time interest point detectors and descriptors along with their combinations on datasets with varying degrees of difficulty. We also include a comparison with dense features obtained by regular sampling of local space-time patches. Feature detectors. In our experimental evaluation, we consider the following feature detectors. (1) The Harris3D detector [3] extends the Harris detector for images to image sequences. At each video point, a spatio-temporal second-moment matrix μ is computed using a separable Gaussian smoothing function and space-time gradients. Interest points are located at local maxima of H = det(μ) − k·trace³(μ). (2) The Cuboid detector [1] is based on temporal Gabor filters. The response function has the form R = (I ∗ g ∗ hev)² + (I ∗ g ∗ hod)², where g(x,y;σ) is the 2D Gaussian smoothing kernel, and hev and hod are 1D Gabor filters. Interest points are detected at local maxima of R. (3) The Hessian detector [6] is a spatio-temporal extension of the Hessian saliency measure. The determinant of the 3D Hessian matrix is used to measure saliency. The determinant of the Hessian is computed over several spatial and temporal scales. A non-maximum suppression algorithm selects extrema as interest points. (4) Dense sampling extracts multi-scale video blocks at regular positions in space and time and for varying scales. In our experiments, we sample cuboids with 50% spatial and temporal overlap. Feature descriptors. The following feature descriptors are investigated. (1) For the Cuboid descriptor [1], gradients computed for each pixel in a cuboid region are concatenated into a single vector. PCA projects vectors to a lower dimensional space. (2) The HOG/HOF descriptors [4] divide a cuboid region into a grid of cells. For each cell, 4-bin histograms of gradient orientations (HOG) and 5-bin histograms of optic flow (HOF) are computed. Normalized histograms are concatenated into HOG, HOF as well as HOG/HOF descriptor vectors. (3) The HOG3D descriptor [2] is based on histograms of 3D gradient orientations. Gradients are computed via an integral video representation. Regular polyhedrons are used to uniformly quantize the orientation of spatio-temporal gradients. A given 3D volume is divided into a grid of cells. The corresponding descriptor concatenates gradient histograms of all cells. (4) The extended SURF (ESURF) descriptor [6] extends the image SURF descriptor to videos. Again, 3D cuboids are divided into a grid of cells. Each cell is represented by a weighted sum of uniformly sampled responses of Haar-wavelets aligned with the three axes. Experimental Setup. We represent video sequences as a bag of local spatio-temporal features [5].
Spatio-temporal features are first quantized into visual words, and each video is represented as a histogram of visual-word occurrences.

Detector  | HOG3D | HOG/HOF | HOG    | HOF   | Cuboids | ESURF
Harris3D  | 89.0% | 91.8%   | 80.9%  | 92.1% | –       | –
Cuboids   | 90.0% | 88.7%   | 82.3%  | 88.2% | 89.1%   | –
Hessian   | 84.6% | 88.7%   | 77.67% | 88.6% | –       | 81.4%
Dense     | 85.3% | 86.1%   | 79.0%  | 88.0% | –       | –

Table 1: Average accuracy on the KTH actions dataset.
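The shared bag-of-features pipeline behind these numbers can be sketched compactly: quantize local descriptors with k-means into visual words, build per-video histograms, and train an SVM. Descriptor dimensionality, vocabulary size, and the synthetic data below are illustrative stand-ins; the paper's setup uses real spatio-temporal descriptors and non-linear kernels.

```python
# Bag-of-features sketch: k-means vocabulary -> histograms -> SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
descs_per_video = [rng.normal(size=(200, 72)) + i % 2 for i in range(20)]
labels = np.array([i % 2 for i in range(20)])      # two toy action classes

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
kmeans.fit(np.vstack(descs_per_video))             # build the visual vocabulary

def bof_histogram(descs):
    words = kmeans.predict(descs)
    hist = np.bincount(words, minlength=50).astype(float)
    return hist / hist.sum()                       # L1-normalized histogram

X = np.array([bof_histogram(d) for d in descs_per_video])
clf = SVC(kernel="rbf").fit(X, labels)             # RBF here for brevity;
print(clf.score(X, labels))                        # chi-square kernels common
```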
Nietzsche, Virtue and the Horror of Existence
Robert Solomon argues that Nietzsche is committed to a virtue ethic like Aristotle's. Solomon's approach seems unaware of Nietzsche's belief in the horror of existence. A life that contains as much suffering as Nietzsche expects a life to contain could not be considered a good life by Aristotle. To go further, as Nietzsche does in his doctrines of eternal recurrence and amor fati, and to advocate loving such a fate, refusing to change the slightest detail, is something Aristotle would find debased. Nietzsche is committed to a virtue ethic, but not an Aristotelian one. I It has been argued that Nietzsche is committed to a virtue ethic. Solomon, for example, claims that Nietzsche is more like Aristotle than Kant. Aristotle's ethics, he holds, is not one of rules and principles—especially not universal ones. It is concerned with excellence and is still involved with the Homeric warrior tradition. The purpose of such an ethic is to maximize people's potential and that will always be unequal for
Project risk management : lessons learned from software development environment
The challenges in applying effective software risk management processes are considerable, in particular integrating risk management processes into software development organizations. However, the benefits of implementing effective risk management tools and techniques in software development projects are equally great. Current perceptions and emerging trends of various software risk management practices are reviewed, and risks specific to software development projects are identified. Implementing an effective risk management process will succeed by changing the organizational culture. This paper addresses lessons learned from implementing project risk management practices in a software development environment.
Impact Of Employee Participation On Job Satisfaction , Employee Commitment And Employee Productivity
It is widely believed that employee participation may affect employees' job satisfaction, productivity, and commitment, all of which can create a comparative advantage for the organization. The main intention of this study was to find out the relationship among employee participation, job satisfaction, employee productivity and employee commitment. For this purpose 34 organizations from the Oil & Gas, Banking and Telecommunication sectors were contacted, of which 15 responded. The findings of this study are that employee participation is an important determinant of job satisfaction components. Increasing employee participation will have a positive effect on employees' job satisfaction, employee commitment and employee productivity. Naturally, increasing employee participation is a long-term process, which demands both attention from the management side and initiative from the employee side.
Discursive Meaning Creation in Crowdfunding: A Socio-material Perspective
Crowdfunding is an exciting new phenomenon with the potential to disrupt early-stage capital markets. Enabled through specialized internet websites and social media, entrepreneurs now have a new source for start-up capital (estimated at $2.8 billion in 2012). Currently, entrepreneurs need to network through intermediaries to have access to wealthy investors. Crowdfunding bypasses these intermediaries and brings the ability to raise capital to the crowd. Consequently, decisions to fund an entrepreneurial endeavor are not made through ‘who you know’ and back-room deals, but through the discourse that occurs through the crowdfunding project page. The purpose of this research is to analyze and understand this discourse and the meaning it creates over the course of a crowdfunding campaign. The lens of sociomateriality in conjunction with discourse analysis is used to identify how meaning is created and its influence on the IS artifact.
Enhancing Video Summarization via Vision-Language Embedding
This paper addresses video summarization, or the problem of distilling a raw video into a shorter form while still capturing the original story. We show that visual representations supervised by freeform language make a good fit for this application by extending a recent submodular summarization approach [9] with representativeness and interestingness objectives computed on features from a joint vision-language embedding space. We perform an evaluation on two diverse datasets, UT Egocentric [18] and TV Episodes [45], and show that our new objectives give improved summarization ability compared to standard visual features alone. Our experiments also show that the vision-language embedding need not be trained on domainspecific data, but can be learned from standard still image vision-language datasets and transferred to video. A further benefit of our model is the ability to guide a summary using freeform text input at test time, allowing user customization.
Group morality and intergroup relations: cross-cultural and experimental evidence.
An observational, cross-cultural study and an experimental study assessed behaviors indicative of a moral code that condones, and even values, hostility toward outgroups. The cross-cultural study, which used data from the Standard Cross-Cultural Sample (Murdock & White, 1969), found that for preindustrial societies, as loyalty to the ingroup increased the tendency to value outgroup violence more than ingroup violence increased, as did the tendencies to engage in more external than internal warfare, and enjoy war. The experimental study found that relative to guilt-prone group members who were instructed to remain objective, guilt-prone group members who were instructed to be empathic with their ingroup were more competitive in an intergroup interaction. The findings from these studies suggest that group morality is associated with intergroup conflict.
Industrial Control System Network Intrusion Detection by Telemetry Analysis
Until recently, industrial control systems (ICSs) used "air-gap" security measures, where every node of the ICS network was isolated from other networks, including the Internet, by a physical disconnect. Attaching ICS networks to the Internet benefits companies and engineers who use them. However, as these systems were designed for use in the air-gapped security environment, protocols used by ICSs contain little to no security features and are vulnerable to various attacks. This paper proposes an approach to detect intrusions into network-attached ICSs by measuring and verifying data that is transmitted through the network but is not inherently the data used by the transmission protocol, i.e., network telemetry. Using simulated PLC units, the developed IDS was able to achieve 94.3 percent accuracy when differentiating between the machines of an attacker and an engineer on the same network, and 99.5 percent accuracy when differentiating between an attacker and an engineer on the Internet.
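The telemetry-based classification idea can be sketched as a standard supervised-learning problem: derive timing and size statistics per traffic source and train a classifier to separate engineer from attacker behavior. The features and synthetic data below are illustrative assumptions, not the paper's feature set.

```python
# Telemetry-feature classifier sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# columns: mean inter-packet gap (s), gap variance, mean packet size (bytes)
engineer = rng.normal([0.50, 0.01, 300], [0.05, 0.005, 20], size=(500, 3))
attacker = rng.normal([0.20, 0.05, 450], [0.05, 0.020, 60], size=(500, 3))
X = np.vstack([engineer, attacker])
y = np.array([0] * 500 + [1] * 500)          # 0 = engineer, 1 = attacker

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.3f}")
```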
Cyber-physical system design contracts
This paper introduces design contracts between control and embedded software engineers for building Cyber-Physical Systems (CPS). CPS design involves a variety of disciplines mastered by teams of engineers with diverse backgrounds. Many system properties influence the design in more than one discipline. The lack of clearly defined interfaces between disciplines burdens the interaction and collaboration. We show how design contracts can facilitate interaction between 2 groups: control and software engineers. A design contract is an agreement on certain properties of the system. Every party specifies requirements and assumptions on the system and the environment. This contract is the central point of interdomain communication and negotiation. Designs can evolve independently if all parties agree to a contract or designs can be modified iteratively in negotiation processes. The main challenge lies in the definition of a concise but sufficient contract. We discuss design contracts that specify timing and functionality, two important properties control and software engineers have to agree upon. Various design approaches have been established and implemented successfully to address timing and functionality. We formulate those approaches as design contracts and propose guidelines on how to choose, derive and employ them. Modeling and simulation support for the design contracts is discussed using an illustrative example.
Bidirectional LSTM-RNN with Bi-attention for reading comprehension
In this work, we implemented a bidirectional LSTM-RNN network to solve the reading comprehension problem. The problem is: given a question and a context (which contains the answer to the question), find the answer in the context. Following the method in paper [11], we use bi-attention to make the link from question to context and from context to question, to make good use of the information about the relationship between the two parts. Using an inner product, we find the probabilities of each context word being the first or last word of the answer. We also made some improvements over the paper, reducing the training time and improving the accuracy. After adjusting parameters, the best model achieves F1=48% and EM=33% on the leaderboard.
N=2 Gauged Supergravity with Stable dS Vacuum and Masses of Ultra-Light Scalars in Einstein Field Equations
In most of the models of dark energy it is assumed that the cosmological constant is equal to zero and that the potential energy V(φ) of the scalar field driving the present stage of acceleration slowly decreases and eventually vanishes as the field rolls to φ = ∞. In this case, after a transient dS-like stage, the speed of expansion of the Universe decreases, and the Universe reaches the Minkowski regime. Recently, it was found that one can describe dark energy in some d = 4 extended supergravities that have dS solutions. These dS solutions correspond to the extrema of the effective potential V(φ) for some scalar fields φ. In this paper, we introduce a non-minimal coupling between the scalar curvature and the density of a scalar field in the form L = −ξ√g R φ∗φ = −ξ R̃ φ∗φ, with ξ = 1/6, and we consider a complex potential of the form V(φφ∗) = pm(1 − ωφφ∗), where p is a constant of order unity, φ∗ is the complex conjugate field and ω is a parameter assumed to be ≪ 1. It was shown that the mass squared of the scalar field is quantized in units of the cosmological constant as m² = nΛ, where n are integers, in agreement with N = 2 gauged supergravity with a stable dS vacuum. We show that this result could have important consequences for the dark energy problem. We discuss some important consequences for standard cosmology without violating any of the well-known tests of General Relativity. I - Introduction. It is well believed today that the cosmological constant describes the energy density of the vacuum (empty space), and it is a potentially important contributor to the dynamical history of the Universe. Recent observations of Type Ia supernovae and the CMB indicate that the Universe is in an accelerated regime [1]. The total energy of the universe consists in fact of ordinary matter and dark matter. One can interpret the dark energy as the vacuum energy corresponding to the cosmological Einstein constant or as the slowly changing energy of a certain scalar field
A dual-feed circularly-polarized traveling-wave array antenna
A traveling-wave circularly-polarized microstrip array antenna is presented in this paper. It uses a circularly polarized dual-feed radiating element. The element is a rectangular patch with two chamfered corners. It is fed by microstrip lines, making it possible for the radiating element and feed lines to be realized and integrated in a single layer. A four-element array is designed, built and tested. Measured performance of the antenna is presented, showing good agreement between the simulated and measured results.
IL-Miner: Instance-Level Discovery of Complex Event Patterns
Complex event processing (CEP) matches patterns over a continuous stream of events to detect situations of interest. Yet, the definition of an event pattern that precisely characterises a particular situation is challenging: there are manifold dimensions to correlate events, including time windows and value predicates. In the presence of historic event data that is labelled with the situation to detect, event patterns can be learned automatically. To cope with the combinatorial explosion of pattern candidates, existing approaches work on a type-level and discover patterns based on predefined event abstractions, aka event types. Hence, discovery is limited to patterns of a fixed granularity and users face the burden to manually select appropriate event abstractions. We present IL-MINER, a system that discovers event patterns by genuinely working on the instance-level, not assuming a priori knowledge on event abstractions. In a multi-phase process, IL-MINER first identifies relevant abstractions for the construction of event patterns. The set of events explored for pattern discovery is thereby reduced, while still providing formal guarantees on correctness, minimality, and completeness of the discovery result. Experiments using real-world datasets from diverse domains show that IL-MINER discovers a much broader range of event patterns compared to the state-of-the-art in the field.
A Successive Approximation Recursive Digital Low-Dropout Voltage Regulator With PD Compensation and Sub-LSB Duty Control
This paper presents a recursive digital low-dropout (RLDO) regulator that improves response time, quiescent power, and load regulation dynamic range over prior digital LDO designs by 1-2 orders of magnitude. The proposed RLDO enables a practical digital replacement to analog LDOs by using an SAR-like binary search algorithm in a coarse loop and a sub-LSB pulse width modulation duty control scheme in a fine loop. A proportional-derivative compensation scheme is employed to ensure stable operation independent of load current, the size of the output decoupling capacitor, and clock frequency. Implemented in 0.0023 mm² in 65 nm CMOS, the 7-bit RLDO achieves, at a 0.5-V input, a response time of 15.1 ns with a figure of merit of 199.4 ps, along with stable operation across a 20,000× dynamic load range.
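The coarse loop's SAR-like binary search is easy to illustrate numerically: starting from the MSB, each trial bit is kept only if the resulting output does not overshoot the reference, converging in one comparison per bit. The linear code-to-voltage plant below is a toy stand-in for the real power stage.

```python
# SAR-style binary search over a 7-bit strength code (toy plant model).
def output_voltage(code, vin=0.5, load=0.5):
    # illustrative linear stand-in for the regulator's power stage
    return vin * (code / 127.0) * (1.0 - 0.1 * load)

def sar_search(vref, bits=7):
    code = 0
    for b in reversed(range(bits)):        # MSB-first successive approximation
        trial = code | (1 << b)
        if output_voltage(trial) <= vref:  # keep the bit if no overshoot
            code = trial
    return code

code = sar_search(vref=0.3)
print(code, output_voltage(code))          # converges in 7 comparisons
```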
Deep Learning in Drug Discovery.
Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks.
An efficient FPGA overlay for portable custom instruction set extensions
Custom instruction set extensions can substantially boost the performance of reconfigurable softcore CPUs. While this approach is commonly tailored to one specific FPGA system, we present a fine-grained FPGA-like overlay architecture which can be implemented in the user logic of various FPGA families from different vendors. This allows the execution of a portable application consisting of a program binary and an overlay configuration in a completely heterogeneous environment. Furthermore, we present different optimizations for dramatically reducing the implementation cost of the proposed overlay architecture. In particular, this includes mapping the overlay interconnection network directly into the switch fabric of the hosting FPGA. Our case study demonstrates an overhead reduction of an order of magnitude compared to related approaches.
German adaptation of the Resources for Enhancing Alzheimer’s Caregiver Health II: study protocol of a single-centred, randomised controlled trial
BACKGROUND Caring for a family member with dementia is extremely stressful, and contributes to psychiatric and physical illness among caregivers. Therefore, a comprehensive programme called Resources for Enhancing Alzheimer's Caregiver Health II (REACH II) was developed in the United States to enhance the health of Alzheimer's caregivers. REACH II causes a clear reduction of the stress and burdens faced by informal caregivers at home. The aim of this protocol is to adapt, apply, and evaluate this proven intervention programme in a German-speaking area for the first time. This newly adapted intervention is called Deutsche Adaption der Resources for Enhancing Alzheimer's Caregiver Health (DeREACH). METHODS A total of 138 informal caregivers at home are recruited in a single-centred, randomised controlled trial. The intervention (DeREACH) consists of nine home visits and three telephone contacts over six months, all of which focus on safety, psychological well-being and self-care, social support, problem behaviour and preventive health-related behaviours. The effectiveness of this complex intervention will be assessed by measuring the primary outcome - namely, the reduction of caregiver burden - and other secondary outcomes, including changes with regard to anxiety and depression, somatisation, health-related quality of life, and perceived social support, at baseline, as well as immediately and three months after the intervention. The change from baseline to post-intervention assessment with regard to the primary outcome will be compared between the treatment and control groups using t-tests for independent samples. DISCUSSION It is anticipated that this study will show that DeREACH effectively reduces caregiver burden and therefore works under the conditions of a local German health-care system. If successful, this programme will provide an effective intervention programme in the German-speaking area to identify and develop the personal capabilities of informal caregivers to cope with the burdens of caring for people with dementia.
Self-Sorting Map: An Efficient Algorithm for Presenting Multimedia Data in Structured Layouts
This paper presents the Self-Sorting Map (SSM), a novel algorithm for organizing and presenting multimedia data. Given a set of data items and a dissimilarity measure between each pair of them, the SSM places each item into a unique cell of a structured layout, where the most related items are placed together and the unrelated ones are spread apart. The algorithm integrates ideas from dimension reduction, sorting, and data clustering algorithms. Instead of solving the continuous optimization problem that other dimension reduction approaches do, the SSM transforms it into a discrete labeling problem. As a result, it can organize a set of data into a structured layout without overlap, providing a simple and intuitive presentation. The algorithm is designed for sorting all data items in parallel, making it possible to arrange millions of items in seconds. Experiments on different types of data demonstrate the SSM's versatility in a variety of applications, ranging from positioning city names by proximities to presenting images according to visual similarities, to visualizing semantic relatedness between Wikipedia articles.
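The discrete-labeling view can be conveyed with a toy, greedy swap procedure: items occupy unique grid cells, and a random swap is kept only when it reduces the mismatch between feature-space distance and grid adjacency. The real SSM achieves its speed with a hierarchical, parallel block-sorting scheme; this sequential sketch only captures the objective.

```python
# Greedy swap toy: keep swaps that make grid neighbors more similar.
import random
import numpy as np

def layout_cost(grid, feats):
    rows, cols = grid.shape
    cost = 0.0
    for r in range(rows):
        for c in range(cols - 1):          # compare horizontal neighbors only
            cost += np.linalg.norm(feats[grid[r, c]] - feats[grid[r, c + 1]])
    return cost

rng = np.random.default_rng(2)
feats = rng.normal(size=(16, 5))           # 16 items, 5-D features
grid = np.arange(16).reshape(4, 4)         # each item in a unique cell

for _ in range(2000):
    (r1, c1), (r2, c2) = [(random.randrange(4), random.randrange(4))
                          for _ in range(2)]
    before = layout_cost(grid, feats)
    grid[r1, c1], grid[r2, c2] = grid[r2, c2], grid[r1, c1]
    if layout_cost(grid, feats) > before:  # revert swaps that hurt
        grid[r1, c1], grid[r2, c2] = grid[r2, c2], grid[r1, c1]
print(grid)
```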
Epithalamus calcifications in schizophrenia
We evaluated the prevalence and size of epithalamus calcifications (EC) and choroid plexus calcifications (CPC) on computed tomography (CT) scans in a group of 64 schizophrenic patients and a group of 31 healthy controls. The associations between cerebral calcifications, demographic variables, and other brain morphological characteristics (particularly cerebral ventricular size and cortical atrophy) in both patients and controls were also considered. A significant increase in the size of epithalamic-region calcifications in schizophrenic patients was found, whereas there was no evidence of an increase in either the dimension or the prevalence of choroid plexus calcifications. This dimensional increase was unrelated to the duration of illness and therefore did not seem to be iatrogenic or secondary to the disease. A correlation was found between epithalamus calcifications and cortical atrophy and third-ventricle enlargement, suggesting that calcifications of this cerebral region may be associated with lesions of third-periventricular areas and of circuitries hypothesized to be involved in the pathophysiology of schizophrenia.
Analysis of complications and patient satisfaction in pedicled transverse rectus abdominis myocutaneous and deep inferior epigastric perforator flap breast reconstruction.
The purpose of this study was to evaluate complications and patient satisfaction after pedicled transverse rectus abdominis myocutaneous (TRAM) and deep inferior epigastric perforator (DIEP) flap reconstruction at a single institution. There were 346 patients identified from 1999 to 2006 who underwent 197 pedicled TRAM and 217 DIEP flap reconstructions. Flap complication rates were similar between groups, whereas pedicled TRAM reconstructions had higher rates of abdominal bulge (9.5% vs. 2.3%, P = 0.0071) and hernias (3.9% vs. 0%, P = 0.0052). DIEP flap patients had significantly higher general satisfaction (81.7% vs. 70.2%, P = 0.0395), whereas aesthetic satisfaction was similar between groups. Furthermore, DIEP flap patients, particularly those undergoing bilateral reconstructions, were more likely to choose the same type of reconstruction compared with pedicled TRAM patients (92.5% vs. 80.7%, P = 0.0113). Understanding the differences in complications and satisfaction will help physicians and patients make informed decisions about abdominal-based autologous breast reconstruction.
Translating a heart disease lifestyle intervention into the community: the South Asian Heart Lifestyle Intervention (SAHELI) study; a randomized control trial
BACKGROUND South Asians (Asian Indians and Pakistanis) are the second fastest growing ethnic group in the United States (U.S.) and have an increased risk of atherosclerotic cardiovascular disease (ASCVD). This pilot study evaluated a culturally-salient, community-based healthy lifestyle intervention to reduce ASCVD risk among South Asians. METHODS Through an academic-community partnership, medically underserved South Asian immigrants at risk for ASCVD were randomized into the South Asian Heart Lifestyle Intervention (SAHELI) study. The intervention group attended 6 interactive group classes focused on increasing physical activity, healthful diet, weight, and stress management. They also received follow-up telephone support calls. The control group received translated print education materials about ASCVD and healthy behaviors. Primary outcomes were feasibility and initial efficacy, measured as change in moderate/vigorous physical activity and dietary saturated fat intake at 3- and 6-months. Secondary clinical and psychosocial outcomes were also measured. RESULTS Participants' (n = 63) average age was 50 (SD = 8) years, 63 % were female, 27 % had less than or equal to a high school education, one-third were limited English proficient, and mean BMI was 30 kg/m2 (SD ± 5). There were no significant differences in change in physical activity or saturated fat intake between the intervention and control group. Compared to the control group, the intervention group showed significant weight loss (-1.5 kg, p-value = 0.04) and had a greater sex-adjusted decrease in hemoglobin A1C (-0.43 %, p-value <0.01) at 6 months. Study retention was 100 %. CONCLUSIONS This pilot study suggests that a culturally-salient, community-based lifestyle intervention was feasible for engaging medically underserved South Asian immigrants and more effective at addressing ASCVD risk factors than print health education materials. TRIAL REGISTRATION NCT01647438, Date of Trial Registration: July 19, 2012.
Prevalence of Stunting and Associated Factors among School Age Children in Primary Schools of Haik Town, South Wollo Zone, North- Eastern Ethiopia, 2017
Background: Under-nutrition is a major public health problem in developing countries including Ethiopia. This study aimed to investigate the magnitude of stunting and associated factors among school age children. Methods: A school based cross-sectional study was conducted on 414 school age children in Haik town primary schools, North-eastern Ethiopia, in May 2017. In this study, stunting was defined as a child whose height-for-age Z-score is below -2SD. Descriptive statistics, bivariate analysis to identify associated factors, and multivariable logistic regression analysis were employed to control for the effect of potential confounders. Variables with a p-value < 0.05 in the multivariable model were identified as predictors of stunting. Results: The prevalence of stunting among school age children was 44 (11.3%) with Z-scores below -2SD, and 83.7% of students were categorized under a 16.5-18.5 body mass index. Multivariable logistic regression analysis showed that higher child educational level (AOR 4.028; 95% CI 1.72, 9.42), not having additional food during study time (AOR 2.12; 95% CI 1.10, 4.12) and use of mixed food (AOR 0.20; 95% CI 0.06, 0.70) were significantly associated with stunting. Conclusion: The study revealed that the nutritional status of school age children was suboptimal, with an appreciable magnitude of stunting. Therefore, interventions could focus on educating parents on the importance of timely feeding and a balanced diet, and on economizing use of the available resources. Further analytic studies should be conducted to investigate the causes of stunting among school children in the study area.
Lumbar Total Disc Replacement for Discogenic Low Back Pain: Two-year Outcomes of the activL Multicenter Randomized Controlled IDE Clinical Trial.
STUDY DESIGN A prospective, multicenter, randomized, controlled, investigational device exemption (IDE) noninferiority trial. OBJECTIVE The aim of this study was to evaluate the comparative safety and effectiveness of lumbar total disc replacement (TDR) in the treatment of patients with symptomatic degenerative disc disease (DDD) who are unresponsive to nonsurgical therapy. SUMMARY OF BACKGROUND DATA Lumbar TDR has been used to alleviate discogenic pain and dysfunction while preserving segmental range of motion and restoring stability. There is a paucity of data available regarding the comparative performance of lumbar TDR. METHODS Patients presenting with symptomatic single-level lumbar DDD who failed at least 6 months of nonsurgical management were randomly allocated (2:1) to treatment with an investigational TDR device (activL, n = 218) or FDA-approved control TDR devices (ProDisc-L or Charité, n = 106). The hypothesis of this study was that a composite effectiveness outcome at 2 years in patients treated with activL would be noninferior (15% delta) to that in controls. RESULTS The primary composite endpoint of this study was met, which demonstrated that the activL TDR was noninferior to control TDR (P < 0.001). A protocol-defined analysis of the primary composite endpoint also confirmed that activL was superior to controls (P = 0.02). Radiographic success was higher with activL versus controls (59% vs. 43%; P < 0.01). Mean back pain severity improved by 74% with activL and 68% with controls. Oswestry Disability Index scores decreased by 67% and 61% with activL and controls, respectively. Patient satisfaction with treatment was over 90% in both groups at 2 years. Return to work was approximately 1 month shorter (P = 0.08) with activL versus controls. The rate of device-related serious adverse events was lower in patients treated with activL versus controls (12% vs. 19%; P = 0.13). Surgical reintervention rates at the index level were comparable (activL 2.3%, control 1.9%). CONCLUSION The single-level activL TDR is safe and effective for the treatment of symptomatic lumbar DDD through 2 years. LEVEL OF EVIDENCE 2.
Parallelization of Bresenham's line and circle algorithms
Parallel algorithms for line and circle drawing that are based on J.E. Bresenham's line and circle algorithms (see Commun. ACM, vol.20, no.2, p.100-6 (1977)) are presented. The new algorithms are applicable on raster scan CRTs, incremental pen plotters, and certain types of printers. The line algorithm approaches a perfect speedup of P as the line length approaches infinity, and the circle algorithm approaches a speedup greater than 0.9P as the circle radius approaches infinity. It is assumed that the algorithms are run in a multiple-instruction-multiple-data (MIMD) environment, that the raster memory is shared, and that the processors are dedicated and assigned to the task (of line or circle drawing).
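The partitioning idea for the line algorithm can be sketched as follows: split the x-range into P segments, let each segment compute its starting row and decision variable in closed form, then run standard first-octant Bresenham independently within the segment. The code runs the segments sequentially for clarity; because they are independent, they could execute on P processors.

```python
# Segmented Bresenham sketch (assumes first octant: 0 <= dy <= dx).
def bresenham_segment(x0, y0, x1, y1, xs, xe):
    dx, dy = x1 - x0, y1 - y0
    # closed-form starting row (nearest-integer rounding of the ideal line)
    ys = y0 + (2 * dy * (xs - x0) + dx) // (2 * dx)
    # decision variable at plot time of xs, matching the incremental loop
    d = 2 * dy * (xs - x0 + 1) - 2 * dx * (ys - y0) - dx
    pts, y = [], ys
    for x in range(xs, xe):
        pts.append((x, y))
        if d > 0:
            y += 1
            d -= 2 * dx
        d += 2 * dy
    return pts

def parallel_line(x0, y0, x1, y1, P=4):
    dx = x1 - x0
    bounds = [x0 + i * dx // P for i in range(P)] + [x1 + 1]
    segments = [bresenham_segment(x0, y0, x1, y1, bounds[i], bounds[i + 1])
                for i in range(P)]         # independent: one per processor
    return [p for seg in segments for p in seg]

print(parallel_line(0, 0, 15, 6))
```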
Signal processing techniques in network-aided positioning: a survey of state-of-the-art positioning designs
Wireless positioning has attracted much research attention and has become increasingly important in recent years. Wireless positioning has been found very useful for other applications besides E911 service, ranging from vehicle navigation and network optimization to resource management and automated billing. Although many positioning devices and services are currently available, it is necessary to develop an integrated and seamless positioning platform to provide a uniform solution for different network configurations. This article surveys the state-of-the-art positioning designs, focusing specifically on signal processing techniques in network-aided positioning. It serves as a tutorial for researchers and engineers interested in this rapidly growing field. It also provides new directions for future research for those who have been working in this field for many years.
Exploring Neural Methods for Parsing Discourse Representation Structures
Neural methods have had several recent successes in semantic parsing, though they have yet to face the challenge of producing meaning representations based on formal semantics. We present a sequence-to-sequence neural semantic parser that is able to produce Discourse Representation Structures (DRSs) for English sentences with high accuracy, outperforming traditional DRS parsers. To facilitate the learning of the output, we represent DRSs as a sequence of flat clauses and introduce a method to verify that produced DRSs are well-formed and interpretable. We compare models using characters and words as input and see (somewhat surprisingly) that the former performs better than the latter. We show that eliminating variable names from the output using De Bruijn indices increases parser performance. Adding silver training data boosts performance even further.
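The variable-rewriting step can be illustrated with a small function that replaces arbitrary DRS variable names by indices in order of first introduction, so the parser's output vocabulary no longer contains free-form names. The clause format below is a simplified stand-in for the paper's flat clauses, and the index scheme is an illustrative variant rather than the exact De Bruijn encoding.

```python
# Rewrite DRS variable names as introduction-order indices (toy format).
def rename_variables(clauses):
    index = {}
    out = []
    for clause in clauses:
        rewritten = []
        for tok in clause.split():
            # treat b1/x1/e1-style tokens as variables (assumed convention)
            if tok.startswith(("x", "b", "e")) and tok[1:].isdigit():
                index.setdefault(tok, f"@{len(index) + 1}")
                rewritten.append(index[tok])
            else:
                rewritten.append(tok)
        out.append(" ".join(rewritten))
    return out

drs = ["b1 REF x1", "b1 male x1", "b1 REF e1", "b1 work e1", "b1 Agent e1 x1"]
print(rename_variables(drs))
# ['@1 REF @2', '@1 male @2', '@1 REF @3', '@1 work @3', '@1 Agent @3 @2']
```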
Mucosal Inflammatory Response to Salmonella typhimurium Infection
The human intestinal epithelium consists of a single layer of epithelial cells that forms a barrier against food antigens and the resident microbiota within the lumen. This delicately balanced organ functions in a highly sophisticated manner to uphold the fidelity of the intestinal epithelium and to eliminate pathogenic microorganisms. On the luminal side, this barrier is fortified by a thick mucus layer, and on the serosal side exists the lamina propria containing a resident population of immune cells. Pathogens that are able to breach this barrier disrupt the healthy epithelial lining by interfering with the regulatory mechanisms that govern the normal balance of intestinal architecture and function. This disruption results in a coordinated innate immune response deployed to eliminate the intruder that includes the release of antimicrobial peptides, activation of pattern-recognition receptors, and recruitment of a variety of immune cells. In the case of Salmonella enterica serovar typhimurium (S. typhimurium) infection, induction of an inflammatory response has been linked to its virulence mechanism, the type III secretion system (T3SS). The T3SS secretes protein effectors that exploit the host's cell biology to facilitate bacterial entry and intracellular survival, and to modulate the host immune response. As the role of the intestinal epithelium in initiating an immune response has been increasingly realized, this review will highlight recent research that details progress made in understanding mechanisms underlying the mucosal inflammatory response to Salmonella infection, and how such inflammatory responses impact pathogenic fitness of this organism.
A Survey of Blockchain Security Issues and Challenges
Blockchain technology has been one of the most popular topics of recent years. It has already changed aspects of everyday life through its influence on many businesses and industries, and its impact will continue to be felt in many areas. Although the features of blockchain technology may bring us more reliable and convenient services, the security issues and challenges behind this innovative technique are also an important topic that we need to address.
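For readers unfamiliar with the underlying data structure, the sketch below shows a minimal hash-linked chain and the tamper-evidence property around which much blockchain security analysis revolves; it is an illustrative toy, not any production blockchain.

```python
import hashlib, json, time

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "time": time.time(),
                  "data": data, "prev_hash": prev})

def verify(chain):
    """A chain is valid iff every block commits to the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "genesis")
append_block(chain, {"from": "alice", "to": "bob", "amount": 5})
append_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))            # True
chain[1]["data"] = "tampered"   # any retroactive edit breaks the next link
print(verify(chain))            # False
```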
Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition
This paper introduces a novel Gabor-Fisher (1936) classifier (GFC) for face recognition. The GFC method, which is robust to changes in illumination and facial expression, applies the enhanced Fisher linear discriminant model (EFM) to an augmented Gabor feature vector derived from the Gabor wavelet representation of face images. The novelty of this paper comes from 1) the derivation of an augmented Gabor feature vector, whose dimensionality is further reduced using the EFM by considering both data compression and recognition (generalization) performance; 2) the development of a Gabor-Fisher classifier for multi-class problems; and 3) extensive performance evaluation studies. In particular, we performed comparative studies of different similarity measures applied to various classifiers. We also performed comparative experimental studies of various face recognition schemes, including our novel GFC method, the Gabor wavelet method, the eigenfaces method, the Fisherfaces method, the EFM method, the combination of Gabor and the eigenfaces method, and the combination of Gabor and the Fisherfaces method. The feasibility of the new GFC method has been successfully tested on face recognition using 600 FERET frontal face images corresponding to 200 subjects, which were acquired under variable illumination and facial expressions. The novel GFC method achieves 100% accuracy on face recognition using only 62 features.
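A minimal sketch of the overall pipeline is given below: an augmented Gabor feature vector is built from a filter bank and classified with a Fisher discriminant. The filter-bank parameters, downsampling factor, and toy data are assumptions, and plain LDA stands in for the paper's EFM stage.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orient=4, step=4):
    """Augmented Gabor feature vector: downsampled filter-response magnitudes
    over all (frequency, orientation) pairs, concatenated."""
    parts = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)
            parts.append(mag[::step, ::step].ravel())
    return np.concatenate(parts)

# Toy data: random "face" images for two hypothetical subjects
rng = np.random.default_rng(0)
X = np.stack([gabor_features(rng.random((32, 32))) for _ in range(20)])
y = np.repeat([0, 1], 10)
clf = LinearDiscriminantAnalysis().fit(X, y)  # Fisher discriminant on Gabor features
print(clf.score(X, y))
```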
Soil Classification From Large Imagery Databases Using a Neuro-Fuzzy Classifier
In this paper, we propose a neuro-fuzzy (NF) classification technique to determine various soil classes from large imagery soil databases. The technique evaluates the feature-wise degree of belonging of the imagery data to the available soil classes using a fuzzification method. The fuzzification method builds a membership matrix whose element count equals the product of the number of data records and the number of soil classes. The elements of this matrix are the input to a neural network model. We apply our technique to three UCI databases, namely, Statlog Landsat Satellite, Forest Covertype, and Wilt, for soil classification. The paper aims to determine soil classes using the proposed technique and then compare its performance with four well-known classification algorithms, namely, radial basis function network, k-nearest neighbor, support vector machine, and adaptive NF inference system. Several measures, including root-mean-square error, kappa statistic, accuracy, false positive rate, true positive rate, precision, recall, F-measure, and area under the curve, are used for the quantitative analysis of the simulated results. All of these evaluation measures confirm the superiority of the proposed NF method.
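The sketch below illustrates one plausible reading of this pipeline: feature-wise Gaussian memberships are aggregated into a records-by-classes membership matrix that is then fed to a neural network. The membership function and toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def membership_matrix(X, y, X_query):
    """Degree of belonging of each record to each class (records x classes):
    feature-wise Gaussian memberships, averaged over features per class."""
    cols = []
    for c in np.unique(y):
        mu = X[y == c].mean(axis=0)
        sigma = X[y == c].std(axis=0) + 1e-8
        fuzzy = np.exp(-0.5 * ((X_query - mu) / sigma) ** 2)  # per-feature membership
        cols.append(fuzzy.mean(axis=1))
    return np.column_stack(cols)

# Toy data standing in for imagery-derived soil features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.repeat([0, 1], 50)
M = membership_matrix(X, y, X)  # fuzzification output
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(M, y)
print(clf.score(M, y))
```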
Conceptualization and Measurement of Adaptive and Maladaptive Aspects of Performance Perfectionism: Relations to Personality, Psychological Functioning, and Academic Achievement
A program of research was initiated to evaluate the construct of performance perfectionism in adults. Findings from these studies indicated that different adaptive and maladaptive aspects of performance perfectionism could be distinguished (Studies 1a & 1b), and that performance perfectionism was distinguishable from alternative personality variables, including the five-factor model (Study 2). Furthermore, additional findings indicated that performance perfectionism was associated with positive and negative psychological functioning (Studies 3a & 3b), and with prospective academic achievement in the classroom (Study 4). In addition, regression results indicated that performance perfectionism accounted for additional variance in positive psychological functioning beyond a popular measure of perfectionism (Study 3b). Some implications of the present findings and future directions are discussed.
A Slim RFID Tag Antenna Design for Metallic Object Applications
A slim radio frequency identification (RFID) tag antenna design for metallic object applications is proposed in this letter. It is designed directly from a high-impedance surface (HIS) unit cell structure rather than adopting a large HIS ground plane. The antenna structure consists of metallic rectangular patches electrically connected through vias to the ground plane, forming an RFID tag antenna suitable for mounting on metallic objects. Experimental tests show that the maximum read range of the RFID tag placed on a metallic object is about 3.1 m, with an overall size of 65 × 20 × 1.5 mm³. It is thinner than inverted-F antenna (IFA), planar inverted-F antenna (PIFA), or patch-type antennas for metallic objects. Simulation and measurement results of the proposed RFID tag antenna are also presented in this letter.
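Read-range figures such as the reported 3.1 m are conventionally estimated and cross-checked with the Friis equation; the sketch below computes a theoretical read range under entirely hypothetical UHF parameters (the letter does not supply these values).

```python
import math

# Friis-based theoretical RFID read range:
#   r = (lambda / 4*pi) * sqrt(EIRP * G_tag * tau / P_chip)
# All parameter values below are hypothetical, not taken from the letter.
def read_range_m(freq_hz, eirp_w, tag_gain_dbi, chip_sens_dbm, tau=0.5):
    wavelength = 3e8 / freq_hz
    g_tag = 10 ** (tag_gain_dbi / 10)            # dBi -> linear gain
    p_chip = 10 ** (chip_sens_dbm / 10) / 1000   # dBm -> W
    return (wavelength / (4 * math.pi)) * math.sqrt(eirp_w * g_tag * tau / p_chip)

# Hypothetical UHF setup: 915 MHz, 4 W EIRP, -2 dBi tag gain, -15 dBm chip sensitivity
print(f"{read_range_m(915e6, 4.0, -2.0, -15.0):.1f} m")
```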
Reliability and sensitivity to change of the Bristol Rheumatoid Arthritis Fatigue scales.
OBJECTIVE To examine the reliability (stability) and sensitivity to change of the Bristol Rheumatoid Arthritis Fatigue scales (BRAFs), patient-reported outcome measures (PROMs) developed to capture the fatigue experience. The Multi-Dimensional Questionnaire (BRAF-MDQ) has a global score and four subscales (Physical Fatigue, Living with Fatigue, Cognitive Fatigue and Emotional Fatigue), while three numerical rating scales (BRAF-NRS) measure fatigue Severity, Effect and Coping. METHODS RA patients completed the BRAFs plus comparator PROMs. Reliability (study 1): 50 patients completed questionnaires twice. A same-day test-retest interval (minimum 60 min) ensured both time points related to the same 7 days, minimizing the capture of fatigue fluctuations. Reliability (study 2): 50 patients completed the same procedure with a re-worded BRAF-NRS Coping. Sensitivity to change (study 3): 42 patients clinically given a single high dose of i.m. glucocorticoids completed questionnaires at weeks 0 and 2. RESULTS The BRAF-MDQ, its subscales and the BRAF-NRS showed very strong reliability (r = 0.82-0.95). BRAF-NRS Coping had lower, moderate reliability in both wording formats (r = 0.62, 0.60). The BRAF-MDQ, its subscales and the BRAF-NRS Severity and Effect were sensitive to change, with effect sizes (ESs) of 0.33-0.56. As hypothesized, the BRAF-NRS Coping was not responsive to the pharmaceutical intervention (ES 0.05). Preliminary exploration suggests a minimum clinically important difference of 17.5% for improvement and 6.1% for fatigue worsening. CONCLUSION The BRAF scales show good reliability and sensitivity to change. The lack of BRAF-NRS Coping responsiveness to medication supports the theory that coping with fatigue is a concept distinct from severity and effect, and is worth measuring separately.
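For illustration, the sketch below computes the two statistics the study reports, test-retest reliability as Pearson's r and sensitivity to change as a standardized effect size, on synthetic scores; none of the numbers are study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic fatigue scores (hypothetical, not study data)
rng = np.random.default_rng(0)
baseline = rng.normal(50, 12, 50)
retest = baseline + rng.normal(0, 4, 50)     # same-day repeat administration
followup = baseline - rng.normal(5, 10, 50)  # scores after intervention

# Test-retest reliability: Pearson correlation between the two administrations
r, _ = pearsonr(baseline, retest)
# Sensitivity to change: mean change standardized by the baseline SD
effect_size = (baseline - followup).mean() / baseline.std(ddof=1)
print(f"test-retest r = {r:.2f}, effect size = {effect_size:.2f}")
```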
Fully adiabatic 31P 2D-CSI with reduced chemical shift displacement error at 7 T--GOIA-1D-ISIS/2D-CSI.
A fully adiabatic phosphorus (31P) two-dimensional (2D) chemical shift spectroscopic imaging sequence with reduced chemical shift displacement error for 7 T, based on 1D-image-selected in vivo spectroscopy, combined with 2D-chemical shift spectroscopic imaging selection, was developed. Slice-selective excitation was achieved by a spatially selective broadband GOIA-W(16,4) inversion pulse with an interleaved subtraction scheme before nonselective adiabatic excitation, and followed by 2D phase encoding. The use of GOIA-W(16,4) pulses (bandwidth 4.3-21.6 kHz for 10-50 mm slices) reduced the chemical shift displacement error in the slice direction ∼1.5-7.7 fold, compared to conventional 2D-chemical shift spectroscopic imaging with Sinc3 selective pulses (2.8 kHz). This reduction was experimentally demonstrated with measurements of an MR spectroscopy localization phantom and with experimental evaluation of pulse profiles. In vivo experiments in clinically acceptable measurement times were demonstrated in the calf muscle (nominal voxel volume, 5.65 ml in 6 min 53 s), brain (10 ml, 6 min 32 s), and liver (8.33 ml, 8 min 14 s) of healthy volunteers at 7 T. High reproducibility was found in the calf muscle at 7 T. In combination with adiabatic excitation, this sequence is insensitive to the B1 inhomogeneities associated with surface coils. This sequence, which is termed GOIA-1D-ISIS/2D-CSI (goISICS), has the potential to be applied in both clinical research and in the clinical routine.
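The quoted fold-reductions follow directly from the fact that the chemical shift displacement error scales inversely with pulse bandwidth, d = (Δf/BW)·Δz; the short sketch below reproduces the 1.5-7.7 fold range from the stated bandwidths.

```python
# CSDE scales as 1/BW, so the fold-reduction is simply the bandwidth ratio
# of the GOIA-W(16,4) pulse to the 2.8 kHz Sinc3 pulse (values from the abstract).
sinc3_bw_khz = 2.8
for goia_bw_khz in (4.3, 21.6):
    print(f"BW {goia_bw_khz:>4} kHz -> CSDE reduced "
          f"{goia_bw_khz / sinc3_bw_khz:.1f}-fold")
# prints ~1.5-fold and ~7.7-fold, matching the reported range
```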
Deception used for cyber defense of control systems
Control system cyber security defense mechanisms may employ deception in human system interactions to make it more difficult for attackers to plan and execute successful attacks. These deceptive defense mechanisms are organized and initially explored according to a specific deception taxonomy and the seven abstract dimensions of security previously proposed as a framework for the cyber security of control systems.
Revisiting the JDL Data Fusion Model II
This paper suggests refinements and extensions of the JDL Data Fusion Model, the standard process model used for a multiplicity of community purposes. This Model has not, however, been revised to reflect (a) the dynamics of world events and (b) the changes, discoveries, and new methods in both the data fusion research and development community and related IT technologies. This paper suggests ways to revise and extend this important model. Proposals are made regarding (a) improvements in the understanding of internal processing within a fusion node and (b) extending the model to include (1) remarks on issues related to quality control, reliability, and consistency in DF processing, (2) assertions about the need for co-processing of abductive/inductive and deductive inferencing processes, (3) remarks about the need for and exploitation of an ontologically-based approach to DF process design, and (4) extensions to account for the case of Distributed Data Fusion (DDF).
360-degree fog projection interactive display
The aim of this research is to develop a fog display that enables observers to recognize the 3D shape of virtual objects. A fog display is a type of immaterial display system. Previous systems (e.g., [Rakkolainen et al. 2005]) were able to project objects and images floating in mid-air. However, these systems provided only 2D images on a flat screen. We propose a novel fog display system consisting of one cylindrical fog screen and multiple projectors, which provides observers with motion parallax of the virtual object. The concept of the proposed display is similar to 360-degree viewable 3D displays that project multiple images, such as Hitachi's Transpost [Otsuka et al. 2006] and Sony's RayModeler [Ito et al. 2010]. The advantage of the proposed fog display is that it allows observers to directly touch the virtual objects with their hands.