A Critical Discourse Analysis of how powerful businesswomen are portrayed in The Economist online
Writer: Julie Risbourg. Title: Breaking the ‘glass ceiling’. Subtitle: A Critical Discourse Analysis of how powerful businesswomen are portrayed in The Economist online. Language: English. Pages: 52. Women still represent a minority in the executive world. Much research has been aimed at finding possible explanations for the underrepresentation of women in the male-dominated executive sphere. The findings commonly suggest that a patriarchal society and the maintenance of gender stereotypes lead to inequalities and become obstacles for women trying to break the so-called ‘glass ceiling’. This thesis, however, aims to explore how businesswomen are represented once they have broken the glass ceiling and entered the executive world. From Forbes’ list of the 100 most powerful women of 2017, the first two businesswomen on the list were chosen, and their portrayals were analysed through articles published by The Economist online. The theoretical framework of this thesis includes Goffman’s framing theory and takes a cultural feminist perspective in exploring how the media outlet frames businesswomen Sheryl Sandberg and Mary Barra. The thesis also examines how these frames relate to the stereotypes commonly used in media coverage of women. More specifically, the study investigates whether negative stereotypes concerning their gender are present in the texts, or whether positive stereotypes, such as idealisation, are used to portray them. These concepts are coupled with the theoretical aspect of the method, Critical Discourse Analysis, chosen in order to uncover the underlying meanings in the language The Economist uses to refer to these two businesswomen. This is done through linguistic and visual tools such as lexical choices, word connotations, nomination/functionalisation, and gaze. The findings show that both women were portrayed positively within a professional environment, with the publication celebrating their success and hard work. The results also show that gender-related traits were mentioned, indicating a degree of subjective representation, which is countered by their idealisation: they are present not only in the executive world but also hold high-ranking titles in male-dominated industries.
An Exploratory Study on Issues and Challenges of Agile Software Development with Scrum
An Exploratory Study on Issues and Challenges of Agile Software Development with Scrum, by Juyun Joey Cho, Doctor of Philosophy, Utah State University, 2010. Major Professor: Dr. David H. Olsen. Department: Management Information Systems. The purpose of this dissertation was to explore critical issues and challenges that might arise in agile software development processes with Scrum. It also sought to provide management guidelines to help organizations avoid and overcome barriers in adopting the Scrum method as a future software development method. A qualitative research method design was used to capture the knowledge of practitioners and scrutinize the Scrum software development process in its natural settings. An in-depth case study was conducted in two organizations where the Scrum method was fully integrated in every aspect of their software development processes. One organization provides large-scale and mission-critical applications and the other provides small- and medium-scale applications. Differences between the two organizations provided useful contrasts for the data analysis. Data were collected through an email survey, observations, documents, and semi-structured face-to-face interviews. The email survey was used to refine interview
Introduction to Time-Sensitive Networking
A number of companies and standards development organizations have, since 2000, been producing products and standards for "time-sensitive networks" to support real-time applications that require a) zero packet loss due to buffer congestion, b) extremely low packet loss due to equipment failure, and c) guaranteed upper bounds on end-to-end latency. Often, a robust capability for time synchronization to less than 1 μs is also required. These networks consist of specially-featured bridges that are interconnected using standard Ethernet links with standard MAC/PHY layers. Since 2012, this technology has advanced to the use of routers, as well as bridges, and features of interest to time-sensitive networks have been added to both Ethernet and wireless standards.
Model-driven diabetes care: study protocol for a randomized controlled trial
BACKGROUND People with type 1 diabetes who use electronic self-help tools register a large amount of information about their disease on their devices; however, this information is rarely utilized beyond the immediate investigation. We have developed a diabetes diary for mobile phones and a statistics-based feedback module, named Diastat, to give data-driven feedback to patients based on their own data. METHOD In this study, up to 40 participants will be given a smartphone loaded with a diabetes self-help application (app), the Few Touch Application (FTA). Participants will be randomized into two groups that will be given access to Diastat 4 or 12 weeks, respectively, after receiving the smartphone, and will use the FTA with Diastat for 8 weeks after this point. The primary endpoint is the frequency of high and low blood-glucose measurements. DISCUSSION The study will investigate the effect of data-driven feedback to patients. Our hypothesis is that this will improve glycemic control and reduce variability. The endpoints are robust indicators that can be assembled with minimal effort by the patient beyond normal routine. TRIAL REGISTRATION Clinicaltrials.gov: NCT01774149.
A Focus+Context Technique Based on Hyperbolic Geometry for Visualizing Large Hierarchies
We present a new focus+context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to lay out the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulations. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.
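A worked sketch can make the mapping concrete. The following Python fragment is a minimal illustration (not the paper's implementation) of how a focus change is commonly realized on the Poincaré disk model: nodes are complex numbers of modulus less than 1, and the rigid hyperbolic translation bringing a chosen point c to the display center is a Möbius transformation.

```python
# Minimal sketch of a Poincare-disk focus change, assuming the layout
# already places nodes as complex numbers with |z| < 1 (an illustration
# of the standard hyperbolic translation, not the paper's code).

def translate_focus(z: complex, c: complex) -> complex:
    """Mobius transformation that moves point c to the disk center."""
    return (z - c) / (1 - c.conjugate() * z)

# Example: the user clicks the node at 0.5+0.3j to focus on it.
layout = [0.0 + 0.0j, 0.5 + 0.3j, -0.7 + 0.1j]
focused = [translate_focus(z, 0.5 + 0.3j) for z in layout]

# Every node stays inside the unit disk, so the context is preserved.
assert all(abs(z) < 1 for z in focused)
```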
Computation of the characteristics of a claw pole alternator through the finite element method
The paper presents the analysis of a 3-D numerical model developed for a claw pole alternator. The complex structure of the claw-pole magnetic circuit required a 3-D FEM model and a double scalar magnetic potential (φ-φred) formulation in order to reduce computing time and memory. The no-load and magnetization characteristics and the e.m.f. time dependence have been calculated. The working characteristics and the voltage induced in the stator winding can be calculated from the 3-D field distribution in the stationary magnetic regime for successive positions of the rotor with respect to the stator.
Test intellectual property trends, test issues and the wealth of owning test in-house
In a recent 1500-mile tour around various automotive companies in mainland Europe, I was amazed to find design engineers who had not heard of JTAG or boundary scan as a test technique. This paper introduces the IEEE 1149.1 standard, how it works, and what you can do with it now, right up to examples of advanced use today. I will present some trends which will force any clear-thinking test department to look at JTAG's benefits again, and reinforce the importance of designing for test.
Superhydrophobic tracks for low-friction, guided transport of water droplets.
anti-fogging, [6] anti-icing, [7] buoyancy [8] and drag reduction. [9] By definition, a surface is superhydrophobic if the contact angle between a water drop and the surface at the solid/liquid/air interface is larger than 150°, and the contact angle hysteresis is small, i.e., drops readily slide or roll off when the surface is tilted slightly. [10-12] Here we explore the feasibility of using superhydrophobicity for guided transport of water droplets. We demonstrate a simple yet efficient approach for droplet transport, in which the droplet is moving on a superhydrophobic surface, using gravity or electrostatic forces as the driving force for droplet transportation and using tracks with vertical walls as gravitational potential barriers to design trajectories. Although the slope of the platform is as small as a few degrees, the drops move at a considerable speed of up to 14 cm s⁻¹, even in highly curved trajectories. We further demonstrate splitting of a droplet using a superhydrophobic knife and drop-size selection using superhydrophobic tracks. These concepts may find applications in droplet microfluidics and lab-on-a-chip systems where single droplets with potential analytes are manipulated. [13-16]
Decreased attentional responsivity during sleep deprivation: orienting response latency, amplitude, and habituation.
Ever increasing societal demands for uninterrupted work are causing unparalleled amounts of sleep deprivation among workers. Sleep deprivation has been linked to safety problems ranging from medical misdiagnosis to industrial and vehicular accidents. Microsleeps (very brief intrusions of sleep into wakefulness) are usually cited as the cause of the performance decrements during sleep deprivation. Changes in a more basic physiological phenomenon, attentional shift, were hypothesized to be additional factors in performance declines. The current study examined the effects of 36 hours of sleep deprivation on the electrodermal-orienting response (OR), a measure of attentional shift or capture. Subjects were 71 male undergraduate students, who were divided into sleep deprivation and control (non-sleep deprivation) groups. The expected negative effects of sleep deprivation on performance were noted in increased reaction times and increased variability in the sleep-deprived group on attention-demanding cognitive tasks. OR latency was found to be significantly delayed after sleep deprivation, OR amplitude was significantly decreased, and habituation of the OR was significantly faster during sleep deprivation. These findings indicate impaired attention, the first revealing slowed shift of attention to novel stimuli, the second indicating decreased attentional allocation to stimuli, and the third revealing more rapid loss of attention to repeated stimuli. These phenomena may be factors in the impaired cognitive performance seen during sleep deprivation.
Sparsity and smoothness via the fused lasso
The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the ‘fused lasso’, a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the ‘hinge’ loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
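In symbols, using the constrained form of the criterion described above (for responses y_i, ordered features x_{ij}, and tuning bounds s_1 and s_2):

```latex
\hat{\beta} \;=\; \arg\min_{\beta} \sum_{i=1}^{N} \Big( y_i - \sum_{j=1}^{p} x_{ij}\,\beta_j \Big)^{2}
\quad \text{subject to} \quad
\sum_{j=1}^{p} \lvert \beta_j \rvert \le s_1,
\qquad
\sum_{j=2}^{p} \lvert \beta_j - \beta_{j-1} \rvert \le s_2 .
```

The first constraint is the usual lasso penalty and yields sparsity of the coefficients; the second bounds successive differences and yields local constancy of the coefficient profile.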
Physiological Stresses Related to Hypercapnia during Patrols on Submarines
Schaefer, K. E. 1979. Physiological stresses related to hypercapnia during patrols on submarines. Undersea Biomed. Res. Sub. Suppl.: S15-S47.—Physiological studies on hypercapnic effects carried out on 13 Polaris patrols are summarized. The average CO2 concentrations ranged from 0.7-1% CO2; CO2 was identified as the only environmental contaminant of the submarine atmosphere that has a direct effect on respiration in the concentration range found in the submarine atmosphere. A comparison has been made of physiological effects produced during 42 days of exposure to 1.5% CO2 during laboratory studies (L.S.) with those observed during 50 to 60 days of exposure to 0.7-1% CO2 on patrols (P.S.). A close similarity was found in the effects on respiration and blood electrolytes under both conditions. Respiratory minute volume was elevated by 40-63% because of increased tidal volume. The physiological dead space increased 60%. Vital capacity showed a trend toward a decrease. Studies of acid-base balance carried out during patrols demonstrated cyclic changes in blood pH and bicarbonate; pH and blood bicarbonate fell during the first 17 days of exposure, rose during the subsequent 20 days, and decreased again after 40 days. These cycles cannot be explained on the basis of known renal regulations in CO2-induced acidosis and were not found during exposure to 1.5% CO2. The hypothesis is advanced that these changes in acid-base balance are caused by cycles in CO2 uptake and release in bones. The time constants of the bone CO2 stores fit the observed length of cycles in acid-base balance. Correlation with cycles of calcium metabolism provides further support for this hypothesis. Red cell electrolytes showed similar changes under 1.5% CO2 (L.S.) and 0.7-1% CO2 (P.S.). Red cell sodium increased and potassium decreased. Moreover, red cell calcium also increased under both conditions. The significance of these red cell electrolyte changes in regard to changes in permeability and active transport remains to be clarified. An increased gastric acidity was found during patrol (exposure to 0.8-0.95% CO2). The changes observed during patrols disappeared during the recovery periods.
Computational Features of the Thinking and the Thinking Attributes of Computing: On Computational Thinking
The paper aims at revealing the essence and connotation of Computational Thinking. It analyzes some of the international academic community's research results on Computational Thinking. The author holds that Computational Thinking is discipline thinking, or a computing philosophy, and that grasping the computational features of thinking and the thinking attributes of computing is critical to understanding Computational Thinking. He presents the basic rules for screening the representative terms of Computational Thinking and lists some representative terms based on those rules. He argues that Computational Thinking is contained in the commonalities of those terms. The typical thoughts of Computational Thinking are structuralization, formalization, association-and-interaction, optimization, and reuse-and-sharing. Training in Computational Thinking must be based on the representative terms and the typical thoughts. There are three innovations in the paper: the five rules for screening the representative terms, the five typical thoughts, and the formalized description of Computational Thinking.
Waveform Coding for Passive Multiplexing: Application to Microwave Imaging
This paper proposes a novel passive technique for the collection of microwave images. A compact component is developed that passively codes and sums the waves received by an antenna array to which it is connected, and produces a unique signal that contains all of the scene information. This technique of passive multiplexing simplifies the microwave reception chains for radar and beamforming systems (whose complexity and cost increase sharply with the number of antennas) and does not require any active elements to achieve beamsteering. The preservation of the waveforms is ensured using orthogonal codes supplied by the propagation through the component's uncorrelated channels. Here we show a multiplexing technique in the physical layer that, besides being compact and passive, is compatible with all ultrawideband antennas, enabling its implementation in various fields.
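A toy discrete-time analogy may clarify why orthogonal codes preserve the waveforms. The Python sketch below (an idealized simulation, not a model of the paper's microwave component, whose codes come from physical propagation channels) multiplexes four signals with Walsh-Hadamard codes into one stream and recovers each by correlation.

```python
import numpy as np

# Toy code-division multiplexing sketch (illustrative assumption: ideal
# orthogonal codes; the paper realizes coding passively through
# uncorrelated propagation channels inside the component).
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])            # 4x4 Walsh-Hadamard code matrix

rng = np.random.default_rng(0)
signals = rng.normal(size=(4, 16))          # 4 antenna signals, 16 samples

# Spread each sample of each signal by its code and sum: one output stream.
multiplexed = np.concatenate([H.T @ signals[:, t] for t in range(16)])

# Receiver: correlate each chip block with the codes to de-multiplex.
chips = multiplexed.reshape(16, 4).T        # (code_length, time)
recovered = (H @ chips) / 4.0               # orthogonality: H @ H.T = 4*I

assert np.allclose(recovered, signals)      # waveforms are preserved
```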
Distributed Clock Parameters Tracking in Wireless Sensor Network
Clock parameters (skew and offset) in sensor networks are inherently time-varying due to imperfect oscillator circuits. This paper develops a distributed Kalman filter for clock-parameter tracking. The proposed algorithm only requires each node to exchange limited information with its direct neighbors; it is therefore energy-efficient, scalable with network size, and robust to changes in network connectivity. A low-complexity distributed algorithm based on Coordinate Descent with Bootstrap (CD-BS) is also proposed to provide rapid initialization for the tracking algorithm. Simulation results show that the proposed distributed tracking algorithm maintains long-term clock-parameter accuracy close to the Bayesian Cramér-Rao lower bound.
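For orientation, the sketch below shows the kind of two-state Kalman filter such a tracker builds on, using the common linear clock model in which the offset grows by the skew between updates. It is a single-node, centralized illustration with assumed noise levels, not the paper's distributed filter.

```python
import numpy as np

# Single-node Kalman tracker for clock offset/skew (illustrative model,
# not the paper's distributed algorithm). State x = [offset, skew].
tau = 1.0                                   # time between measurements (s)
F = np.array([[1.0, tau], [0.0, 1.0]])      # linear clock dynamics
H = np.array([[1.0, 0.0]])                  # only the offset is measured
Q = np.diag([1e-6, 1e-9])                   # assumed process noise
R = np.array([[1e-4]])                      # assumed measurement noise

def kf_step(x, P, z):
    # Predict with the clock dynamics, then correct with measurement z.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2)
true_offset, true_skew = 0.5, 1e-3
for k in range(200):
    z = true_offset + true_skew * k * tau + rng.normal(0, 1e-2)
    x, P = kf_step(x, P, z)
# x is now close to the current true [offset, skew].
```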
Fuzzy Gauge Capability (Cg and Cgk) through Buckley Approach
Abstract—Different aspects of Statistical Process Control (SPC) have been sketched in the fuzzy environment. However, Measurement System Analysis (MSA), a main branch of SPC, has rarely been investigated in the fuzzy domain. This procedure assesses the suitability of the data to be used in later stages or decisions of the SPC. Therefore, this research focuses on some important measures of MSA and introduces these measures in the fuzzy environment through a new method. In this method, which works based on the Buckley approach, the imprecision and vagueness inherent in real-world measurement are considered simultaneously. To do so, fuzzy versions of the gauge capability indices (Cg and Cgk) are introduced. The method is also clearly explained through an example.
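For orientation, the crisp indices being fuzzified are commonly defined as follows (one widespread convention; the constants vary between MSA references), with T the tolerance width, σ_g and x̄_g the standard deviation and mean of repeated gauge readings, and x_m the reference value:

```latex
C_g = \frac{0.2\,T}{6\,\sigma_g},
\qquad
C_{gk} = \frac{0.1\,T - \lvert \bar{x}_g - x_m \rvert}{3\,\sigma_g} .
```

In a Buckley-style fuzzification, quantities such as σ_g, x̄_g, and T become fuzzy numbers, and expressions like these are evaluated over their α-cuts.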
First report of human parechovirus type 3 infection in a pregnant woman.
Human parechovirus type 3 (HPeV3) can cause serious conditions in neonates, such as sepsis and encephalitis, but data for adults are lacking. The case of a pregnant woman with HPeV3 infection is reported herein. A 28-year-old woman at 36 weeks of pregnancy was admitted because of myalgia and muscle weakness. Her grip strength was 6.0 kg for her right hand and 2.5 kg for her left hand. The patient's symptoms, probably due to fasciitis and not myositis, improved gradually with conservative treatment; however, labor pains with genital bleeding developed unexpectedly 3 days after admission. An obstetric consultation was obtained and a cesarean section was performed, with no complications. A real-time PCR assay for the detection of viral genomic ribonucleic acid against HPeV showed positive results for pharyngeal swabs, feces, and blood, and negative results for the placenta, umbilical cord, umbilical cord blood, amniotic fluid, and breast milk. The HPeV3 was genotyped by sequencing of the VP1 region. The woman made a full recovery and was discharged with her infant in a stable condition.
On Abstract Intelligence: Toward a Unifying Theory of Natural, Artificial, Machinable, and Computational Intelligence
Abstract intelligence is a human enquiry into both natural and artificial intelligence at the reductive embodying levels of neural, cognitive, functional, and logical, from the bottom up. This paper describes the taxonomy and nature of intelligence. It analyzes the roles of information in the evolution of human intelligence and the need for logical abstraction in modeling the brain and natural intelligence. A formal model of intelligence is developed, known as the Generic Abstract Intelligence Model (GAIM), which provides a foundation to explain the mechanisms of advanced natural intelligence such as thinking, learning, and inference. A measurement framework for the intelligent capability of humans and systems is comparatively studied in the forms of intelligent quotient, intelligent equivalence, and intelligent metrics. On the basis of the GAIM model and the abstract intelligence theories, the compatibility of natural and machine intelligence is revealed, in order to investigate a wide range of paradigms of abstract intelligence such as natural, artificial, and machinable intelligence, and their engineering applications.
Multiresonator-Based Chipless RFID System for Low-Cost Item Tracking
A fully passive printable chipless RFID system is presented. The chipless tag uses the amplitude and phase of the spectral signature of a multiresonator circuit and provides 1:1 correspondence of data bits. The tag comprises a microstrip spiral multiresonator and cross-polarized transmitting and receiving microstrip ultra-wideband disc-loaded monopole antennas. The reader antenna is a log-periodic dipole antenna with an average gain of 5.5 dBi. First, a 6-bit chipless tag is designed to encode the IDs 000000 and 010101. Finally, a 35-bit chipless tag based on the same principle is presented. The tag has potential for low-cost item tagging, such as banknotes and secured documents.
Computer-Based Examples Designed to Encourage Optimal Example Processing: A Study Examining the Impact of Sequentially Presented, Subgoal-Oriented Worked Examples
This study was designed to examine the effectiveness of a specific type of computer-based worked example, one designed to encourage students to study the example in an optimal fashion by: (1) incorporating visually isolated and labeled subgoals, a structural manipulation that appears to enhance the way in which students study examples; as well as (2) presenting problem states sequentially, a manipulation that appears to have the potential to accomplish the same goal. The study also examined the effects of having examples present or absent during practice problem solving. Findings indicated that sequentially-presented examples with clearly isolated subgoals produce better conceptual performance than do examples in which solutions are presented all at once without strong subgoal emphasis. It is still unclear whether examples should be present or withdrawn during practice problem solving.
Actions Speak Louder Than (Pass)words: Passive Authentication of Smartphone Users via Deep Temporal Features
Prevailing user authentication schemes on smartphones rely on explicit user interaction, where a user types in a passcode or presents a biometric cue such as face, fingerprint, or iris. In addition to being cumbersome and obtrusive to users, such authentication mechanisms pose security and privacy concerns. Passive authentication systems can tackle these challenges by frequently and unobtrusively monitoring the user's interaction with the device. In this paper, we propose a Siamese Long Short-Term Memory network architecture for passive authentication, where users can be verified without requiring any explicit authentication step. We acquired a dataset comprising measurements from 30 smartphone sensor modalities for 37 users. We evaluate our approach on 8 dominant modalities, namely, keystroke dynamics, GPS location, accelerometer, gyroscope, magnetometer, linear accelerometer, gravity, and rotation sensors. Experimental results show that, within 3 seconds, a genuine user can be correctly verified 97.15% of the time at a false accept rate of 0.1%.
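To make the architecture concrete, here is a minimal PyTorch sketch of a Siamese LSTM verifier of the kind described: two sensor-feature windows pass through a shared LSTM encoder, and the distance between their embeddings scores whether they come from the same user. Layer sizes, the feature dimension, and the distance choice are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    """Shared LSTM encoder; verification by embedding distance."""
    def __init__(self, n_features=8, hidden=64, embed=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, embed)

    def encode(self, x):                   # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)           # final hidden state
        return self.head(h[-1])            # (batch, embed)

    def forward(self, x1, x2):
        e1, e2 = self.encode(x1), self.encode(x2)
        return torch.norm(e1 - e2, dim=1)  # small distance => same user

# Toy usage: short sensor windows, 30 time steps of 8 modality features.
model = SiameseLSTM()
enrolled = torch.randn(4, 30, 8)   # reference windows for the enrolled user
probe = torch.randn(4, 30, 8)      # windows observed at verification time
scores = model(enrolled, probe)    # threshold these to accept or reject
```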
Lung Nodule Classification With Multilevel Patch-Based Context Analysis
In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low-dose computed tomography scans. The proposed method is based on contextual analysis combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct a concentric multilevel partition; a new feature set is then designed to incorporate intensity, texture, and gradient information for image patch feature description; and finally a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.
Robot dynamics and control
This chapter presents an introduction to the dynamics and control of robot manipulators. We derive the equations of motion for a general open-chain manipulator and, using the structure present in the dynamics, construct control laws for asymptotic tracking of a desired trajectory. In deriving the dynamics, we will make explicit use of twists for representing the kinematics of the manipulator and explore the role that the kinematics play in the equations of motion. We assume some familiarity with the dynamics and control of physical systems.
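The equations of motion referred to here take the standard open-chain manipulator form, and the computed-torque law below is a textbook example of the tracking controllers this structure enables (consistent in spirit with, though not necessarily identical to, the chapter's derivation). Here M is the inertia matrix, C the Coriolis matrix, N the gravity and friction terms, and e = q_d - q the tracking error:

```latex
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + N(q,\dot{q}) = \tau,
\qquad
\tau = M(q)\big(\ddot{q}_d + K_v\,\dot{e} + K_p\,e\big) + C(q,\dot{q})\,\dot{q} + N(q,\dot{q}) .
```

Substituting the control law into the dynamics gives the linear error equation ë + K_v ė + K_p e = 0, so the tracking error decays asymptotically for positive-definite gains K_p and K_v.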
Spatial Transformer Introspective Neural Network
Natural images contain many variations such as illumination differences, affine transformations, and shape distortions. Correctly classifying these variations poses a long-standing problem. The most commonly adopted solution is to build large-scale datasets that contain objects under different variations. However, this approach is not ideal, since it is computationally expensive and it is hard to cover all variations in any single dataset. Towards addressing this difficulty, we propose the spatial transformer introspective neural network (ST-INN), which explicitly generates samples with the affine transformation variations unseen in the training set. Experimental results indicate that ST-INN achieves classification accuracy improvements on several benchmark datasets, including MNIST, affNIST, SVHN, and CIFAR-10. We further extend our method to cross-dataset classification tasks and few-shot learning problems to verify our method under extreme conditions, and observe substantial improvements in the experimental results.
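The affine variations such a network generates would act through the standard spatial-transformer sampling grid (Jaderberg et al.), which maps output (target) pixel coordinates to input (source) coordinates via six pose parameters θ:

```latex
\begin{pmatrix} x^{s} \\ y^{s} \end{pmatrix}
=
\begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix}
\begin{pmatrix} x^{t} \\ y^{t} \\ 1 \end{pmatrix} .
```

Bilinear sampling at the coordinates (x^s, y^s) keeps the module differentiable, so the transformation parameters can be learned end to end.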
Opioid rotation in patients with cancer pain. A retrospective comparison of dose ratios between methadone, hydromorphone, and morphine.
BACKGROUND When a change of opioid is considered, equianalgesic dose tables are used. These tables generally propose a dose ratio of 5:1 between morphine and hydromorphone. In the case of a change from subcutaneous hydromorphone to methadone, dose ratios ranging from 1:6 to 1:10 are proposed. The purpose of this study was to review the analgesic dose ratios for methadone compared with hydromorphone. METHODS In a retrospective study, 48 cases of medication changes from morphine to hydromorphone, and 65 changes between hydromorphone and methadone, were identified. The reason for the change, the analgesic dose, and pain intensity were obtained. RESULTS The dose ratios between morphine and hydromorphone and vice versa were found to be 5.33 and 0.28, respectively (similar to expected results). However, the hydromorphone/methadone ratio was found to be 1.14:1 (5 to 10 times higher than expected). Although the dose ratios of hydromorphone/morphine and vice versa did not change according to the previous opioid dose, the hydromorphone/methadone ratio correlated with total opioid dose (correlation coefficient = 0.41, P < 0.001) and was 1.6 (range, 0.3-14.4) in patients receiving more than 330 mg of hydromorphone per day prior to the change, versus 0.95 (range, 0.2-12.3) in patients receiving ≤330 mg of hydromorphone per day (P = 0.023). CONCLUSIONS These results suggest that only partial tolerance develops between methadone and hydromorphone. Methadone is much more potent than previously described, and any change should start at a lower equivalent dose.
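To see what the reported ratios imply in practice, a purely illustrative back-of-the-envelope conversion (using the study's average ratios; not dosing guidance) runs:

```latex
300~\tfrac{\text{mg morphine}}{\text{day}}
\;\xrightarrow{\;\div\,5.33\;}\;
\approx 56~\tfrac{\text{mg hydromorphone}}{\text{day}}
\;\xrightarrow{\;\div\,1.14\;}\;
\approx 49~\tfrac{\text{mg methadone}}{\text{day}} .
```

A table ratio of 1:6 to 1:10 would instead suggest several hundred milligrams of methadone per day for the same patient, which is the discrepancy behind the authors' caution to start any rotation at a lower equivalent dose.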
Action recognition from a distributed representation of pose and appearance
We present a distributed representation of pose and appearance of people called the “poselet activation vector”. First we show that this representation can be used to estimate the pose of people defined by the 3D orientations of the head and torso in the challenging PASCAL VOC 2010 person detection dataset. Our method is robust to clutter, aspect and viewpoint variation and works even when body parts like faces and limbs are occluded or hard to localize. We combine this representation with other sources of information like interaction with objects and other people in the image and use it for action recognition. We report competitive results on the PASCAL VOC 2010 static image action classification challenge.
Experimental comparison of representation methods and distance measures for time series data
The previous decade has brought a remarkable increase of the interest in applications that deal with querying and mining of time series data. Many of the research efforts in this context have focused on introducing new representation methods for dimensionality reduction or novel similarity measures for the underlying data. In the vast majority of cases, each individual work introducing a particular method has made specific claims and, aside from the occasional theoretical justifications, provided quantitative experimental observations. However, for the most part, the comparative aspects of these experiments were too narrowly focused on demonstrating the benefits of the proposed methods over some of the previously introduced ones. In order to provide a comprehensive validation, we conducted an extensive experimental study re-implementing eight different time series representations and nine similarity measures and their variants, and testing their effectiveness on 38 time series data sets from a wide variety of application domains. In this article, we give an overview of these different techniques and present our comparative experimental findings regarding their effectiveness. In addition to providing a unified validation of some of the existing achievements, our experiments also indicate that, in some cases, certain claims in the literature may be unduly optimistic.
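As a concrete instance of the elastic similarity measures typically included in such comparisons, the sketch below gives the classic dynamic-programming form of dynamic time warping (DTW); the variants actually benchmarked in the article may differ in windowing and normalization.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

# DTW tolerates local time shifts that inflate the Euclidean distance.
x = np.sin(np.linspace(0, 2 * np.pi, 50))
y = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.3)   # phase-shifted copy
print(dtw(x, y), np.linalg.norm(x - y))
```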
An Efficient Cross-lingual Model for Sentence Classification Using Convolutional Neural Network
In this paper, we propose a cross-lingual convolutional neural network (CNN) model that is based on word and phrase embeddings learned from unlabeled data in two languages and on dependency grammar. Compared to traditional machine translation (MT) based methods for cross-lingual sentence modeling, our model is much simpler and does not need parallel corpora or language-specific features. We only use a bilingual dictionary and a dependency parser. This makes our model particularly appealing for resource-poor languages. We evaluate our model using English and Chinese data on several sentence classification tasks. We show that our model achieves comparable and even better performance than the traditional MT-based method.
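A minimal sketch of the kind of convolution-over-embeddings sentence classifier involved is given below; the bilingual embedding training and the dependency-based phrase handling are the paper's own contributions and are not reproduced here, so treat the layer shapes as illustrative assumptions.

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Convolution over word embeddings with max-over-time pooling."""
    def __init__(self, vocab=10000, dim=100, n_filters=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        e = self.emb(tokens).transpose(1, 2)   # (batch, dim, seq_len)
        h = torch.relu(self.conv(e))
        h = h.max(dim=2).values                # max-over-time pooling
        return self.fc(h)

# In a cross-lingual setting, self.emb would be loaded with shared
# bilingual embeddings, so one trained model can score sentences in
# either language without parallel corpora.
logits = SentenceCNN()(torch.randint(0, 10000, (8, 20)))
```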
Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm.
Using simulated falls performed under supervised conditions and activities of daily living (ADL) performed by elderly subjects, the ability to discriminate between falls and ADL was investigated using tri-axial accelerometer sensors mounted on the trunk and thigh. Data analysis was performed using MATLAB to determine the peak accelerations recorded during eight different types of falls: forward falls, backward falls, and lateral falls left and right, performed with legs straight and flexed. Fall-detection algorithms were devised using thresholding techniques. Falls could be distinguished from ADL for a total data set of 480 movements. This was accomplished using a single threshold, determined from the fall-event data set, applied to the resultant-magnitude acceleration signal from a tri-axial accelerometer located at the trunk.
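The thresholding scheme lends itself to a compact illustration. The sketch below computes the resultant-magnitude acceleration from a tri-axial signal and flags a fall when it exceeds a single threshold, as in the trunk-sensor algorithm described; the threshold value here is a placeholder, not the one derived from the study's fall-event data.

```python
import numpy as np

def detect_fall(ax, ay, az, threshold_g=2.5):
    """Flag samples where the resultant acceleration exceeds a threshold.

    ax, ay, az: tri-axial accelerometer samples in units of g.
    threshold_g: placeholder threshold (the study derived its value
    from the fall-event data set).
    """
    resultant = np.sqrt(ax**2 + ay**2 + az**2)
    return resultant > threshold_g

# Toy usage: quiet standing (~1 g) with one impact-like spike.
ax = np.array([0.0, 0.1, 2.8, 0.0])
ay = np.array([0.0, 0.0, 1.5, 0.1])
az = np.array([1.0, 1.0, 1.9, 1.0])
print(detect_fall(ax, ay, az))   # only the spike sample is flagged
```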
Geodesic Matting: A Framework for Fast Interactive Image and Video Segmentation and Matting
An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.
Erasing Lane Changes From Roads: A Design of Future Road Intersections
Imagine in the future that autonomous vehicles, coordinated and guided by signal-free autonomous intersections, are able to pass through an intersection immediately after the vehicles in the conflicting direction leave. Meanwhile, with the coordination of the autonomous intersections, autonomous vehicles on any approaching lane are able to turn onto any downstream lane. Driving in such a road system, autonomous vehicles could reach their destinations without any on-road lane changes, and high traffic efficiency would be achieved along with great traffic safety. To draw this picture in detail, this paper designs a signal-free autonomous intersection with all-direction turn lanes (ADTL) under an environment of autonomous vehicles, and proposes a conflict-avoidance-based approach to coordinate all approaching vehicles in different directions. Communicating with the approaching autonomous vehicles and utilizing this approach, the autonomous ADTL intersection is able to coordinate the approaching vehicles in all directions and guide them to pass through the intersection safely and efficiently. Two simulation scenarios are conducted in a road network with an isolated intersection composed of four three-lane arms. One scenario validates the collision-free design of the system, and the other shows that the designed ADTL intersection outperforms the conventional signal-controlled intersection in terms of traffic efficiency, and is potentially better than an autonomous intersection with specific-direction turn lanes. The autonomous ADTL intersection can be an important basis for designing a future autonomous urban road traffic system.
A Celebration of the Life and Work of Caroline Breese Hall, MD.
A symposium to celebrate the life and work of Dr. Caroline Breese Hall (or Caren as she was known to thousands of her colleagues, students, mentees, and friends) was held at the University of Rochester School of Medicine & Dentistry on April 25, 2014. The symposium also served as an opening event to announce the establishment of the Caroline Breese Hall, MD Endowment for Infectious Diseases at the University of Rochester Medical Center, which will provide salary support for a promising fellow or junior faculty member at the University working in Infectious Diseases. The endowment was established as a gift from the Hall family. Caren was remembered for her incredible energy and warmth and her formidable intellect and creativity. Her contributions to clinical research and teaching greatly improved the lives of children, medical students, residents, fellows, and colleagues alike. Many current and former trainees and colleagues came together at the symposium to meet her husband, Dr. William J. Hall, her 3 children, and several of her grandchildren, and to listen to presentations from those who worked or trained with Caren over her 4-decade career at Rochester. A native of Brighton, New York (and daughter of eminent pediatrician Burtis Burr Breese, MD, himself a pioneer in office-based clinical research and the development of the office throat culture for streptococci [1–7]), Caroline Breese Hall, MD earned a bachelor’s degree in chemistry from Wellesley College and a medical degree from the University of Rochester School of Medicine & Dentistry. She completed a residency in Pediatrics and a fellowship in Infectious Diseases at Yale University. Along with her husband Dr. William Hall, Dr. Caroline Breese Hall joined the faculty at the University of Rochester Medical Center (URMC) in 1971, with appointments in both Pediatrics and Internal Medicine. She was appointed Professor of Pediatrics and Medicine in 1986. Dr. Hall’s research focused on pediatric clinical virology—especially the natural history of infections caused by respiratory syncytial virus (RSV) and human herpes viruses 6 and 7 (HHV6 and HHV7). Early in her career, she carried out studies that defined the diagnosis, epidemiology, transmission, and therapy of RSV bronchiolitis in children [8–20]. Later, when HHV6 was identified as the cause of roseola, she initiated studies that defined the clinical spectrum of HHV6 infection, and she attempted to understand the relationship between chromosomal integration and vertical transmission of the virus [21–26]. At the same time, few pediatric pathogens escaped her focus and interest, and Dr. Hall contributed and collaborated on many works concerning group A streptococci, parainfluenza and influenza viruses, coronaviruses, rhinoviruses, human metapneumovirus, rotaviruses, and noroviruses [27–38]. Caren Hall was a major contributor to the discipline of pediatric infectious diseases, as teacher, mentor, researcher, and counselor, and she published approximately 300 articles in the scientific literature and 130 textbook chapters—many of which were graced by her own original poetry, which spanned verses on life, odes to colleagues, and humorous microbial limericks. She was a founding member of the Pediatric Infectious Diseases Society (PIDS), its fifth president and Society Historian, and she also served for many years on the American Academy of Pediatrics (AAP) Committee on Infectious Diseases (Red Book Committee) and with the Centers for Disease Control and Prevention.
Phase I Clinical Trial of MPC-6827 (Azixa), a Microtubule-Destabilizing Agent, in Patients with Advanced Cancer
MPC-6827 (Azixa) is a small-molecule microtubule-destabilizing agent that binds to the same (or nearby) sites on β-tubulin as colchicine. This phase I study was designed to determine the dose-limiting toxicities (DLT), maximum tolerated dose (MTD), and pharmacokinetics (PK) of MPC-6827 in patients with solid tumors. Patients with advanced/metastatic cancer were treated with once-weekly, 1- to 2-hour intravenous administration of MPC-6827 for 3 consecutive weeks every 28 days (1 cycle). Dose escalation began with 0.3, 0.6, 1, and 1.5 mg/m², with subsequent increments of 0.6 mg/m² until the MTD was determined. A 3 + 3 design was used. Pharmacokinetics of MPC-6827 and its metabolite MPI-0440627 were evaluated. Forty-eight patients received therapy; 79 cycles were completed (median, 1; range, 1-10). The most common adverse events were nausea, fatigue, flushing, and hyperglycemia. The DLT was nonfatal grade 3 myocardial infarction at 3.9 mg/m² (1/6 patients) and at 4.5 mg/m² (1/7 patients). The MTD was determined to be 3.3 mg/m² (0/13 patients had a DLT). Five (10.4%) of the 48 patients achieved stable disease (Response Evaluation Criteria in Solid Tumors) for 4 months or greater. MPC-6827 has a high volume of distribution and clearance. Half-life ranged from 3.8 to 7.5 hours. In conclusion, MPC-6827 administered intravenously over 2 hours at a dose of 3.3 mg/m² once weekly for 3 weeks every 28 days was safe in patients with heavily pretreated cancer. Clinical trials with MPC-6827 and chemotherapy are ongoing. Mol Cancer Ther; 9(12); 3410-9. ©2010 AACR.
Quality of life among people living with HIV/AIDS in northern Thailand: MOS-HIV Health Survey
Objectives: Translation and psychometric evaluation of a Thai version of the Medical Outcomes Study HIV Health Survey (MOS-HIV) in Thailand. Methods: A cross-sectional survey in Chiang Mai province, northern Thailand, with data collected in face-to-face interviews using a structured questionnaire designed to measure 10 scales of quality of life (QOL). We recruited 200 people with HIV/AIDS attending self-help groups in the municipal area. Standard guidelines were followed for questionnaire translation and psychometric evaluations. Results: Item-level internal consistency and discriminant validity were reasonably established. Success rates were 93.8 and 97.4%, respectively. Scale-level internal consistency reliability of multi-item scales was satisfactory, ranging from 0.74 to 0.88, with all exceeding inter-scale correlations. Principal components analysis of item and scale scores identified two hypothesized dimensions of the MOS-HIV. The mental health component was strongly loaded by health distress, mental health, vitality and cognitive function scales, and physical health by role, physical and social functions, and pain scales. Respondents manifesting symptoms or reporting worsening health status scored significantly lower on all scales. Conclusions: These preliminary studies have shown the Thai version of the MOS-HIV to have psychometric properties comparable with those reported in previous surveys. Further testing and modification should make it useful as an HIV-specific QOL measure in Thailand.
The Effect of Graphical Representation on the Learner's Learning Interest and Achievement in Multimedia Learning.
Effect of vitamin E supplementation on HDL function by haptoglobin genotype in type 1 diabetes: results from the HapE randomized crossover pilot trial
Haptoglobin (Hp) genotype 2-2 increases cardiovascular diabetes complications. In type 2 diabetes, α-tocopherol was shown to lower cardiovascular risk in Hp 2-2, potentially through HDL function improvements. Similar type 1 diabetes data are lacking. We conducted a randomized crossover pilot of α-tocopherol supplementation on HDL function [i.e., cholesterol efflux (CE) and HDL-associated lipid peroxides (LP)] and lipoprotein subfractions in type 1 diabetes. Hp genotype was assessed in members of two Allegheny County, PA, type 1 diabetes registries and the CACTI cohort; 30 were randomly selected within Hp genotype, and 28 Hp 1-1, 31 Hp 2-1 and 30 Hp 2-2 were allocated to daily α-tocopherol or placebo for 8 weeks with a 4-week washout. Baseline CE decreased with the number of Hp 2 alleles (p-trend = 0.003). There were no differences in LP or lipoprotein subfractions. In intention-to-treat analysis stratified by Hp, α-tocopherol increased CE in Hp 2-2 (β = 0.79, p = 0.03) and LP in Hp 1 allele carriers (β Hp 1-1 = 0.18, p = 0.05; β Hp 2-1 = 0.21, p = 0.07); reduced HDL particle size (β = −0.07, p = 0.03) in Hp 1-1 carriers; increased LDL particle concentration in Hp 1-1; and decreased it in Hp 2-2 carriers. However, no significant interactions were observed by Hp. In this type 1 diabetes study, HDL function worsened with the number of Hp 2 alleles. α-Tocopherol improved HDL function in Hp 2-2 carriers and appeared to adversely affect lipid peroxides and lipoprotein subfractions among Hp 1 allele carriers. As no significant interactions were observed, findings require replication in larger studies.
Who Uses Mobile Phone Health Apps and Does Use Matter? A Secondary Data Analytics Approach
BACKGROUND Mobile phone use and the adoption of healthy lifestyle software apps ("health apps") are rapidly proliferating. There is limited information on the users of health apps in terms of their social demographic and health characteristics, intentions to change, and actual health behaviors. OBJECTIVE The objectives of our study were (1) to describe the sociodemographic characteristics associated with health app use in a recent US nationally representative sample; (2) to assess the attitudinal and behavioral predictors of the use of health apps for health promotion; and (3) to examine the association between the use of health-related apps and meeting the recommended guidelines for fruit and vegetable intake and physical activity. METHODS Data on users of mobile devices and health apps were analyzed from the National Cancer Institute's 2015 Health Information National Trends Survey (HINTS), which was designed to provide nationally representative estimates for health information in the United States and is publicly available on the Internet. We used multivariable logistic regression models to assess sociodemographic predictors of mobile device and health app use and examine the associations between app use, intentions to change behavior, and actual behavioral change for fruit and vegetable consumption, physical activity, and weight loss. RESULTS From the 3677 total HINTS respondents, older individuals (45-64 years: OR 0.56, 95% CI 0.47-0.68; 65+ years: OR 0.19, 95% CI 0.14-0.24), males (OR 0.80, 95% CI 0.66-0.94), and those with less than a high school education (OR 0.43, 95% CI 0.24-0.72) were all significantly less likely to have adopted health apps, whereas having a college degree (OR 2.83, 95% CI 2.18-3.70) was associated with a greater likelihood of adoption. Similarly, both age and education were significant variables for predicting whether a person had adopted a mobile device, especially if that person was a college graduate (OR 3.30). Individuals with apps were significantly more likely to report intentions to improve fruit (63.8% with apps vs 58.5% without apps, P=.01) and vegetable (74.9% vs 64.3%, P<.01) consumption, physical activity (83.0% vs 65.4%, P<.01), and weight loss (83.4% vs 71.8%, P<.01). Individuals with apps were also more likely to meet recommendations for physical activity compared with those without a device or health apps (56.2% with apps vs 47.8% without apps, P<.01). CONCLUSIONS The main users of health apps were individuals who were younger, had more education, reported excellent health, and had a higher income. Although differences persist for gender, age, and educational attainment, many individual sociodemographic factors are becoming less potent in influencing engagement with mobile devices and health app use. App use was associated with intentions to change diet and physical activity and with meeting physical activity recommendations.
What Shape Are Dolphins? Building 3D Morphable Models from 2D Images
3D morphable models are low-dimensional parameterizations of 3D object classes which provide a powerful means of associating 3D geometry to 2D images. However, morphable models are currently generated from 3D scans, so for general object classes such as animals they are economically and practically infeasible. We show that, given a small amount of user interaction (little more than that required to build a conventional morphable model), there is enough information in a collection of 2D pictures of certain object classes to generate a full 3D morphable model, even in the absence of surface texture. The key restriction is that the object class should not be strongly articulated, and that a very rough rigid model should be provided as an initial estimate of the “mean shape.” The model representation is a linear combination of subdivision surfaces, which we fit to image silhouettes and any identifiable key points using a novel combined continuous-discrete optimization strategy. Results are demonstrated on several natural object classes, and show that models of rather high quality can be obtained from this limited information.
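The "linear combination of subdivision surfaces" can be written compactly; with V̄ the mean control mesh and V_k the deformation basis learned from the image collection (symbols ours, for illustration):

```latex
V(\alpha) = \bar{V} + \sum_{k=1}^{K} \alpha_k\, V_k .
```

Fitting then amounts to optimizing the coefficients α, together with per-image pose, so that the subdivided surface projects onto the image silhouettes and identified key points.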
Metformin does not reduce markers of cell proliferation in esophageal tissues of patients with Barrett's esophagus.
BACKGROUND & AIMS Obesity is associated with neoplasia, possibly via insulin-mediated cell pathways that affect cell proliferation. Metformin has been proposed to protect against obesity-associated cancers by decreasing serum insulin. We conducted a randomized, double-blind, placebo-controlled, phase 2 study of patients with Barrett's esophagus (BE) to assess the effect of metformin on phosphorylated S6 kinase (pS6K1), a biomarker of insulin pathway activation. METHODS Seventy-four subjects with BE (mean age, 58.7 years; 58 men [78%]; 52 with BE >2 cm [70%]) were recruited through 8 participating organizations of the Cancer Prevention Network. Participants were randomly assigned to groups given metformin daily (increasing to 2000 mg/day by week 4, n = 38) or placebo (n = 36) for 12 weeks. Biopsy specimens were collected at baseline and at week 12 via esophagogastroduodenoscopy. We calculated and compared percent changes in median levels of pS6K1 between subjects given metformin vs placebo as the primary end point. RESULTS The percent change in median level of pS6K1 did not differ significantly between groups (1.4% among subjects given metformin vs -14.7% among subjects given placebo; 1-sided P = .80). Metformin was associated with an almost significant reduction in serum levels of insulin (median -4.7% among subjects given metformin vs 23.6% increase among those given placebo, P = .08) as well as in homeostatic model assessments of insulin resistance (median -7.2% among subjects given metformin vs 38% increase among those given placebo, P = .06). Metformin had no effects on cell proliferation (on the basis of assays for Ki67) or apoptosis (on the basis of levels of caspase 3). CONCLUSIONS In a chemoprevention trial of patients with BE, daily administration of metformin for 12 weeks, compared with placebo, did not cause major reductions in esophageal levels of pS6K1. Although metformin reduced serum levels of insulin and insulin resistance, it did not discernibly alter epithelial proliferation or apoptosis in esophageal tissues. These findings do not support metformin as a chemopreventive agent for BE-associated carcinogenesis. ClinicalTrials.gov number, NCT01447927.
Solid-State Transformers: On the Origins and Evolution of Key Concepts
During the past two decades, solid-state transformers (SSTs) have evolved quickly and have been considered for replacing conventional low-frequency (LF) transformers in applications such as traction, where weight and volume savings and substantial efficiency improvements can be achieved, or in smart grids because of their controllability. As shown in this article, all main modern SST topologies realize the common key characteristics of these transformers (a medium-frequency (MF) isolation stage, connection to medium voltage (MV), and controllability) by employing combinations of a very few key concepts, which have been described or patented as early as the 1960s. But still, key research challenges concerning protection, isolation, and reliability remain.
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.
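For reference, the GRU referred to here computes the following (bias terms omitted), with update gate z_t, reset gate r_t, candidate state h̃_t, and ⊙ denoting elementwise multiplication:

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}),\\
r_t &= \sigma(W_r x_t + U_r h_{t-1}),\\
\tilde{h}_t &= \tanh\!\big(W x_t + U (r_t \odot h_{t-1})\big),\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t .
\end{aligned}
```

Like the LSTM's gates, z_t and r_t let gradients flow through long time spans, which is the shared mechanism behind the advantage over plain tanh units reported here.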
Monkey Algorithm for Global Numerical Optimization
In this paper, the monkey algorithm (MA) is designed to solve global numerical optimization problems with continuous variables. The algorithm mainly consists of a climb process, a watch-jump process, and a somersault process, in which the climb process is employed to search for a local optimal solution, the watch-jump process to look for points whose objective values exceed those of the current solutions so as to accelerate the monkeys' search, and the somersault process to make the monkeys transfer to new search domains rapidly. The proposed algorithm is applied to effectively solve benchmark problems of global optimization with 30, 1000, or even 10000 dimensions. The computational results show that the MA can find optimal or near-optimal solutions to problems with large dimensions and very large numbers of local optima.
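A schematic Python rendering of the three processes may help fix ideas; the step sizes, eyesight range, somersault interval, and pseudo-gradient construction below are simplified placeholders rather than the paper's exact parameter settings.

```python
import numpy as np

def monkey_algorithm(f, dim, n_monkeys=5, iters=200,
                     step=0.01, eyesight=0.5, somersault=(-1.0, 1.0)):
    """Schematic monkey algorithm for minimizing f over R^dim."""
    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, size=(n_monkeys, dim))
    for _ in range(iters):
        for i in range(n_monkeys):
            # Climb: move against a pseudo-gradient from random probes.
            d = rng.choice([-1.0, 1.0], size=dim)
            g = (f(X[i] + step * d) - f(X[i] - step * d)) / (2 * step) * d
            if f(X[i] - step * np.sign(g)) < f(X[i]):
                X[i] -= step * np.sign(g)
            # Watch-jump: look around within eyesight for a better point.
            y = X[i] + rng.uniform(-eyesight, eyesight, size=dim)
            if f(y) < f(X[i]):
                X[i] = y
        # Somersault: pivot around the barycentre to reach new regions.
        pivot = X.mean(axis=0)
        alpha = rng.uniform(*somersault, size=(n_monkeys, 1))
        X = X + alpha * (pivot - X)
    best = min(range(n_monkeys), key=lambda i: f(X[i]))
    return X[best], f(X[best])

x, fx = monkey_algorithm(lambda v: np.sum(v**2), dim=10)
```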
From simulation to experimentable digital twins: Simulation-based development and operation of complex technical systems
Way beyond its industrial roots, robotics has evolved into a highly interdisciplinary field with a variety of applications in a smart world. The eRobotics methodology addresses this evolution by providing platforms where roboticists can exchange ideas and collaborate with experts from other disciplines in developing complex technical systems and automated solutions. Virtual Testbeds are the central method in eRobotics, where complex technical systems and their interaction with prospective working environments are first designed, programmed, controlled and optimized in simulation before commissioning of the real system. On the other hand, Industry 4.0 concepts promote the notion of "Digital Twins", virtual substitutes of real-world objects consisting of virtual representations and communication capabilities, making up smart objects acting as intelligent nodes inside the Internet of Things and Services. Combining these two approaches, Virtual Testbeds and Digital Twins, leads to a new kind of "Experimentable Digital Twin" breaking new ground in the simulation-based development and operation of complex technical systems. In this contribution, we describe how such Experimentable Digital Twins can act as the very core of simulation-based development processes, streamlining development, enabling detailed simulations at the system level, and realizing intelligent systems. Besides this, the multiple use of models and simulations in various scenarios significantly reduces the effort required for the use of simulation technology throughout the life cycle of complex technical systems.
High-Performance Transformerless Online UPS
In this paper, a high-performance single-phase transformerless online uninterruptible power supply (UPS) is proposed. The proposed UPS is composed of a four-leg-type converter, which operates as a rectifier, a battery charger/discharger, and an inverter. The rectifier has the capability of power-factor correction and regulates a constant dc-link voltage. The battery charger/discharger eliminates the need for a transformer and for an increased number of batteries, and supplies the power demanded by the load to the dc-link capacitor in the event of an input-power failure or an abrupt decrease of the input voltage. The inverter provides a regulated sinusoidal output voltage to the load and limits the output current under an impulsive load. The control of the dc-link voltage enhances the transient response of the output voltage and the utilization of the input power. By utilizing the battery charger/discharger, the overall efficiency of the system is improved, and the size, weight, and cost of the system are significantly reduced. Experimental results obtained with a 3-kVA prototype show an efficiency in normal mode of over 95.6% and an input power factor of over 99.7%.
Customer-Centric Strategic Planning: Integrating CRM in Online Business Systems
Customer Relationship Management (CRM) is increasingly found at the top of corporate agendas. Online companies in particular are embracing CRM as a major element of corporate strategy, because online technological applications permit a precise segmentation, profiling and targeting of customers, and the competitive pressures of digital markets require a customer-centric corporate culture. The implementation of CRM systems in an online organisation determines a complex restructuring of all organisational elements and processes. The strategic planning process will have to adapt to new customer-centric procedures. The present paper analyses the implementation process of a CRM system in online retail businesses and develops a model of the strategic planning function in a customer-centric context.
Ground-Shielded Dual-Type High-Voltage Electrode for Corona Charging Applications
Corona discharge generated by various electrode arrangements is commonly employed for several electrostatic applications, such as charging nonwoven fabrics for air filters and insulating granules in electrostatic separators. The aim of this paper is to analyze the effects of the presence of a grounded metallic shield in the proximity of a high-voltage corona electrode facing a grounded plate electrode. The metallic shield was found to increase the current intensity and decrease the inception voltage of the corona discharge generated by this electrode arrangement, both in the absence and in the presence of a layer of insulating particles at the surface of the plate electrode. With the shield, the current density measured at the surface of the collecting electrode is higher and distributed on a larger area. As a consequence, the charge acquired by millimeter-sized HDPE particles forming a monolayer at the surface of the grounded plate electrode is twice as high as in the absence of the shield. These experiments are discussed in relation with the results of the numerical analysis of the electric field generated by the wire-plate configuration with and without shield.
A new era in brown adipose tissue biology: molecular control of brown fat development and energy homeostasis.
Brown adipose tissue (BAT) is specialized to dissipate chemical energy in the form of heat as a defense against cold and excessive feeding. Interest in the field of BAT biology has exploded in the past few years because of the therapeutic potential of BAT to counteract obesity and obesity-related diseases, including insulin resistance. Much progress has been made, particularly in the areas of BAT physiology in adult humans, developmental lineages of brown adipose cell fate, and hormonal control of BAT thermogenesis. As we enter into a new era of brown fat biology, the next challenge will be to develop strategies for activating BAT thermogenesis in adult humans to increase whole-body energy expenditure. This article reviews the recent major advances in this field and discusses emerging questions.
Cancer statistics, 2013.
Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths expected in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival based on incidence data from the National Cancer Institute, the Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries and mortality data from the National Center for Health Statistics. A total of 1,660,290 new cancer cases and 580,350 cancer deaths are projected to occur in the United States in 2013. During the most recent 5 years for which there are data (2005-2009), delay-adjusted cancer incidence rates declined slightly in men (by 0.6% per year) and were stable in women, while cancer death rates decreased by 1.8% per year in men and by 1.5% per year in women. Overall, cancer death rates have declined 20% from their peak in 1991 (215.1 per 100,000 population) to 2009 (173.1 per 100,000 population). Death rates continue to decline for all 4 major cancer sites (lung, colorectum, breast, and prostate). Over the past 10 years of data (2000-2009), the largest annual declines in death rates were for chronic myeloid leukemia (8.4%), cancers of the stomach (3.1%) and colorectum (3.0%), and non-Hodgkin lymphoma (3.0%). The reduction in overall cancer death rates since 1990 in men and 1991 in women translates to the avoidance of approximately 1.18 million deaths from cancer, with 152,900 of these deaths averted in 2009 alone. Further progress can be accelerated by applying existing cancer control knowledge across all segments of the population, with an emphasis on those groups in the lowest socioeconomic bracket and other underserved populations.
Space-Efficient Online Computation of Quantile Summaries
An ε-approximate quantile summary of a sequence of N elements is a data structure that can answer quantile queries about the sequence to within a precision of εN. We present a new online algorithm for computing ε-approximate quantile summaries of very large data sequences. The algorithm has a worst-case space requirement of O((1/ε) log(εN)). This improves upon the previous best result of O((1/ε) log²(εN)). Moreover, in contrast to earlier deterministic algorithms, our algorithm does not require a priori knowledge of the length of the input sequence. Finally, the actual space bounds obtained on experimental data are significantly better than the worst-case guarantees of our algorithm as well as the observed space requirements of earlier algorithms.
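To make the data structure concrete, here is a compact Python sketch of a Greenwald–Khanna style summary. It is a simplified illustration of the idea (tuples (value, g, Δ) with periodic compression), not the paper's exact algorithm; the insertion rule and compression schedule below are assumptions chosen for brevity.

```python
class GKSummary:
    """Simplified sketch of an epsilon-approximate quantile summary."""
    def __init__(self, eps):
        self.eps, self.n = eps, 0
        self.tuples = []  # (value, g, delta): g = rank gap, delta = rank uncertainty

    def insert(self, v):
        i = 0
        while i < len(self.tuples) and self.tuples[i][0] < v:
            i += 1
        # a new minimum/maximum has an exactly known rank; interior values do not
        delta = 0 if i in (0, len(self.tuples)) else int(2 * self.eps * self.n)
        self.tuples.insert(i, (v, 1, delta))
        self.n += 1
        if self.n % max(1, int(1 / (2 * self.eps))) == 0:
            self._compress()

    def _compress(self):
        cap = int(2 * self.eps * self.n)
        i = 0
        while i < len(self.tuples) - 1:  # merge adjacent tuples when safe
            v1, g1, d1 = self.tuples[i]
            v2, g2, d2 = self.tuples[i + 1]
            if g1 + g2 + d2 <= cap:
                self.tuples[i + 1] = (v2, g1 + g2, d2)
                del self.tuples[i]
            else:
                i += 1

    def query(self, q):
        """Return a value whose rank is within eps*n of q*n."""
        target, rmin = q * self.n, 0
        prev = self.tuples[0][0]
        for v, g, d in self.tuples:
            rmin += g
            if rmin + d > target + self.eps * self.n:
                return prev
            prev = v
        return prev
```

For example, after inserting 10,000 values with eps = 0.01, query(0.5) returns a value whose rank is within 100 of the true median, while the summary retains far fewer than 10,000 tuples.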
High-Frequency Modeling of an Adjustable Speed Drive
The use of high-frequency switching power devices in adjustable speed drives (ASDs) induces high voltage variations (dv/dt) that excite the parasitic elements of the power circuit, leading to conducted emissions at high frequencies. The advent of these devices has thus generated several unexpected problems, such as premature deterioration of motor ball bearings and large increases in electromagnetic interference (EMI) levels, caused by the circulation of high-frequency parasitic currents. This paper deals with high-frequency modeling of an ASD system, which is used to study the influence of the PWM inverter commutations on the level of the common-mode and differential-mode currents between the power converter and the motor. First, a shielded four-wire energy cable model is presented. Then, a high-frequency model of the PWM inverter and a model of the AC motor are proposed. Finally, the ASD system is simulated and the obtained results are compared to experimental measurements.
Genetic Algorithms and their application to image fusion
Image fusion is the process of combining images taken from different sources to obtain better situational awareness. In fusing source images, the objective is to combine the most relevant information from the source images into a composite image. There are many image fusion techniques based on signal-, pixel-, feature- and symbol-level fusion. Genetic Algorithms (GAs) are used for solving optimization problems, and they can be employed in image fusion wherever some kind of parameter optimization is required. In this paper, an existing and three novel image fusion algorithms which use GAs are presented. The experimental results show that GA-based image fusion algorithms outperform the existing image fusion algorithms. GA-based image fusion methods are time-consuming, so they cannot be adopted in real-time applications; however, they can be very helpful in static image fusion applications (e.g. concealed weapon detection, medical imaging, remote sensing, weather forecasting, etc.).
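As an illustration of how a GA can drive parameter optimization in fusion, the sketch below evolves a generic weight vector; the fitness function, selection scheme and operators are simplified assumptions, not the specific algorithms of the paper.

```python
import random

def ga_optimize(fitness, n_genes, pop_size=20, generations=50, mut_rate=0.1):
    """Minimal genetic algorithm sketch: evolves a vector in [0, 1]^n, e.g.
    per-region fusion weights whose fitness could be an image-quality metric
    such as entropy or mutual information (assumed, not the paper's variant)."""
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:         # mutation: perturb one gene
                child[random.randrange(n_genes)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: prefer weights close to 0.5 (stand-in for a real fusion metric)
best = ga_optimize(lambda w: -sum((x - 0.5) ** 2 for x in w), n_genes=8)
```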
Knowledge management in construction supply chain integration
Knowledge Management (KM) is becoming increasingly important for organisations across a wide spectrum of industry sectors, especially for the naturally fragmented construction industry. There has been a growing realisation that it is very important for each project participant to effectively capture, share and utilise strategic knowledge and project knowledge, as well as process knowledge, within the construction supply chain for better performance. This paper highlights the benefits of integrated construction supply chain management through effective KM. The paper reviews the general literature on construction supply chains and KM and presents some initiatives in the abovementioned area, followed by a full theory analysis and case study. The case study was conducted with a public sector client organisation in the UK. It explored their strategies for an integrated construction supply chain through KM, knowledge capture and knowledge sharing. It also studied the reuse of knowledge by their employees as well as by the other organisations they worked with to deliver construction projects in north-west England. The paper concludes that KM can effectively improve the integration of construction supply chains and thus improve overall production performance.
College students’ homework and academic achievement: The mediating role of self-regulatory beliefs
The influence of homework experiences on students’ academic grades was studied with 223 college students. Students’ self-efficacy for learning and perceived responsibility beliefs were included as mediating variables in this research. The students’ homework influenced their achievement indirectly via these two self-regulatory beliefs as well as directly. Self-efficacy for learning, although moderately correlated with perceptions of responsibility, predicted course grades more strongly than the latter variable. No gender differences were found for any of the variables, a finding that extends prior research based on high school girls. Educational implications concerning the importance of students’ homework completion and its relationship to college students’ development of self-regulation and positive self-efficacy beliefs are discussed from a social cognitive perspective.
Depth-First Search and Linear Graph Algorithms
The value of depth-first search or "backtracking" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and an algorithm for finding the biconnected components of an undirected graph are presented. The space and time requirements of both algorithms are bounded by k1·V + k2·E + k3 for some constants k1, k2, and k3, where V is the number of vertices and E is the number of edges of the graph being examined.
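The strongly-connected-components algorithm described here is the one now commonly known as Tarjan's algorithm. A standard Python rendering (a sketch, not the paper's original notation) runs in time linear in V + E:

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: strongly connected components of a directed graph.
    graph: dict mapping each vertex to an iterable of successors."""
    index_of = {}   # DFS discovery index of each vertex
    lowlink = {}    # smallest index reachable through the DFS subtree + one back edge
    on_stack = set()
    stack, sccs = [], []
    counter = [0]

    def strongconnect(v):
        index_of[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index_of:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index_of[w])
        if lowlink[v] == index_of[v]:   # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in list(graph):
        if v not in index_of:
            strongconnect(v)
    return sccs

# Example: two components, {a, b, c} and {d}
print(tarjan_scc({'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}))
```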
X-band isoflux pattern antenna for SAR data transmission
In this paper, a design scheme for an isoflux pattern antenna suitable for SAR data transmission is presented. An isoflux pattern is suitable for SAR data transmission from a LEO satellite requiring wide-angle coverage of the earth for longer visibility time. It is also advantageous for providing uniform power density over the earth and constant power transmission to the ground station during passage along the orbit. Based on the principle of generating an isoflux pattern, we design a basic isoflux pattern antenna which shows the required isoflux characteristics. For improved performance, the basic antenna is optimized with a genetic algorithm. The optimized isoflux pattern antenna is designed and implemented, and shows rapid skirt characteristics, a low side lobe, and a low back lobe level.
Towards Highly Accurate and Stable Face Alignment for High-Resolution Videos
In recent years, heatmap regression based models have shown their effectiveness in face alignment and pose estimation. However, Conventional Heatmap Regression (CHR) is neither accurate nor stable when dealing with high-resolution facial videos, since it finds the maximum activated location in heatmaps that are generated from rounded coordinates, and thus leads to quantization errors when scaling back to the original high-resolution space. In this paper, we propose Fractional Heatmap Regression (FHR) for high-resolution video-based face alignment. The proposed FHR can accurately estimate the fractional part according to the 2D Gaussian function by sampling three points in heatmaps. To further stabilize the landmarks among continuous video frames while maintaining precision, we propose a novel stabilization loss that contains two terms to address time delay and non-smoothness issues, respectively. Experiments on the 300W, 300VW and Talking Face datasets clearly demonstrate that the proposed method is more accurate and stable than state-of-the-art models. Introduction: Face alignment aims to estimate a set of facial landmarks given a face image or video sequence. It is a classic computer vision problem that has attracted many advanced machine learning algorithms (Fan et al. 2018; Bulat and Tzimiropoulos 2017; Trigeorgis et al. 2016; Peng et al. 2015, 2016; Kowalski, Naruniec, and Trzcinski 2017; Chen et al. 2017; Liu et al. 2017; Hu et al. 2018). Nowadays, with the rapid development of consumer hardware (e.g., mobile phones, digital cameras), High-Resolution (HR) video sequences can be easily collected. Estimating facial landmarks on such high-resolution facial data has tremendous applications, e.g., face makeup (Chen, Shen, and Jia 2017) and editing with special effects (Korshunova et al. 2017) in live broadcast videos. However, most existing face alignment methods work on faces with medium image resolutions (Chen et al. 2017; Bulat and Tzimiropoulos 2017; Peng et al. 2016; Liu et al. 2017). Therefore, developing face alignment algorithms for high-resolution videos is at the core of this paper. To this end, we propose an accurate and stable algorithm for high-resolution video-based face alignment, named Fractional Heatmap Regression (FHR).
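The key numerical idea, recovering the fractional peak location from a Gaussian-shaped heatmap, can be sketched per axis as follows. This illustrates the principle (a log-domain three-point fit, which is exact for a Gaussian), not the paper's exact estimator.

```python
import numpy as np

def subpixel_peak_1d(h, p):
    """Gaussian sub-pixel refinement along one axis (illustrative sketch):
    fit the log-values at the integer peak p and its two neighbours; for a
    Gaussian heatmap the recovered fractional offset is exact."""
    l, c, r = np.log(h[p - 1]), np.log(h[p]), np.log(h[p + 1])
    return p + 0.5 * (l - r) / (l - 2 * c + r)

# Heatmap sampled from a Gaussian centred at 5.3: the refinement recovers it
xs = np.arange(10)
h = np.exp(-(xs - 5.3) ** 2 / 2.0)
print(subpixel_peak_1d(h, int(np.argmax(h))))  # ~5.3, despite integer sampling
```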
Principles and Standards for School Mathematics : A Guide for Mathematicians
In April 2000 the National Council of Teachers of Mathematics (NCTM) released Principles and Standards for School Mathematics, the culmination of a multifaceted, three-year effort to update NCTM’s earlier standards documents and to set forth goals and recommendations for mathematics education in the prekindergarten-through-grade-twelve years. As the chair of the Writing Group, I had the privilege to interact with all aspects of the development and review of this document and with the committed groups of people, including the members of the Writing Group, who contributed immeasurably to this process. This article provides some background about NCTM and the standards, the process of development, efforts to gather input and feedback, and ways in which feedback from the mathematics community influenced the document. The article concludes with a section that provides some suggestions for mathematicians who are interested in using Principles and Standards.
LPS Increases 5-LO Expression on Monocytes via an Activation of Akt-Sp1/NF-κB Pathways
5-Lipoxygenase (5-LO) plays a pivotal role in the progression of atherosclerosis. Therefore, this study investigated the molecular mechanisms involved in 5-LO expression on monocytes induced by LPS. Stimulation of THP-1 monocytes with LPS (0~3 µg/ml) increased 5-LO promoter activity and 5-LO protein expression in a concentration-dependent manner. LPS-induced 5-LO expression was blocked by pharmacological inhibition of the Akt pathway, but not by inhibitors of MAPK pathways including the ERK, JNK, and p38 MAPK pathways. In line with these results, LPS increased the phosphorylation of Akt, suggesting a role for the Akt pathway in LPS-induced 5-LO expression. In a promoter activity assay conducted to identify transcription factors, both Sp1 and NF-κB were found to play central roles in 5-LO expression in LPS-treated monocytes. The LPS-enhanced activities of Sp1 and NF-κB were attenuated by an Akt inhibitor. Moreover, the LPS-enhanced phosphorylation of Akt was significantly attenuated in cells pretreated with an anti-TLR4 antibody. Taken together, 5-LO expression in LPS-stimulated monocytes is regulated at the transcriptional level via TLR4/Akt-mediated activations of Sp1 and NF-κB pathways in monocytes.
Adaptive identification of MR damper for vibration control
The magnetorheological (MR) damper is a promising semi-active device for vibration control of structures. This paper presents a simple mathematical model of the MR damper with a small number of model parameters, which can express the hysteretic behavior of the nonlinear dynamic friction mechanism of the MR fluid. An adaptive identification algorithm is also proposed in which the uncertain model parameters and the internal state variable can be estimated in an online manner. The proposed model has the advantage that, by using its inverse model, we can analytically determine the input voltage to the MR damper needed so that the desired damper force is applied to the structure in an adaptive manner. Experimental results validate the proposed adaptive modeling method and the model-based inverse control approach in vibration isolation of a three-story structure.
Tool wear optimization in turning operation by Taguchi method
A design-of-experiment-based approach is adopted to obtain an optimal setting of turning process parameters (cutting speed, feed and depth of cut) that may yield optimal tool wear (flank wear and crater wear) for titanium carbide coated carbide inserts while machining En24 steel (0.4% C), a difficult-to-machine material. The effects of the selected process parameters on tool wear and the subsequent optimal settings of the parameters have been determined using Taguchi’s parameter design approach. The results indicate that the selected process parameters significantly affect the tool wear characteristics of the TiC coated carbide tool. The predicted optimal values of flank wear width and crater wear depth of the coated carbide tool while machining En24 steel are 0.172 mm and 0.244 micron, respectively. The results are confirmed by further experiments.
Skeleton-Based Action Recognition with Spatial Reasoning and Temporal Stack Learning
Skeleton-based action recognition has made great progress recently, but many problems still remain unsolved. For example, the representations of skeleton sequences captured by most of the previous methods lack spatial structure information and detailed temporal dynamics features. In this paper, we propose a novel model with spatial reasoning and temporal stack learning (SR-TSL) for skeleton-based action recognition, which consists of a spatial reasoning network (SRN) and a temporal stack learning network (TSLN). The SRN can capture the high-level spatial structural information within each frame by a residual graph neural network, while the TSLN can model the detailed temporal dynamics of skeleton sequences by a composition of multiple skip-clip LSTMs. During training, we propose a clip-based incremental loss to optimize the model. We perform extensive experiments on the SYSU 3D Human-Object Interaction dataset and NTU RGB+D dataset and verify the effectiveness of each network of our model. The comparison results illustrate that our approach achieves much better results than the state-of-the-art methods.
Networked Participatory Scholarship: Emergent techno-cultural pressures toward open and digital scholarship in online networks
We examine the relationship between scholarly practice and participatory technologies and explore how such technologies invite and reflect the emergence of a new form of scholarship that we call Networked Participatory Scholarship: scholars’ participation in online social networks to share, reflect upon, critique, improve, validate, and otherwise develop their scholarship. We discuss emergent techno-cultural pressures that may influence higher education scholars to reconsider some of the foundational principles upon which scholarship has been established due to the limitations of a pre-digital world, and delineate how scholarship itself is changing with the emergence of certain tools, social behaviors, and cultural expectations associated with participatory technologies.
Executive Functions in 5- to 8-Year-Olds: Developmental Changes and Relationship to Academic Achievement
Pronounced improvements in executive functions (EF) during preschool years have been documented in cross-sectional studies. However, longitudinal evidence on EF development during the transition to school and predictive associations between early EF and later school achievement are still scarce. This study examined developmental changes in EF across three time-points, the predictive value of EF for mathematical, reading and spelling skills and explored children’s specific academic attainment as a function of early EF. Participants were 323 children following regular education; 160 children were enrolled in prekindergarten (younger cohort: 69 months) and 163 children in kindergarten (older cohort: 78.4 months) at the first assessment. Various tasks of EF were administered three times with an interval of one year each. Mathematical, reading and spelling skills were measured at the last assessment. Individual background characteristics such as vocabulary, non-verbal intelligence and socioeconomic status were included as control variables. In both cohorts, changes in EF were substantial; improvements in EF, however, were larger in preschoolers than school-aged children. EF assessed in preschool accounted for substantial variability in mathematical, reading and spelling achievement two years later, with low EF being especially associated with significant academic disadvantages in early school years. Given that EF continue to develop from preschool into primary school years and that starting with low EF is associated with lower school achievement, EF may be considered as a marker or risk for academic disabilities.
How schema and novelty augment memory formation
Information that is congruent with existing knowledge (a schema) is usually better remembered than less congruent information. Only recently, however, has the role of schemas in memory been studied from a systems neuroscience perspective. Moreover, incongruent (novel) information is also sometimes better remembered. Here, we review lesion and neuroimaging findings in animals and humans that relate to this apparent paradoxical relationship between schema and novelty. In addition, we sketch a framework relating key brain regions in medial temporal lobe (MTL) and medial prefrontal cortex (mPFC) during encoding, consolidation and retrieval of information as a function of its congruency with existing information represented in neocortex. An important aspect of this framework is the efficiency of learning enabled by congruency-dependent MTL-mPFC interactions.
A Novel Microstrip Meander-Line Antenna With A Very High Relative Permittivity Substrate For 315-MHz Band Applications
This paper presents the design, simulation, implementation and measurement of a novel microstrip meander patch antenna for sensor network applications. The dimensions of the microstrip chip antenna are 15 mm × 15 mm × 2 mm. The meander-type radiating patch is constructed on the upper layer of the 2-mm-high substrate with 0.05-mm-high metallic conductor lines. By using a substrate with a very high relative permittivity (εr = 90), the proposed antenna achieves 315 MHz band operation.
Predicting sexual aggression: the role of pornography in the context of general and specific risk factors.
The main focus of the present study was to examine the unique contribution (if any) of pornography consumption to men's sexually aggressive behavior. Even after controlling for the contributions of risk factors associated with general antisocial behavior and those used in Confluence Model research as specific predictors of sexual aggression, we found that high pornography consumption added significantly to the prediction of sexual aggression. Further analyses revealed that the predictive utility of pornography was due to its discriminative ability only among men classified (based on their other risk characteristics) at relatively high risk for sexual aggression. Other analyses indicated that the specific risk factors accounted for more variance in sexual aggression than the general risk factors and mediated the association between the general risk factors and sexual aggression. We illustrate the potential application of the findings for risk assessment using a classification tree.
The More Electric Aircraft: Technology and challenges.
The More Electric Aircraft concept offers many potential benefits in the design and efficiency of future large, manned aircraft. In this article, typical aircraft electrical power systems and associated loads are described as well as the exciting future challenges for the aerospace industry. The importance of power electronics as an enabling technology for this step change in aircraft design is considered, and examples of typical system designs are discussed.
Corpus-based and Knowledge-based Measures of Text Semantic Similarity
This paper presents a method for measuring the semantic similarity of texts using corpus-based and knowledge-based measures of similarity. Previous work on this problem has focused mainly on either large documents (e.g. text classification, information retrieval) or individual words (e.g. synonymy tests). Given that a large fraction of the information available today, on the Web and elsewhere, consists of short text snippets (e.g. abstracts of scientific documents, image captions, product descriptions), in this paper we focus on measuring the semantic similarity of short texts. Through experiments performed on a paraphrase data set, we show that the semantic similarity method outperforms methods based on simple lexical matching, resulting in up to a 13% error rate reduction with respect to the traditional vector-based similarity metric.
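For reference, the traditional vector-based baseline that the paper improves upon can be sketched as cosine similarity over word counts; a knowledge-based variant would replace exact word matches with word-to-word similarity scores from a resource such as WordNet. The sketch below keeps tokenization deliberately trivial.

```python
import math
from collections import Counter

def cosine_similarity(text1, text2):
    """Lexical (vector-based) similarity baseline: cosine of word-count vectors."""
    v1, v2 = Counter(text1.lower().split()), Counter(text2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

print(cosine_similarity("a cat sat on the mat", "the cat is on the mat"))
```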
Graph Partitioning with Acyclicity Constraints
Graphs are widely used to model execution dependencies in applications. In particular, the NP-complete problem of partitioning a graph under constraints receives enormous attention by researchers because of its applicability in multiprocessor scheduling. We identified the additional constraint of acyclic dependencies between blocks when mapping streaming applications to a heterogeneous embedded multiprocessor. Existing algorithms and heuristics do not address this requirement and deliver results that are not applicable for our use-case. In this work, we show that this more constrained version of the graph partitioning problem is NP-complete and present heuristics that achieve a close approximation of the optimal solution found by an exhaustive search for small problem instances and much better scalability for larger instances. In addition, we can show a positive impact on the schedule of a real imaging application that improves communication volume and execution time.
SALICON: Reducing the Semantic Gap in Saliency Prediction by Adapting Deep Neural Networks
Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations driven by strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on a Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. The two key components are fine-tuning the DNNs fully convolutionally with an objective function based on saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye-tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features for saliency prediction that surpass the state-of-the-art by a large margin. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.
Clustering Spatial Data in the Presence of Obstacles: a Density-Based Approach
Clustering spatial data is a well-known problem that has been extensively studied. Grouping similar data in large 2-dimensional spaces to find hidden patterns or meaningful sub-groups has many applications, such as satellite imagery, geographic information systems, medical image analysis, marketing, computer vision, etc. Although many methods have been proposed in the literature, very few have considered physical obstacles that may have significant consequences for the effectiveness of the clustering. Taking these constraints into account during the clustering process is costly, and the modeling of the constraints is paramount for good performance. In this paper, we investigate the problem of clustering in the presence of constraints such as physical obstacles and introduce a new approach to model these constraints using polygons. We also propose a strategy to prune the search space and reduce the number of polygons to test during clustering. We devise a density-based clustering algorithm, DBCluC, which takes advantage of our constraint modeling to efficiently cluster data objects while considering all physical constraints. The algorithm can detect clusters of arbitrary shape and is insensitive to noise, the input order, and the difficulty of constraints. Its average running complexity is O(N log N), where N is the number of data objects.
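A density-based algorithm with obstacle constraints can be sketched as DBSCAN with a visibility test inside the neighbourhood query. The sketch below is in the spirit of DBCluC but is not its implementation; the `blocked` predicate, standing in for the paper's polygon-based obstacle model, is an assumed input.

```python
import math

def dbscan_obstacles(points, eps, min_pts, blocked):
    """DBSCAN-style sketch: p and q are neighbours only if they are within eps
    AND the segment between them crosses no obstacle (`blocked` tests this)."""
    def neighbours(i):
        p = points[i]
        return [j for j, q in enumerate(points)
                if math.dist(p, q) <= eps and not blocked(p, q)]

    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # noise (may later be re-claimed as border)
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:         # core point: keep expanding
                queue.extend(nj)
        cluster += 1
    return labels
```

With `blocked=lambda p, q: False` (no obstacles) this reduces to plain DBSCAN.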
Analysing Wikipedia and Gold-Standard Corpora for NER Training
Named entity recognition (NER) for English typically involves one of three gold standards: MUC, CoNLL, or BBN, all created by costly manual annotation. Recent work has used Wikipedia to automatically create a massive corpus of named entity annotated text. We present the first comprehensive cross-corpus evaluation of NER. We identify the causes of poor cross-corpus performance and demonstrate ways of making the corpora more compatible. Using our process, we develop a Wikipedia corpus which outperforms gold-standard corpora on cross-corpus evaluation by up to 11%.
Calibrating multiple cameras with non-overlapping views using coded checkerboard targets
This paper presents an approach for combined intrinsic and extrinsic calibration of multi-camera rigs using coded targets. It is suited for conventional mono or stereo camera setups as well as for arrangements of multiple cameras with non-overlapping fields of view. We use a static scene consisting of multiple checkerboard targets and a sequence of images. This gives us a large amount of different observations and leads to precise calibration results. To solve the association problem of checkerboard corners over time and between different cameras, we use binary patterns that surround each checkerboard. A sparse nonlinear least squares solver is finally used to estimate the optimal parameter set. The parameter set contains intrinsic and extrinsic parameters, time series of the multi-camera rig pose, board poses and a description of board deformation.
Social Features of Online Networks: The Strength of Intermediary Ties in Online Social Media
An increasing fraction of today's social interactions occur using online social media as communication channels. Recent worldwide events, such as social movements in Spain or revolts in the Middle East, highlight their capacity to boost people's coordination. Online networks display in general a rich internal structure where users can choose among different types and intensity of interactions. Despite this, there are still open questions regarding the social value of online interactions. For example, the existence of users with millions of online friends sheds doubts on the relevance of these relations. In this work, we focus on Twitter, one of the most popular online social networks, and find that the network formed by the basic type of connections is organized in groups. The activity of the users conforms to the landscape determined by such groups. Furthermore, Twitter's distinction between different types of interactions allows us to establish a parallelism between online and offline social networks: personal interactions are more likely to occur on internal links to the groups (the weakness of strong ties); events transmitting new information go preferentially through links connecting different groups (the strength of weak ties) or even more through links connecting to users belonging to several groups that act as brokers (the strength of intermediary ties).
FFT-based Terrain Segmentation for Underwater Mapping
A method for segmenting three-dimensional scans of underwater unstructured terrains is presented. Individual terrain scans are represented as an elevation map and analysed using fast Fourier transform (FFT). The segmentation of the ground surface is performed in the frequency domain. The lower frequency components represent the slower varying undulations of the underlying ground whose segmentation is similar to de-noising / low pass filtering. The cut-off frequency, below which ground frequency components are selected, is automatically determined using peak detection. The user can specify a maximum admissible size of objects (relative to the extent of the scan) to drive the automatic detection of the cut-off frequency. The points above the estimated ground surface are clustered via standard proximity clustering to form object segments. The approach is evaluated using ground truth hand labelled data. It is also evaluated for registration error when the segments are fed as features to an alignment algorithm. In both sets of experiments, the approach is compared to three other segmentation techniques. The results show that the approach is applicable to a range of different terrains and is able to generate features useful for navigation.
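The frequency-domain ground extraction can be illustrated in a few lines of NumPy. The sketch below uses a fixed cut-off fraction and a fixed height threshold as stand-ins for the automatically detected cut-off frequency and the proximity clustering described above.

```python
import numpy as np

def segment_ground(elev, cutoff_frac=0.05, height_thresh=0.2):
    """Sketch: separate ground from objects in an elevation map via FFT low-pass.
    elev: 2-D array of heights; cutoff_frac and height_thresh are illustrative
    assumptions, not the paper's automatically determined values."""
    F = np.fft.fft2(elev)
    fy = np.fft.fftfreq(elev.shape[0])[:, None]
    fx = np.fft.fftfreq(elev.shape[1])[None, :]
    mask = np.sqrt(fx**2 + fy**2) <= cutoff_frac    # keep slow undulations only
    ground = np.real(np.fft.ifft2(F * mask))        # estimated ground surface
    objects = (elev - ground) > height_thresh       # points well above the ground
    return ground, objects
```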
Nightmare Abbey/Crotchet Castle
Two 19th-century novels satirize romanticism, political theories, and society through witty dialogue.
Design of Hamming Code Encoding and Decoding Circuit Using Transmission Gate Logic
In this paper, the Hamming code encoder and decoder circuits are designed using transmission gate logic.
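While the paper concerns the circuit-level (transmission gate) realization, the underlying Hamming(7,4) encode/decode logic such a circuit implements can be sketched in software as follows; the bit layout [p1, p2, d1, p3, d2, d3, d4] is the standard one.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a Hamming(7,4) codeword
    [p1, p2, d1, p3, d2, d3, d4] with even parity."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single-bit error and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # inject a single-bit error
assert hamming74_decode(word) == [1, 0, 1, 1]
```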
Wide-Area Protection and its Applications - A Bibliographical Survey
Modern power systems are continuously developing into large and interconnected ones. Power industry restructuring and reduced investment in transmission system expansion make power systems operate ever closer to their limits, and hence lead to a greater possibility of fault outages than before. Therefore, protection and control in power systems become more and more important as well as complicated. On the other hand, continuous technological development in communication and measurement accelerates the emergence and application of wide-area protection, a kind of advanced protection based on wide-area measurements. The blackouts that happened in North America as well as other countries in the past few years also provide more and more incentive for scientists and engineers in the power system community to devote themselves to the study of wide-area protection and control systems. In this paper, a comprehensive bibliographical survey is made of recent developments in this field, and the survey is conducted from seven relevant aspects.
Comparison of vildagliptin twice daily vs. sitagliptin once daily using continuous glucose monitoring (CGM): Crossover pilot study (J-VICTORIA study)
BACKGROUND No previous studies have compared the DPP-4 inhibitors vildagliptin and sitagliptin in terms of blood glucose levels using continuous glucose monitoring (CGM) and cardiovascular parameters. METHODS Twenty patients with type 2 diabetes mellitus were randomly allocated to groups who received vildagliptin then sitagliptin, or vice versa. Patients were hospitalized at 1 month after starting each drug, and CGM was used to determine: 1) mean (± standard deviation) 24-hour blood glucose level, 2) mean amplitude of glycemic excursions (MAGE), 3) fasting blood glucose level, 4) highest postprandial blood glucose level and time, 5) increase in blood glucose level after each meal, 6) area under the curve (AUC) for blood glucose level ≥180 mg/dL within 3 hours after each meal, and 7) area over the curve (AOC) for daily blood glucose level <70 mg/dL. Plasma glycosylated hemoglobin (HbA1c), glycoalbumin (GA), 1,5-anhydroglucitol (1,5AG), immunoreactive insulin (IRI), C-peptide immunoreactivity (CPR), brain natriuretic peptide (BNP), and plasminogen activator inhibitor-1 (PAI-1) levels, and urinary CPR levels, were measured. RESULTS The mean 24-hour blood glucose level was significantly lower in patients taking vildagliptin than sitagliptin (142.1 ± 35.5 vs. 153.2 ± 37.0 mg/dL; p = 0.012). In patients taking vildagliptin, MAGE was significantly lower (110.5 ± 33.5 vs. 129.4 ± 45.1 mg/dL; p = 0.040), the highest blood glucose level after supper was significantly lower (206.1 ± 40.2 vs. 223.2 ± 43.5 mg/dL; p = 0.015), the AUC (≥180 mg/dL) within 3 h was significantly lower after breakfast (484.3 vs. 897.9 mg/min/dL; p = 0.025), and urinary CPR level was significantly higher (97.0 ± 41.6 vs. 85.2 ± 39.9 μg/day; p = 0.008) than in patients taking sitagliptin. There were no significant differences in plasma HbA1c, GA, 1,5AG, IRI, CPR, BNP, or PAI-1 levels between patients taking vildagliptin and sitagliptin. CONCLUSIONS CGM showed that mean 24-h blood glucose, MAGE, highest blood glucose level after supper, and hyperglycemia after breakfast were significantly lower in patients with type 2 diabetes mellitus taking vildagliptin than those taking sitagliptin. There were no significant differences in BNP and PAI-1 levels between patients taking vildagliptin and sitagliptin. TRIAL REGISTRATION UMIN000007687.
Crossing Nets: Combining GANs and VAEs with a Shared Latent Space for Hand Pose Estimation
State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose modelling the statistical relationship of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth map. To prevent over-fitting and to better exploit unlabeled depth maps, the generator and discriminator are trained jointly. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized samples and unlabeled depth maps. The proposed discriminator network architecture is highly efficient and runs at 90 fps on the CPU with accuracies comparable to or better than the state-of-the-art on 3 publicly available benchmarks.
Open-Source Tools for Morphology, Lemmatization, POS Tagging and Named Entity Recognition
We present two recently released open-source taggers: NameTag is free software for named entity recognition (NER) which achieves state-of-the-art performance on Czech; MorphoDiTa (Morphological Dictionary and Tagger) performs morphological analysis (with lemmatization), morphological generation, tagging and tokenization with state-of-the-art results for Czech and a throughput of around 10-200K words per second. The taggers can be trained for any language for which annotated data exist, but they are specifically designed to be efficient for inflective languages. Both tools are free software under the LGPL license and are distributed along with trained linguistic models which are free for non-commercial use under the CC BY-NC-SA license. The releases include standalone tools, C++ libraries with Java, Python and Perl bindings, and web services.
Home-based, early intervention with mechatronic toys for preterm infants at risk of neurodevelopmental disorders (CARETOY): a RCT protocol
BACKGROUND Preterm infants are at risk for neurodevelopmental disorders, including motor, cognitive or behavioural problems, which may potentially be modified by early intervention. The EU CareToy Project Consortium (http://www.caretoy.eu) has developed a new modular system for intensive, individualized, home-based and family-centred early intervention, managed remotely by rehabilitation staff. A randomised controlled trial (RCT) has been designed to evaluate the efficacy of CareToy training in a first sample of low-risk preterm infants. METHODS/DESIGN The trial, randomised, multi-center, evaluator-blinded, parallel group controlled, is designed according to CONSORT Statement. Eligible subjects are infants born preterm without major complications, aged 3-9 months of corrected age with specific gross-motor abilities defined by Ages & Stages Questionnaire scores. Recruited infants, whose parents will sign a written informed consent for participation, will be randomized in CareToy training and control groups at baseline (T0). CareToy group will perform four weeks of personalized activities with the CareToy system, customized by the rehabilitation staff. The control group will continue standard care. Infant Motor Profile Scale is the primary outcome measure and a total sample size of 40 infants has been established. Bayley-Cognitive subscale, Alberta Infants Motor Scale and Teller Acuity Cards are secondary outcome measures. All measurements will be performed at T0 and at the end of training/control period (T1). For ethical reasons, after this first phase infants enrolled in the control group will perform the CareToy training, while the training group will continue standard care. At the end of open phase (T2) all infants will be assessed as at T1. Further assessment will be performed at 18 months corrected age (T3) to evaluate the long-term effects on neurodevelopmental outcome. Caregivers and rehabilitation staff will not be blinded whereas all the clinical assessments will be performed, videotaped and scored by blind assessors. The trial is ongoing and it is expected to be completed by April 2015. DISCUSSION This paper describes RCT methodology to evaluate CareToy as a new tool for early intervention in preterm infants, first contribution to test this new type of system. It presents background, hypotheses, outcome measures and trial methodology. TRIAL REGISTRATION ClinicalTrials.gov: NCT01990183. EU grant ICT-2011.5.1-287932.
Predictors of resistance to preoperative trastuzumab and vinorelbine for HER2-positive early breast cancer.
PURPOSE To assess pathologic complete response (pCR), clinical response, feasibility, safety, and potential predictors of response to preoperative trastuzumab plus vinorelbine in patients with operable, human epidermal growth factor receptor 2 (HER2)-positive breast cancer. EXPERIMENTAL DESIGN Forty-eight patients received preoperative trastuzumab and vinorelbine weekly for 12 weeks. Single and multigene biomarker studies were done in an attempt to identify predictors of response. RESULTS Eight of 40 (20%) patients achieved pCR (95% confidence interval, 9-36%). Of 9 additional patients recruited for protocol-defined toxicity analysis, 8 were evaluable; 42 of 48 (88%) patients had clinical response (16 patients, clinical complete response; 26 patients, clinical partial response). T(1) tumors more frequently exhibited clinical complete response (P = 0.05) and showed a trend to exhibit pCR (P = 0.07). Five (13%) patients experienced grade 1 cardiac dysfunction during preoperative treatment. Neither HER2 nor estrogen receptor status changed significantly after exposure to trastuzumab and vinorelbine. RNA profiling identified three top-level clusters by unsupervised analysis. Tumors with extremes of response [pCR (n = 3) versus nonresponse (n = 3)] fell into separate groups by hierarchical clustering. No predictive genes were identified in pCR tumors. Nonresponding tumors were more likely to be T(4) stage (P = 0.02) and express basal markers (P < 0.00001), growth factors, and growth factor receptors. Insulin-like growth factor-I receptor membrane expression was associated with a lower response rate (50% versus 97%; P = 0.001). CONCLUSIONS Preoperative trastuzumab plus vinorelbine is active and well tolerated in patients with HER2-positive, operable, stage II/III breast cancer. HER2-overexpressing tumors with a basal-like phenotype, or with expression of insulin-like growth factor-I receptor and other proteins involved in growth factor pathways, are more likely to be resistant to this regimen.
Managerial Actions, Stock Returns, and Earnings: The Case of Business-To-Business Internet Firms
In this study we investigate the role played by managerial actions in explaining stock market returns and accounting earnings of 57 Internet firms engaged in Business-to-Business (B2B) e-commerce. We classify 3,166 managerial actions undertaken by our sample firms between the firm's IPO date and September 30, 2000 into ten key action categories: (1) acquisition of major customers, (2) introduction of new products and services, (3) promotional and marketing actions, (4) expansion into international markets, (5) actions taken to address the concerns of stakeholders such as employees and the community at large, (6) announcements of technology, marketing, and distribution alliances, (7) completion of acquisitions, (8) management team building actions, (9) announcement of recognition and awards bestowed upon the firm, and (10) organizational changes. We undertake an event study over a three-day window surrounding the announcement of each action. Our event study results indicate that announcements of alliances (technology, marketing, and distribution), acquisition of new customers, and promotions are associated with positive abnormal returns. Next, using the factor analysis technique we group the counts of managerial actions taken by each firm over its post-IPO life into three broad managerial initiatives-market penetration, organization building, and legitimacy building. These three initiatives explain a substantial portion of the cross-sectional variation in the firms' post-IPO life stock market returns beyond that explained by accounting earnings. However, accounting earnings do not explain variation in post-IPO stock returns. Thus, investors appear to supplement relatively meager accounting information with data about managerial actions in setting stock prices of B2B Internet firms.
A Classifier-Based Parser with Linear Run-Time Complexity
We present a classifier-based parser that produces constituent trees in linear time. The parser uses a basic bottom-up shift-reduce algorithm, but employs a classifier to determine parser actions instead of a grammar. This can be seen as an extension of the deterministic dependency parser of Nivre and Scholz (2004) to full constituent parsing. We show that, with an appropriate feature set used in classification, a very simple one-path greedy parser can perform at the same level of accuracy as more complex parsers. We evaluate our parser on section 23 of the WSJ portion of the Penn Treebank, and obtain precision and recall of 87.54% and 87.61%, respectively.
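The control loop of such a parser is simple enough to sketch: the grammar is replaced by a classifier that maps the current stack/buffer configuration to a shift or reduce action. The `classify` function below is a hypothetical stand-in for the trained classifier, not the paper's feature-based model.

```python
def shift_reduce_parse(tokens, classify):
    """Skeleton of a classifier-driven bottom-up shift-reduce parser.
    classify(stack, buffer) returns ('shift',) or ('reduce', label, arity)."""
    stack, buffer = [], list(tokens)
    while buffer or len(stack) > 1:
        action = classify(stack, buffer)
        if action[0] == 'shift':
            stack.append(buffer.pop(0))
        else:                              # reduce: build a constituent node
            _, label, arity = action
            children = stack[-arity:]
            del stack[-arity:]
            stack.append((label, children))
    return stack[0]

# Toy classifier: shift everything, then reduce pairs into binary constituents
def toy(stack, buffer):
    return ('shift',) if buffer else ('reduce', 'X', 2)

print(shift_reduce_parse(['the', 'cat', 'sat'], toy))
```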
Data glove embedded with 9-axis IMU and force sensing sensors for evaluation of hand function
A hand injury can greatly affect a person's daily life. Physicians must evaluate the state of recovery of a patient's injured hand. However, current manual evaluations of hand functions are imprecise and inconvenient. In this paper, a data glove embedded with 9-axis inertial sensors and force sensitive resistors is proposed. The proposed data glove system enables hand movement to be tracked in real-time. In addition, the system can be used to obtain useful parameters for physicians, is an efficient tool for evaluating the hand function of patients, and can improve the quality of hand rehabilitation.
Leucocyte classification for leukaemia detection using image processing techniques
INTRODUCTION The counting and classification of blood cells allow for the evaluation and diagnosis of a vast number of diseases. The analysis of white blood cells (WBCs) allows for the detection of acute lymphoblastic leukaemia (ALL), a blood cancer that can be fatal if left untreated. Currently, the morphological analysis of blood cells is performed manually by skilled operators. However, this method has numerous drawbacks, such as slow analysis, non-standard accuracy, and dependences on the operator's skill. Few examples of automated systems that can analyse and classify blood cells have been reported in the literature, and most of these systems are only partially developed. This paper presents a complete and fully automated method for WBC identification and classification using microscopic images. METHODS In contrast to other approaches that identify the nuclei first, which are more prominent than other components, the proposed approach isolates the whole leucocyte and then separates the nucleus and cytoplasm. This approach is necessary to analyse each cell component in detail. From each cell component, different features, such as shape, colour and texture, are extracted using a new approach for background pixel removal. This feature set was used to train different classification models in order to determine which one is most suitable for the detection of leukaemia. RESULTS Using our method, 245 of 267 total leucocytes were properly identified (92% accuracy) from 33 images taken with the same camera and under the same lighting conditions. Performing this evaluation using different classification models allowed us to establish that the support vector machine with a Gaussian radial basis kernel is the most suitable model for the identification of ALL, with an accuracy of 93% and a sensitivity of 98%. Furthermore, we evaluated the goodness of our new feature set, which displayed better performance with each evaluated classification model. CONCLUSIONS The proposed method permits the analysis of blood cells automatically via image processing techniques, and it represents a medical tool to avoid the numerous drawbacks associated with manual observation. This process could also be used for counting, as it provides excellent performance and allows for early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques.
Isotope approach to assess hydrologic connections during Marcellus Shale drilling.
Water and gas samples were collected from (1) nine shallow groundwater aquifers overlying Marcellus Shale in north-central West Virginia before active shale gas drilling, (2) wells producing gas from Upper Devonian sands and Middle Devonian Marcellus Shale in southwestern Pennsylvania, (3) coal-mine water discharges in southwestern Pennsylvania, and (4) streams in southwestern Pennsylvania and north-central West Virginia. Our preliminary results demonstrate that the oxygen and hydrogen isotope composition of water, carbon isotope composition of dissolved inorganic carbon, and carbon and hydrogen isotope compositions of methane in Upper Devonian sands and Marcellus Shale are very different compared with shallow groundwater aquifers, coal-mine waters, and stream waters of the region. Therefore, spatiotemporal stable isotope monitoring of the different sources of water before, during, and after hydraulic fracturing can be used to identify migrations of fluids and gas from deep formations that are coincident with shale gas drilling.
Relationships of methacholine and adenosine monophosphate responsiveness with serum vascular endothelial growth factor in children with asthma.
BACKGROUND Airway hyperresponsiveness, which is a characteristic feature of asthma, is usually measured by means of bronchial challenge with direct or indirect stimuli. Vascular endothelial growth factor (VEGF) increases vascular permeability and angiogenesis, leads to mucosal edema, narrows the airway diameter, and reduces airway flow. OBJECTIVE To examine the relationships between serum VEGF level and airway responsiveness to methacholine and adenosine monophosphate (AMP) in children with asthma. METHODS Peripheral blood eosinophil counts, serum eosinophil cationic protein (ECP) concentrations, and serum VEGF concentrations were measured in 31 asthmatic children and 26 control subjects. Methacholine and AMP bronchial challenges were performed on children with asthma. RESULTS Children with asthma had a significantly higher mean (SD) level of VEGF than controls (361.2 [212.0] vs 102.7 [50.0] pg/mL; P < .001). Blood eosinophil counts and serum ECP levels significantly correlated inversely with AMP provocation concentration that caused a decrease in forced expiratory volume in 1 second of 20% (PC20) (r = -0.474, P =.01; r = -0.442, P =.03, respectively), but not with methacholine PC20 (r = -0.228, P = .26; r = -0.338, P =.10, respectively). Serum VEGF levels significantly correlated with airway responsiveness to AMP (r = -0.462; P = .009) but not to methacholine (r = -0.243; P = .19). CONCLUSIONS Serum VEGF levels were increased in children with asthma and were related to airway responsiveness to AMP but not to methacholine. Increased VEGF levels in asthmatic children may result in increased airway responsiveness by mechanisms related to airway inflammation or increased permeability of airway vasculature.
A Dataset for Document Grounded Conversations
This paper introduces a document grounded dataset for conversations. We define “Document Grounded Conversations” as conversations that are about the contents of a specified document. In this dataset the specified documents were Wikipedia articles about popular movies. The dataset contains 4112 conversations with an average of 21.43 turns per conversation. This positions this dataset to not only provide a relevant chat history while generating responses but also provide a source of information that the models could use. We describe two neural architectures that provide benchmark performance on the task of generating the next response. We also evaluate our models for engagement and fluency, and find that the information from the document helps in generating more engaging and fluent responses.
An Automatic Controller Extractor for HDL Descriptions at the RTL
In modern circuit designs, verification has become the major bottleneck in the entire design process [1]. To cope with the exponential state-space growth, researchers have proposed techniques [2,3] to reduce this state space in functional verification at the register transfer level (RTL). Because most design errors are related to the design's control part, one possible solution is to separate the data paths from the controllers and verify the control part only. However, in the proposed techniques, the capability of extracting controllers relies on specific control-register labels, which users must assign manually. In large designs, labeling the hundreds of control registers is inconvenient. More importantly, if the original designers are not available (for example, when using vendor-provided intellectual property), assigning labels becomes very difficult. In modern designs, almost all controllers consist of finite-state machines (FSMs); therefore, by locating the FSMs, we can find the possible locations of controllers. Some vendors claim their tools [4] can automatically extract FSMs from the original hardware description language (HDL) code, but most of these tools depend on a specific coding style or on user intervention. The literature offers some approaches for translating HDL code into FSMs by compiler techniques [5,6]. A compiler translates predefined language constructs into other forms, so compiler-based approaches must also limit users' coding styles, and it can be difficult to deal with real designs from various designers with varying coding styles. To overcome the problems of existing approaches, we propose a novel method for extracting FSMs from HDL code written at the RTL by recognizing the general patterns of FSMs in the process-module (PM) graph. These general patterns are derived from the relationship between an FSM's current states and its next states, not from language constructs; therefore, the writing style of the HDL code is almost entirely unrestricted, and no hints or comments in the source code are needed. We already reported on the preliminary stage of this work [7]. The reported experimental results on several real designs from different designers with various coding styles have shown the effectiveness and efficiency of our algorithm. Because we use the general FSM patterns in the recognition process, some special designs, such as the program counters and the accumulators used in arithmetic logic, may be identified as general FSMs, although they are in fact part of the data path rather than true controllers.
Risk Management in Software Development using Artificial Neural Networks
The IT industry is one of the biggest industries in the world, with many software projects being developed that vary in size, cost, complexity, etc. During development, many risks of different types arise, such as lack of staff experience, new technologies, budget constraints, etc. These risks play a huge role in the success or failure of a project. Most of the available risk management solutions are too costly and time-consuming, so there is a need for an efficient risk management technique. To assist the project manager in risk management, we have developed an application which identifies the risks involved during software development and predicts the success or failure of the project using Artificial Neural Networks. The prediction is done using historical data, taking the important and common risk factors into account. After risk identification, the probability of success or failure is determined and suggestions for risk mitigation are provided for the identified risks.
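A minimal sketch of the prediction component, assuming scikit-learn: a small feed-forward network is trained on historical projects described by risk-factor scores. The feature encoding and data values below are illustrative assumptions, not the application's actual dataset or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training set: each row scores common risk factors on a 0-1 scale
# (e.g. staff inexperience, technology novelty, budget pressure, schedule pressure);
# labels: 1 = project succeeded, 0 = project failed. Values are illustrative only.
X = np.array([[0.2, 0.1, 0.3, 0.2],
              [0.8, 0.9, 0.7, 0.6],
              [0.3, 0.2, 0.4, 0.3],
              [0.9, 0.7, 0.8, 0.9]])
y = np.array([1, 0, 1, 0])

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
# Columns of predict_proba are [P(failure), P(success)] for a new project
print(model.predict_proba([[0.6, 0.5, 0.7, 0.4]]))
```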
One-shot deep neural network for pose and illumination normalization face recognition
Pose and illumination are considered the two main challenges that face recognition systems encounter. In this paper, we consider the face recognition problem across pose and illumination variations, given a small amount of training data and a single sample per gallery subject (one-shot classification). We combine the strength of 3D models in generating multiple views and various illumination samples with the ability of deep learning to learn non-linear transformations, which is well suited to pose and illumination normalization, by using a multi-task deep neural network. Through this pose and illumination augmentation strategy, we train a pose and illumination normalization neural network with much less training data than other methods require. Experiments on the MultiPIE database achieve competitive recognition results, demonstrating the effectiveness of the proposed method.
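A minimal PyTorch sketch of the multi-task idea follows; the architecture, sizes, and loss weighting are assumptions for illustration, not the paper's network. A shared encoder feeds both a decoder that outputs a pose- and illumination-normalized face and an auxiliary identity head, and the two losses are summed.

```python
# Assumed architecture for illustration only: shared encoder, a
# normalization decoder (frontal, evenly lit face), and an identity head.
import torch
import torch.nn as nn

class NormalizationNet(nn.Module):
    def __init__(self, n_identities, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32, dim), nn.ReLU())
        self.decoder = nn.Linear(dim, 32 * 32)        # normalized face
        self.id_head = nn.Linear(dim, n_identities)   # auxiliary identity task

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.id_head(z)

net = NormalizationNet(n_identities=10)
x = torch.rand(4, 1, 32, 32)             # augmented pose/illumination variants
target = torch.rand(4, 32 * 32)          # corresponding frontal, neutral faces
labels = torch.randint(0, 10, (4,))
recon, logits = net(x)
loss = nn.functional.mse_loss(recon, target) \
     + nn.functional.cross_entropy(logits, labels)
loss.backward()
```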
Façade: An Experiment in Building a Fully-Realized Interactive Drama
Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.
FLAG-IDA in the treatment of refractory/relapsed acute myeloid leukemia: single-center experience
We evaluated the efficacy and toxicity profiles of the combination of fludarabine, high-dose cytosine arabinoside (AraC), idarubicin, and granulocyte colony-stimulating factor (G-CSF) in refractory/relapsed acute myeloblastic leukemia (AML) patients. Between October 1998 and February 2002, 46 AML patients were treated with FLAG-IDA (fludarabine 30 mg/m2, AraC 2 g/m2 for 5 days, idarubicin 10 mg/m2 for 3 days, and G-CSF 5 µg/kg from day +6 until neutrophil recovery). Thirty patients were in relapse after conventional chemotherapy including cytarabine, etoposide, and daunorubicin or mitoxantrone according to the GIMEMA protocols. Four were in relapse after autologous peripheral stem cell transplantation and two after allogeneic bone marrow transplantation. Ten patients had refractory disease (after 10 days of standard doses of cytarabine, 3 days of mitoxantrone or daunorubicin, and 5 days of etoposide). Recovery of neutrophils and platelets required a median of 19 and 22 days, respectively, from the start of therapy. Complete remission (CR) was obtained in 24 of 46 patients (52.1%), and 3 of 46 (6.6%) died during reinduction therapy: 2 of cerebral hemorrhage and 1 of fungemia (Candida tropicalis). Fever >38.5°C was observed in 40 of 46 patients (86.9%); 27 had fever of unknown origin (FUO) and 13 had documented infections. Mucositis developed in 31 of 46 (67.3%), and 14 of 46 (30.4%) had grade 2 WHO transient liver toxicity. After achieving CR, 11 patients received allogeneic stem cell transplantation, 4 received autologous stem cell transplantation, 4 were judged unable to receive any further therapy, and 5 refused further therapy. Ten patients are at present in continuous CR after a median follow-up of 13 months (range: 4–24). In our experience, FLAG-IDA is a well-tolerated and effective regimen in relapsed/refractory AML. Its toxicity is acceptable, enabling most patients to receive further treatment, including transplantation procedures.
Self-Attention Networks for Connectionist Temporal Classification in Speech Recognition
The success of self-attention in NLP has led to recent applications in end-to-end encoder-decoder architectures for speech recognition. Separately, connectionist temporal classification (CTC) has matured as an alignment-free, non-autoregressive approach to sequence transduction, either by itself or in various multitask and decoding frameworks. We propose SAN-CTC, a deep, fully self-attentional network for CTC, and show it is tractable and competitive for end-to-end speech recognition. SAN-CTC trains quickly and outperforms existing CTC models and most encoder-decoder models, with character error rates (CERs) of 4.7% in 1 day on WSJ eval92 and 2.8% in 1 week on LibriSpeech test-clean, with a fixed architecture and one GPU. Similar improvements hold for WERs after LM decoding. We motivate the architecture for speech, evaluate position and downsampling approaches, and explore how label alphabets (character, phoneme, subword) affect attention heads and performance.
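The following is a hedged sketch of the SAN-CTC idea, pairing a self-attention encoder with CTC loss; the layer counts, dimensions, and alphabet size are illustrative, not the paper's configuration.

```python
# Illustrative pairing of a self-attention encoder with CTC loss; all
# hyperparameters here are assumptions, not the paper's setup.
import torch
import torch.nn as nn

vocab, d_model = 30, 64                    # e.g. characters + CTC blank at 0
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                   batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
proj = nn.Linear(d_model, vocab)
ctc = nn.CTCLoss(blank=0)

feats = torch.rand(8, 100, d_model)        # (batch, frames, features)
log_probs = proj(encoder(feats)).log_softmax(-1).transpose(0, 1)  # (T, N, C)
targets = torch.randint(1, vocab, (8, 20))
loss = ctc(log_probs, targets,
           input_lengths=torch.full((8,), 100),
           target_lengths=torch.full((8,), 20))
loss.backward()
```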
After the origin of life
As to the primary morphogenesis which occurred after the origin of life, two conditions are considered: (a) it must be a non-specific pattern; (b) it must be one of the simplest patterns. These conditions are satisfied by morphogenetic polarity. Actually, the simplest polar pattern is divided into two classes: the first is represented by a regional protrusion of the surface of a sphere (Fig. 1B), and the second by a regional inversion (Fig. 1C). This means that the first morphogenesis might have taken place in two directions: regional protrusion and regional inversion of the globular organism. Both might have been conditioned by the appearance of protoplasmic polarity.
Gender identification using a general audio classifier
In the context of content-based multimedia indexing, gender identification using the speech signal is an important task. Existing techniques depend on the quality of the speech signal, making them unsuitable for video indexing problems. In this paper we introduce a novel gender identification approach based on a general audio classifier. The audio classifier models the audio signal by first-order statistics of the spectrum in 1 s windows and uses a set of neural networks as classifiers. The presented technique is robust to adverse audio compression and is language independent. We show how practical considerations about speech in audio-visual data, such as the continuity of speech, can further improve the classification results, which reach 92%.
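A rough sketch of this pipeline, with assumed feature details: each 1 s window is summarized by first-order statistics of its magnitude spectrum (here, band means), and a small neural network classifies the windows. The signals below are random stand-ins for real speech.

```python
# Sketch of the described pipeline; the exact features are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(signal, sr=16000, n_bands=16):
    """Mean magnitude spectrum per band for each 1-second window."""
    feats = []
    for start in range(0, len(signal) - sr + 1, sr):
        spec = np.abs(np.fft.rfft(signal[start:start + sr]))
        bands = np.array_split(spec, n_bands)
        feats.append([b.mean() for b in bands])
    return np.array(feats)

rng = np.random.default_rng(0)
male = window_features(rng.standard_normal(16000 * 5))
female = window_features(rng.standard_normal(16000 * 5) * 1.5)  # stand-in data
X = np.vstack([male, female])
y = np.array([0] * len(male) + [1] * len(female))
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(window_features(rng.standard_normal(16000 * 2))))
```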
Normalized cuts in 3-D for spinal MRI segmentation
Segmentation of medical images has become an indispensable process for performing quantitative analysis of images of human organs and their functions. Normalized Cuts (NCut) is a spectral graph-theoretic method that readily admits combinations of different features for image segmentation. The computational demand imposed by NCut has been successfully alleviated with the Nyström approximation method for applications other than medical imaging. In this paper we discuss the application of NCut with the Nyström approximation method to segment vertebral bodies from sagittal T1-weighted magnetic resonance images of the spine. The magnetic resonance images were preprocessed with the anisotropic diffusion algorithm, and three-dimensional local histograms of brightness were chosen as the segmentation feature. Results of the segmentation, as well as limitations and challenges in this area, are presented.
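The numpy sketch below illustrates only the Nyström idea behind the method (the NCut normalization and the MRI-specific preprocessing are omitted): leading eigenvectors of a large pixel-affinity matrix are approximated from a landmark sample, and a two-way cut is obtained by thresholding the second eigenvector as a simple stand-in for k-means. The feature vectors stand in for the 3-D local brightness histograms.

```python
# Compact Nystrom illustration, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (300, 3)),      # stand-in for 3-D local
                   rng.normal(4, 1, (300, 3))])     # brightness histograms
n, m = len(feats), 40                               # m landmark pixels
idx = rng.choice(n, m, replace=False)

def affinity(a, b, sigma=2.0):
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma ** 2))

C = affinity(feats, feats[idx])                     # n x m cross-affinities
A = C[idx]                                          # m x m landmark block
lam, U = np.linalg.eigh(A)
top = lam.argsort()[::-1][:2]
embed = C @ U[:, top] / lam[top]                    # Nystrom eigenvectors

# Two-way cut: threshold the second approximate eigenvector.
labels = (embed[:, 1] > np.median(embed[:, 1])).astype(int)
print("cluster sizes:", np.bincount(labels))
```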
Deep learning based image description generation
Describing the contents of images is a challenging task for machines. It requires not only accurate recognition of objects and humans, but also of their attributes and relationships, as well as scene information. It is even more challenging to extend this process to identify falls and hazardous objects to aid elderly people or users in need of care. This research makes initial attempts to address the above challenges and produce multi-sentence natural language descriptions of image contents. It employs a local region based approach to extract regional image details and combines multiple techniques, including deep learning and attribute learning, through the use of machine-learned features to create high-level labels that can generate detailed descriptions of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection, and sentence generation. We have further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which pose great challenges to many existing methods. Our system is evaluated on subsets of Flickr8k and Pascal VOC 2012, achieving an average BLEU score of 46, and outperforms related research by a margin of 10 BLEU points when evaluated on a small dataset of images containing falls and hazardous objects. It also performs well when evaluated on a subset of the IAPR TC-12 dataset.
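As a toy illustration of the sentence generation stage only (the scene, object, attribute, and relationship labels would come from the learned models, and all names here are stand-ins), region-level labels can be combined into a multi-sentence description with simple templates:

```python
# Toy template-based sentence generation from hypothetical model outputs.
def describe(scene, detections, relations):
    """detections: list of (object, [attributes]); relations: (a, rel, b)."""
    sentences = [f"This picture shows a {scene} scene."]
    for obj, attrs in detections:
        sentences.append(f"There is a {' '.join(attrs + [obj])}.")
    for a, rel, b in relations:
        sentences.append(f"The {a} is {rel} the {b}.")
    return " ".join(sentences)

print(describe(
    scene="indoor",
    detections=[("person", ["elderly"]), ("rug", ["loose"])],
    relations=[("person", "lying on", "rug")],   # possible fall indicator
))
```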
A Trajectory-Based Ball Tracking Framework with Visual Enrichment for Broadcast Baseball Videos
Pitching content plays a key role in victory or defeat in a baseball game. Utilizing the physical characteristics of ball motion, this paper presents a trajectory-based framework for automatic ball tracking and pitch evaluation in broadcast baseball videos. Ball detection and tracking in broadcast baseball videos is very challenging because noise in video frames may produce many ball-like objects, the ball is small, and it may deform due to its high-speed movement. To overcome these challenges, we first define a set of filters that prune most non-ball objects while retaining the ball, even if it is deformed. For ball position prediction and trajectory extraction, we analyze the 2D distribution of ball candidates and exploit the characteristic that the ball trajectory forms a near-parabolic curve in video frames. Most non-qualified trajectories are pruned, which greatly improves computational efficiency. Missed balls can also be recovered along the trajectory by applying position prediction. Ball tracking experiments on test sequences from JPB, MLB, and CPBL games captured from different TV channels show promising results. The framework extracts the ball trajectory, superimposes it on the video, and provides visual enrichment in near real time before the next pitch, without requiring specific cameras or equipment to be set up in the stadiums. It can also be utilized in strategy analysis and statistics for player training.
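A small sketch of the trajectory test, under assumed parameters: a candidate track is kept only if its image positions fit a near-parabolic curve over frame time, and a missed detection is recovered from the fitted curve.

```python
# Illustrative parabolic-trajectory check; the threshold is an assumption.
import numpy as np

def fit_trajectory(frames, ys, max_residual=2.0):
    """Fit y = a*t^2 + b*t + c to ball candidates; return the fit or None."""
    coeffs = np.polyfit(frames, ys, deg=2)
    residual = np.abs(np.polyval(coeffs, frames) - ys).max()
    return coeffs if residual <= max_residual else None

frames = np.array([0, 1, 2, 4, 5])           # frame 3 missed
ys = 0.5 * frames**2 - 3 * frames + 40       # noiseless candidate heights
coeffs = fit_trajectory(frames, ys)
if coeffs is not None:
    print("recovered y at frame 3:", np.polyval(coeffs, 3))
```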