Coronary artery calcium predicts cardiovascular events in participants with a low lifetime risk of cardiovascular disease: The Multi-Ethnic Study of Atherosclerosis (MESA).
|
AIMS
Patients with a low lifetime risk of coronary heart disease (CHD) are not completely free of events over 10 years. We evaluated predictors for CHD among "low lifetime risk" participants in the population-based Multi-Ethnic Study of Atherosclerosis (MESA).
METHODS
MESA enrolled 6814 men and women aged 45-84 years who were free of baseline cardiovascular disease. Using established criteria of non-diabetic, non-smokers with total cholesterol ≤ 200 mg/dL, systolic BP ≤ 139 mmHg, and diastolic BP ≤ 89 mmHg at baseline, we identified 1391 participants with a low lifetime risk for cardiovascular disease. Baseline covariates were age, gender, ethnicity, HDL-C, C-reactive protein, family history of CHD, carotid intima-media thickness and coronary artery calcium (CAC). We calculated event rates and the number needed to scan (NNS) to identify one participant with CAC>0 and > 100.
RESULTS
Over 10.4 years median follow-up, there were 33 events (2.4%) in participants with low lifetime risk. There were 479 participants (34%) with CAC>0, including 183 (13%) with CAC>100. CAC was present in 25 (76%) participants who experienced an event. In multivariable analyses, only CAC>100 remained predictive of CHD (HR 4.6; 95% CI: 1.6-13.6; p = 0.005). The event rates for CAC = 0, CAC>0 and CAC>100 were 0.9/1,000, 5.7/1,000, and 11.0/1,000 person-years, respectively. The NNS to identify one participant with CAC>0 and CAC>100 were 3 and 7.6, respectively.
CONCLUSIONS
While 10-year event rates were low in those with low lifetime risk, CAC was the strongest predictor of incident CHD. Identification of individuals with CAC = 0 and CAC>100 carries significant potential therapeutic implications.
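The NNS figures follow directly from the reported prevalences (479/1391 and 183/1391). A minimal arithmetic sketch, assuming NNS is simply the reciprocal of the prevalence of the CAC finding among those scanned:

```python
# NNS arithmetic from the reported counts, assuming NNS = 1 / prevalence.
n_total = 1391                          # low-lifetime-risk participants scanned
findings = {"CAC>0": 479, "CAC>100": 183}

for label, n in findings.items():
    prevalence = n / n_total
    print(f"NNS for {label}: {1 / prevalence:.1f}")
# -> roughly 2.9 and 7.6, matching the reported NNS of 3 and 7.6.
```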
|
Unprocessing Images for Learned Raw Denoising
|
Machine learning techniques work best when the data used for training resembles the data used for evaluation. This holds true for learned single-image denoising algorithms, which are applied to real raw camera sensor readings but, due to practical constraints, are often trained on synthetic image data. Though it is understood that generalizing from synthetic to real images requires careful consideration of the noise properties of camera sensors, the other aspects of an image processing pipeline (such as gain, color correction, and tone mapping) are often overlooked, despite their significant effect on how raw measurements are transformed into finished images. To address this, we present a technique to “unprocess” images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available Internet photos. We additionally model the relevant components of an image processing pipeline when evaluating our loss function, which allows training to be aware of all relevant photometric processing that will occur after denoising. By unprocessing and processing training data and model outputs in this way, we are able to train a simple convolutional neural network that has 14%-38% lower error rates and is 9×-18× faster than the previous state of the art on the Darmstadt Noise Dataset [30], and generalizes to sensors outside of that dataset as well.
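As a concrete illustration of the idea (not the paper's exact pipeline), one can invert a simplified processing chain step by step; the tone curve, gamma, color matrix, and gain below are illustrative assumptions:

```python
import numpy as np

def unprocess(srgb, gain=1.5, ccm=np.eye(3)):
    """Synthesize raw-like data by inverting a simplified pipeline.
    The tone curve, gamma, color matrix (ccm) and gain used here are
    illustrative assumptions, not the paper's fitted parameters."""
    x = np.clip(srgb.astype(float), 0.0, 1.0)
    # Invert a smoothstep tone curve y = 3x^2 - 2x^3.
    x = 0.5 - np.sin(np.arcsin(1.0 - 2.0 * x) / 3.0)
    # Invert display gamma (approximated by a pure 2.2 power law).
    x = x ** 2.2
    # Invert color correction: map sRGB back toward camera RGB.
    x = x @ np.linalg.inv(ccm).T
    # Invert digital gain to reach sensor-referred intensities.
    return x / gain

# Raw-like data can then be corrupted with realistic shot/read noise
# before being used as training input for a denoiser.
img = np.random.rand(8, 8, 3)
raw = unprocess(img)
```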
|
DHAES: An Encryption Scheme Based on the Diffie-Hellman Problem
|
This paper describes a Diffie-Hellman based encryption scheme, DHAES. The scheme is as efficient as ElGamal encryption, but has stronger security properties. Furthermore, these security properties are proven to hold under appropriate assumptions on the underlying primitive. We show that DHAES has not only the "basic" property of secure encryption (namely privacy under a chosen-plaintext attack) but also achieves privacy under both non-adaptive and adaptive chosen-ciphertext attacks. (And hence it also achieves non-malleability.) DHAES is built in a generic way from lower-level primitives: a symmetric encryption scheme, a message authentication code, group operations in an arbitrary group, and a cryptographic hash function. In particular, the underlying group may be an elliptic-curve group or the multiplicative group of integers modulo a prime number. The proofs of security are based on appropriate assumptions about the hardness of the Diffie-Hellman problem and the assumption that the underlying symmetric primitives are secure. The assumptions are all standard in the sense that no random oracles are involved. We suggest that DHAES provides an attractive starting point for developing public-key encryption standards based on the Diffie-Hellman assumption.
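A minimal sketch of the generic construction described above, under loudly toy assumptions: a 127-bit Mersenne prime group (far too small for real use) and a SHAKE keystream standing in for the symmetric scheme. The detail retained from the scheme is that the hash is applied to the ephemeral public value together with the Diffie-Hellman secret, and the ciphertext is authenticated with a MAC:

```python
import hashlib, hmac, secrets

# Toy group parameters: a 127-bit Mersenne prime and a small generator.
# Illustration only; real deployments use standardized large groups or curves.
P = 2**127 - 1
G = 5
NBYTES = 16   # enough bytes to hold any group element

def keygen():
    sk = secrets.randbelow(P - 2) + 1          # recipient's secret key
    return sk, pow(G, sk, P)                   # (sk, pk)

def encrypt(pk, msg):
    r = secrets.randbelow(P - 2) + 1           # ephemeral secret
    u = pow(G, r, P)                           # ephemeral public value g^r
    s = pow(pk, r, P)                          # Diffie-Hellman secret g^(x*r)
    # Hash the ephemeral public value together with the DH secret,
    # then split the output into an encryption key and a MAC key.
    km = hashlib.sha512(u.to_bytes(NBYTES, "big") +
                        s.to_bytes(NBYTES, "big")).digest()
    enc_key, mac_key = km[:32], km[32:64]
    # Stand-in symmetric scheme: XOR with a SHAKE-256 keystream.
    stream = hashlib.shake_256(enc_key).digest(len(msg))
    ct = bytes(m ^ k for m, k in zip(msg, stream))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return u, ct, tag

def decrypt(sk, u, ct, tag):
    s = pow(u, sk, P)
    km = hashlib.sha512(u.to_bytes(NBYTES, "big") +
                        s.to_bytes(NBYTES, "big")).digest()
    enc_key, mac_key = km[:32], km[32:64]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")   # reject forged ciphertexts
    stream = hashlib.shake_256(enc_key).digest(len(ct))
    return bytes(c ^ k for c, k in zip(ct, stream))

sk, pk = keygen()
u, ct, tag = encrypt(pk, b"hello DHAES")
assert decrypt(sk, u, ct, tag) == b"hello DHAES"
```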
|
Spiral-STC: An On-Line Coverage Algorithm of Grid Environments by a Mobile Robot
|
We describe an on-line sensor-based algorithm for covering planar areas by a square-shaped tool attached to a mobile robot. Let D be the tool size. The algorithm, called Spiral-STC, incrementally subdivides the planar work-area into disjoint D-size cells, while following a spanning tree of the resulting grid. The algorithm covers general grid environments using a path whose length is at most (n+m)D, where n is the number of D-size cells and m ≤ n is the number of boundary cells, defined as cells that share at least one point with the grid boundary. We also report that any on-line coverage algorithm generates a covering path whose length is at least (2-ε)l_opt in the worst case, where l_opt is the length of the optimal covering path. Since (n+m)D ≤ 2l_opt, Spiral-STC is worst-case optimal. Moreover, m << n in practical environments, and the algorithm generates close-to-optimal covering paths in such environments. Simulation results demonstrate the spiral-like covering patterns typical of the algorithm.
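The spanning-tree idea can be sketched compactly. The fragment below is a simplified offline cousin of Spiral-STC: it walks a DFS spanning tree of the coarse grid, traversing each tree edge twice, which is where bounds of the (n+m)D flavor come from; the on-line sensing and the subcell spiral rules of the actual algorithm are omitted:

```python
# Minimal sketch of spanning-tree coverage on a coarse grid (a simplified
# offline cousin of Spiral-STC, not the algorithm itself).
def coverage_path(free_cells, start):
    """free_cells: set of (x, y) coarse cells; returns a DFS walk that
    traverses every spanning-tree edge twice, visiting all cells."""
    path, visited = [start], {start}

    def dfs(cell):
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in visited:
                visited.add(nxt)
                path.append(nxt)    # walk down the tree edge
                dfs(nxt)
                path.append(cell)   # walk back up the same edge
    dfs(start)
    return path

cells = {(x, y) for x in range(4) for y in range(4)}
print(len(coverage_path(cells, (0, 0))))  # 31 = 2*16 - 1 waypoints
```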
|
Combinatorial and probabilistic properties of systems of numeration
|
Let G = (G(n))_{n≥0} be a strictly increasing sequence of positive integers with G(0) = 1. We study the system of numeration defined by this sequence by looking at the corresponding compactification K_G of N and the extension of the addition-by-one map τ on K_G (the 'odometer'). We give sufficient conditions for the existence and uniqueness of τ-invariant measures on K_G in terms of combinatorial properties of G.
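For intuition, here is a minimal sketch of the greedy digit expansion with respect to G and the restriction of the addition-by-one map to the integers; the odometer proper is the continuous extension of this map to the compactification K_G:

```python
# Sketch of the numeration system defined by G, assuming the standard
# greedy digit expansion n = sum(d[i] * G[i]).
def greedy_digits(n, G):
    """Greedy expansion of n in the scale G, least significant digit first."""
    digits = []
    for g in reversed(G):
        digits.append(n // g)
        n %= g
    return digits[::-1]

def odometer(n, G):
    """The add-one map tau, expressed on integers and then re-expanded."""
    return greedy_digits(n + 1, G)

G = [1, 2, 3, 5, 8, 13]   # e.g., a Fibonacci scale (Zeckendorf expansions)
print(greedy_digits(11, G))   # [0, 0, 1, 0, 1, 0]  ->  11 = 3 + 8
print(odometer(11, G))        # [1, 0, 1, 0, 1, 0]  ->  12 = 1 + 3 + 8
```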
|
Foraminiferal Mg/Ca increase in the Caribbean during the Pliocene: Western Atlantic Warm Pool formation, salinity influence, or diagenetic overprint?
|
We constructed a high-resolution Mg/Ca record on the planktonic foraminifer Globigerinoides sacculifer in order to explore the change in sea surface temperature (SST) due to the shoaling of the Isthmus of Panama as well as the impact of secondary factors like diagenesis and large salinity fluctuations. The study covers the latest Miocene and the early Pliocene (5.6–3.9 Ma) and was combined with δ18O to isolate changes in sea surface salinity (SSS). Before 4.5 Ma, SST_Mg/Ca and SSS show moderate fluctuations, indicating a free exchange of surface ocean water masses between the Pacific and the Atlantic. The increase in δ18O after 4.5 Ma represents increasing salinities in the Caribbean.
|
Principles of Health Interoperability
|
This chapter sets out some of the core problems and opportunities facing the digital healthcare sector. Healthcare is all about communication. Large investments in digital health have failed to live up to expectations, partly due to poor interoperability. Patient-centered care requires a new approach, organized primarily for patient benefit, not just for provider organizations. What matters most is the point of care, which is inevitably complex. Many lessons can be learnt from past experience, successes and failures.
|
Compelling Intelligent User Interfaces - How Much AI?
|
Efforts to incorporate intelligence into the user interface have been underway for decades, but the commercial impact of this work has not lived up to early expectations, and is not immediately apparent. This situation appears to be changing. However, so far the most interesting intelligent user interfaces (IUIs) have tended to use minimal or simplistic AI. In this panel we consider whether more or less AI is the key to the development of compelling IUIs. The panelists will present examples of compelling IUIs that use a selection of AI techniques, mostly simple, but some complex. Each panelist will then comment on the merits of different kinds and quantities of AI in the development of pragmatic interface technology.
|
Trust-Aware Collaborative Filtering for Recommender Systems
|
Recommender Systems allow people to find the resources they need by making use of the experiences and opinions of their nearest neighbours. Costly annotations by experts are replaced by a distributed process where the users take the initiative. While the collaborative approach enables the collection of a vast amount of data, a new issue arises: quality assessment. The elicitation of trust values among users, termed the "web of trust", allows a twofold enhancement of Recommender Systems. Firstly, the filtering process can be informed by the reputation of users, which can be computed by propagating trust. Secondly, trust metrics can help to solve a problem associated with the usual method of similarity assessment: its reduced computability. An empirical evaluation on the Epinions.com dataset shows that trust propagation makes it possible to increase the coverage of Recommender Systems while preserving the quality of predictions. The greatest improvements are achieved for new users, who have provided few ratings.
|
Joint combination of point cloud and DSM for 3D building reconstruction using airborne laser scanner data
|
More and more cities are looking for service providers able to deliver 3D city models in a short time. Airborne laser scanning techniques make it possible to acquire a three-dimensional point cloud leading almost instantaneously to digital surface models (DSM), but these models are far from the topological 3D model needed by geographers or land surveyors. The aim of this paper is to present the pertinence and advantages of simultaneously combining the point cloud and the normalized DSM (nDSM) in the main steps of a building reconstruction approach. This approach has been implemented so as to dispense with any additional data and to automate the process. The proposed workflow first extracts the off-terrain mask based on the DSM. Then, it combines the point cloud and the DSM to extract a building mask from the off-terrain mask. Finally, based on the previously extracted building mask, the reconstruction of 3D flat-roof models is carried out and analyzed.
|
Dual-Energy X-Ray Absorptiometry for Quantification of Visceral Fat
|
Obesity is the major risk factor for metabolic syndrome and, through it, diabetes as well as cardiovascular disease. Visceral fat (VF) rather than subcutaneous fat (SF) is the major predictor of adverse events. Currently, the reference standard for measuring VF is abdominal X-ray computed tomography (CT) or magnetic resonance imaging (MRI), requiring heavily utilized clinical equipment. Dual-energy X-ray absorptiometry (DXA) can accurately measure body composition with high precision, low X-ray exposure, and short scanning time. The purpose of this study was to validate a new fully automated method whereby abdominal VF can be measured by DXA. Furthermore, we explored the association between DXA-derived abdominal VF and several other indices for obesity: BMI, waist circumference, waist-to-hip ratio, and DXA-derived total abdominal fat (AF) and SF. We studied 124 adult men and women, aged 18-90 years, representing a wide range of BMI values (18.5-40 kg/m²), measured with both DXA and CT in a fasting state within a one-hour interval. The coefficient of determination (r²) for regression of CT on DXA values was 0.959 for females, 0.949 for males, and 0.957 combined. The 95% confidence interval for r was 0.968 to 0.985 for the combined data. The 95% confidence interval for the mean of the differences between CT and DXA VF volume was -96.0 to -16.3 cm³. The Bland-Altman bias was +67 cm³ for females and +43 cm³ for males. The 95% limits of agreement were -339 to +472 cm³ for females and -379 to +465 cm³ for males. Combined, the bias was +56 cm³ with 95% limits of agreement of -355 to +468 cm³. The correlations between DXA-derived VF and BMI, waist circumference, waist-to-hip ratio, and DXA-derived AF and SF ranged from poor to modest. We conclude that DXA can measure abdominal VF precisely in both men and women. This simple noninvasive method with virtually no radiation can therefore be used to measure VF in individual patients and help define diabetes and cardiovascular risk.
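The agreement statistics quoted above are standard Bland-Altman quantities; a minimal sketch with hypothetical paired volumes (not the study's data), taking the bias as the mean difference and the 95% limits of agreement as bias +/- 1.96 SD:

```python
import numpy as np

def bland_altman(ct, dxa):
    """Bias and 95% limits of agreement between two methods, computed as
    the mean difference +/- 1.96 * SD of the paired differences."""
    diff = np.asarray(dxa, float) - np.asarray(ct, float)  # DXA minus CT
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired VF volumes in cm^3, for illustration only:
ct  = [1200, 850, 2300, 400, 1750]
dxa = [1230, 900, 2250, 430, 1800]
print(bland_altman(ct, dxa))
```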
|
Task-specific feature extraction and classification of fMRI volumes using a deep neural network initialized with a deep belief network: Evaluation using sensorimotor tasks
|
Feedforward deep neural networks (DNNs), artificial neural networks with multiple hidden layers, have recently demonstrated record-breaking performance in multiple areas of application in computer vision and speech processing. Following this success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes and thereby extracted hidden volumetric representations of fMRI that are discriminative for a task performed as the fMRI volume was acquired. Our study applied a fully connected feedforward DNN to fMRI volumes collected in four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize the weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across hidden layers. Optimal weight-sparsity levels were determined from the minimum validation error rate of fMRI volume classification. Minimum error rates (mean ± standard deviation; %) of 6.9 (±3.8) were obtained from the three-layer DNN with the sparsest condition of weights across the three hidden layers. These error rates were even lower than the error rates from the single-layer network (9.4 ± 4.6) and the two-layer network (7.4 ± 4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded the information of the tasks as evaluated from representational similarity analysis. Our reported findings show the ability of the DNN to classify a single fMRI volume based on the extraction of hidden representations of fMRI volumes associated with tasks across multiple hidden layers. Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and the prediction of disease severity and recovery in (pre-)clinical settings using fMRI volumes, without requiring an estimation of activation patterns or ad hoc statistical evaluation.
|
Evaluation of low-complexity visual feature detectors and descriptors
|
Several visual feature extraction algorithms have recently appeared in the literature, with the goal of reducing the computational complexity of state-of-the-art solutions (e.g., SIFT and SURF). It is therefore necessary to evaluate the performance of these emerging visual descriptors in terms of processing time, repeatability, and matching accuracy, and to determine whether they can obtain competitive performance in applications such as image retrieval. This paper aims to provide an up-to-date, detailed, clear, and complete evaluation of local feature detectors and descriptors, focusing on the methods that were designed with complexity constraints, providing a much-needed reference for researchers in this field. Our results demonstrate that recent feature extraction algorithms, e.g., BRISK and ORB, achieve competitive performance at much lower complexity and can be efficiently used in low-power devices.
|
Differential Mode Input Filter Design for a Three-Phase Buck-Type PWM Rectifier Based on Modeling of the EMC Test Receiver
|
For a three-phase buck-type pulsewidth modulation rectifier input stage of a high-power telecommunications power supply module, a differential-mode (DM) electromagnetic compatibility (EMC) filter is designed for compliance with CISPR 22 Class B in the frequency range of 150 kHz-30 MHz. The design is based on a harmonic analysis of the rectifier input current and a mathematical model of the measurement procedure, including the line impedance stabilization network (LISN) and the test receiver. Guidelines for a successful filter design are given, and components for a 5-kW rectifier prototype are selected. Furthermore, formulas for the estimation of the quasi-peak detector output based on the LISN output voltage spectrum are provided. The damping of filter resonances is optimized for a given attenuation in order to facilitate a higher stability margin for system control. Furthermore, the dependence of the filter input and output impedances and of the attenuation characteristic on the inner mains impedance is discussed. As experimentally verified using a three-phase common-/differential-mode separator, this procedure allows accurate prediction of the converter DM conducted emission levels and could therefore be employed in the design process of the rectifier system to ensure compliance with relevant EMC standards.
|
Camera Self-Calibration with Known Camera Orientation
|
|
Sensitivity analysis of a 1 to 18 GHz broadband DRGH antenna
|
In this paper, some properties of a 1-18 GHz double-ridged guide horn antenna (DRGH) with a feeding section including a coaxial input and a back shorting plate are rigorously investigated. Most of the desired electromagnetic characteristics of this antenna are achieved by empirically finding sizes for the different parameters; however, there is no explanation for the effect of most of them in the open literature. In order to have a clear idea of the effects of the different parameters, a 1-18 GHz DRGH has been simulated with HFSS. It is understood from the results that the parameters near the feeding point, such as the initial distance between the ridges, the distance between the center of the probe and the cavity, and the radius of the inserted probe, play a significant role in controlling VSWR and gain and in shaping the radiation pattern at high frequencies.
|
A Comparison of Five Alternative Approaches to Information Systems Development
|
The field of information systems (IS) has grown dramatically over the past three decades. Recent trends have transformed the IS landscape. These trends include: the evolution of implementation technology from centralized mainframe environments towards distributed client-server architectures, embracing the internet and intranets; changes in user interface technology from character-based to graphical user interfaces, multimedia, and the World Wide Web; changes in applications from transaction processing systems towards systems supporting collaborative work; and the use of information technology as an enabler of business process reengineering and redesign. These technology changes, coupled with changes in organizations and their operating environment, such as the growth of the network and virtual organization, internationalization and globalization of many organizations, intensified global competition, and changes in values such as customer orientation (service quality) and Quality of Working Life, have imposed new demands on the development of information systems. These changes have led to an increasing discussion about information systems development (ISD), and in particular, the various methods, tools, methodologies, and approaches for ISD. We believe such discussion has opened the door for new, alternative IS development approaches and methodologies. Our paper takes up this theme by describing five alternative ISD approaches, namely the Interactionist approach, the Speech Act-based approach, Soft Systems Methodology, the Trade Unionist approach, and the Professional Work Practices approach. Despite the fact that most of these approaches have a history of over 15 years, their relevance to IS development is not well recognized in the mainstream of IS practice and research, nor is their institutional status comparable to traditional approaches such as structured analysis and design methods. Therefore we characterize the five approaches as 'alternative' in the sense of alternative to the orthodoxy. The selection of the five approaches is essentially based on the finding that research on ISD approaches and methodologies has been dominated by a single set of philosophical assumptions regarding the nature of the phenomena studied and what constitutes valid knowledge about those phenomena (Hirschheim and Klein, 1989; Orlikowski and Baroudi, 1991; and Iivari, 1991). The idea behind the selection of the five ISD approaches has been to include approaches which challenge the dominant assumptions. These alternative approaches typically build upon radically different conceptions of the goals, meaning, function and processes of ISD. Part of the rationale for our paper is to meet the need for a concise yet penetrating way of introducing alternative ways of system development to a wider audience. The way in which the approaches are introduced highlights their underlying principles and features. This naturally leads to a critical examination of their strengths and weaknesses. From this angle the paper adds more detail to the earlier work on mapping the terrain of the complex literature on IS development (cf. Episkopou and Wood-Harper, 1986; Hirschheim and Klein, 1989; Iivari, 1991; Orlikowski and Baroudi, 1991; Baskerville et al. 1992; Avison et al. 1992; Avgerou and Cornford, 1993; Fitzgerald, 1994; Hirschheim, Klein and Lyytinen 1995; Avison and Fitzgerald, 1995; Jayaratna and Fitzgerald, 1996; Wynekoop and Russo, 1997; Iivari, Hirschheim and Klein 1997).
The paper can be expected to be of interest to the IS community in three respects. Firstly, the five alternative approaches are likely not to be as widely known as they deserve to be. The following meets the need for a concise introduction to them. Secondly, the paper continues our earlier work on mapping the terrain of the complex literature on IS development (Hirschheim and Klein, 1989; Iivari, 1991; Hirschheim and Klein, 1992; Hirschheim, Klein and Lyytinen 1995, 1996; Iivari, Hirschheim and Klein, 1997). Thirdly, it is our contention that the five alternative approaches point the direction which some important IS research will likely take in the future to strengthen the interpretive and critical traditions (Orlikowski and Baroudi, 1991; Hirschheim and Klein, 1994) within the field.
|
Cross-Language Entity Linking
|
There has been substantial recent interest in aligning mentions of named entities in unstructured texts to knowledge base descriptors, a task commonly called entity linking. This technology is crucial for applications in knowledge discovery and text data mining. This paper presents experiments in the new problem of cross-language entity linking, where documents and named entities are in a different language than that used for the content of the reference knowledge base. We have created a new test collection to evaluate cross-language entity linking performance in twenty-one languages. We present experiments that examine issues such as: the importance of transliteration; the utility of cross-language information retrieval; and the potential benefit of multilingual named entity recognition. Our best model achieves performance which is 94% of a strong monolingual baseline.
|
Cell phones: modern man's nemesis?
|
Over the past decade, the use of mobile phones has increased significantly. However, with every technological development comes some element of health concern, and cell phones are no exception. Recently, various studies have highlighted the negative effects of cell phone exposure on human health, and concerns about possible hazards related to cell phone exposure have been growing. This is a comprehensive, up-to-the-minute overview of the effects of cell phone exposure on human health. The types of cell phones and cell phone technologies currently used in the world are discussed in an attempt to improve the understanding of the technical aspects, including the effect of cell phone exposure on the cardiovascular system, sleep and cognitive function, as well as localized and general adverse effects, genotoxicity potential, neurohormonal secretion and tumour induction. The proposed mechanisms by which cell phones adversely affect various aspects of human health, and male fertility in particular, are explained, and the emerging molecular techniques and approaches for elucidating the effects of mobile phone radiation on cellular physiology using high-throughput screening techniques, such as metabolomics and microarrays, are discussed. A novel study is described, which is looking at changes in semen parameters, oxidative stress markers and sperm DNA damage in semen samples exposed in vitro to cell phone radiation.
|
Influence of landmark-based navigation instructions on user attention in indoor smart spaces
|
Using landmark-based navigation instructions is widely considered to be the most effective strategy for presenting navigation instructions. Among other things, landmark-based instructions can reduce the user's cognitive load, increase confidence in navigation decisions and reduce the number of navigational errors. Their main disadvantage is that the user typically focuses a considerable amount of attention on searching for landmark points, which easily results in poor awareness of the user's surroundings. In indoor spaces, this implies that landmark-based instructions can reduce the attention the user pays to advertisements and commercial displays, thus rendering the assistance commercially inviable. To better understand how landmark-based instructions influence the user's awareness of her surroundings, we conducted a user study with 20 participants in a large national supermarket that investigated how the attention the user pays to her surroundings varies across two types of landmark-based instructions that differ in their visual demand. The results indicate that an increase in the visual demand of landmark-based instructions does not necessarily improve the participant's recall of their surrounding environment and that this increase can cause a decrease in navigation efficiency. The results also indicate that participants generally pay little attention to their surroundings and are more likely to rationalize than to actually remember much from their surroundings. Implications of the findings for navigation assistants are discussed.
|
Compact Quintuple-Mode Stub-Loaded Resonator and UWB Filter
|
A novel compact quintuple-mode stub-loaded resonator and an ultrawideband (UWB) bandpass filter (BPF) are proposed in this letter. The proposed resonator can generate two odd modes and three even modes in the desired band. The odd-mode resonance frequencies are exclusively correlated to the modified triple-mode resonator; the even-mode resonance frequencies can be flexibly controlled by the impedance-stepped open and short stubs at its central plane, whereas the odd-mode ones remain fixed. The open stubs in pairs at the two sides of the low-impedance line are mainly applied to adjust the high resonant modes (fm4, fm5) into the desired passband. The short stub can generate two transmission zeros near the lower and upper cutoff frequencies, leading to a high rejection skirt. A quintuple-mode UWB filter is simulated, fabricated, and measured. The EM-simulated and measured results are presented, and excellent agreement is obtained.
|
INVENTORY MANAGEMENT, SERVICE LEVEL AND SAFETY STOCK
|
There are many studies that emphasize, as a first objective of inventory management, minimizing the value invested in inventory because it has a direct impact on return on assets. This approach is not fully correct. The actual objective is to determine the value and the mix of inventory that support a high service level for customers and that maximize the company's financial performance. Many companies look at their own demand fluctuations and assume that there are too many variables to predict demand variability. Service level is used in inventory management to measure the performance of inventory policies and represents the probability of not being stocked out and not losing sales. Safety stock is inventory that is carried to prevent stockouts. Safety stock determinations are not intended to eliminate all stockouts, just the majority of them. Companies choose to keep safety stock levels high as a buffer against demand variability, resulting in inefficiencies and high working-capital requirements. Safety stock optimization enables companies to achieve savings and increase inventory turns.
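One common textbook formulation connects the three concepts in the title: for normally distributed demand and a fixed lead time, safety stock is sized as SS = z·σ_d·√L, where z is the standard-normal quantile of the target service level. A minimal sketch under those assumptions:

```python
from statistics import NormalDist

def safety_stock(service_level, demand_sd, lead_time):
    """Textbook safety-stock sizing (one common formulation, assuming
    normally distributed demand and a fixed lead time):
        SS = z * sigma_d * sqrt(L),
    where z is the standard-normal quantile of the target service level."""
    z = NormalDist().inv_cdf(service_level)
    return z * demand_sd * lead_time ** 0.5

# Example: 95% cycle service level, daily demand SD of 40 units, 9-day lead time.
print(round(safety_stock(0.95, 40, 9)))   # ~197 units
```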
|
Visual Thinking In Action: Visualizations As Used On Whiteboards
|
While it is still most common for information visualization researchers to develop new visualizations from a data- or task-driven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design.
|
Studies of the microstructure and properties of dense ceramic coatings produced by high-velocity oxygen-fuel combustion spraying
|
High-velocity oxygen-fuel (HVOF) spraying stands out among the various processes to improve metal and ceramic coating density and surface characteristics. This paper explores microstructure development, coating characterization and properties of HVOF-sprayed alumina coatings and compares these with those produced using the conventional air plasma spray process. We report on the characterization of these coatings using small-angle neutron scattering (SANS) and X-ray computed microtomography (XMT) to explain the behavior observed for the two coating systems. Microstructure information on porosity, void orientation distribution, void mean opening dimensions and internal surface areas has been obtained using SANS. XMT (X-ray synchrotron microtomography) has been used to nondestructively image the microstructural features in 3D at a 2.7 μm spatial resolution over a 2–3 mm field of view. 3D medial axis analysis has been used for the quantitative analysis of the coarse void space in order to obtain information on the porosity, specific surface area, pore connectivity and size distribution of the larger voids in the coatings. The results reveal different pore morphologies for the two spray processes. While only globular pores are imaged in the plasma-sprayed coatings due to the spatial resolution limit, highly layered porosity is imaged in the HVOF coating. When the quantitative SANS and XMT information are combined, the different thermal and mechanical properties of the two coating types can be explained in terms of their distinctly different void microstructures.
|
Three-Port DC–DC Converter for Stand-Alone Photovoltaic Systems
|
System efficiency and cost effectiveness are of critical importance for photovoltaic (PV) systems. This paper addresses the two issues by developing a novel three-port dc-dc converter for stand-alone PV systems, based on an improved Flyback-Forward topology. It provides a compact single-unit solution with a combined feature of optimized maximum power point tracking (MPPT), high step-up ratio, galvanic isolation, and multiple operating modes for domestic and aerospace applications. A theoretical analysis of the operating modes is conducted, followed by simulation and experimental work. This paper focuses on a comprehensive modulation strategy utilizing both PWM and phase-shifted control that satisfies the requirement of PV power systems to achieve MPPT and output voltage regulation. A 250-W converter was designed and prototyped to provide experimental verification in terms of system integration and high conversion efficiency.
|
Natural TTS Synthesis by Conditioning Wavenet on MEL Spectrogram Predictions
|
This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53, comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and F0 features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.
|
Universal construction based on 3D printing electric motors: Steps towards self-replicating robots to transform space exploration
|
Through a recent confluence of technological capacities, self-replicating robots have become a potentially near-term rather than speculative technology. In a practical sense, self-replicating robots can never become obsolete: the first self-replicating robots will spawn all future generations of robots, subject to deliberate upgrading and/or evolutionary change. Furthermore, this technology promises to revolutionise space exploration by bypassing the apparently insurmountable problem of high launch costs. We present recent efforts in 3D printing the key robotic components required for any such self-replicating machine.
|
Chopper Stabilized Low Resistance Comparator
|
The paper describes an improvement of the chopper method for elimination of parasitic voltages in a low resistance comparison and measurement procedure. The basic circuit diagram along with a short description of the working principle are presented and the appropriate low resistance comparator prototype was designed and realized. Preliminary examinations confirm the possibility of measuring extremely low voltages. Very high accuracy in resistance comparison and measurement is achieved (0.08 ppm for 1,000 attempts). Some special critical features in the design are discussed and solutions for overcoming the problems are described.
|
Direct torque control of four-switch brushless DC Motor with non-sinusoidal back-EMF
|
This paper presents a direct torque control (DTC) technique for brushless DC (BLDC) motors with non-sinusoidal back-EMF using a four-switch inverter in the constant-torque region. This approach introduces a two-phase conduction mode as opposed to the conventional three-phase DTC drives. Unlike conventional six-step PWM current and voltage control schemes, by properly selecting the inverter voltage space vectors of the two-phase conduction mode from a simple look-up table at a predefined sampling time, the desired quasi-square-wave current is obtained. Therefore, a much faster torque response is achieved compared to conventional PWM current and especially voltage control schemes. In addition, for effective torque control in the two-phase conduction mode, a novel switching pattern incorporating the voltage-vector look-up table is designed and implemented for the four-switch inverter to produce the desired torque characteristics. Furthermore, to eliminate the low-frequency torque oscillations caused by the non-ideal trapezoidal shape of the actual back-EMF waveform of the BLDC motor, pre-stored back-EMF constant versus position look-up tables are designed and used in the torque estimation. As a result, it is possible to achieve two-phase conduction DTC of a BLDC motor drive using a four-switch inverter with faster torque response due to the fact that the voltage space vectors are directly controlled. Therefore, the direct torque controlled four-switch three-phase BLDC motor drive could be a good alternative to the conventional six-switch counterpart with respect to low cost and high performance. A theoretical concept is developed, and the validity and effectiveness of the proposed two-phase conduction four-switch DTC scheme are verified through simulation and experimental results.
|
SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment
|
This paper presents the task on the evaluation of Compositional Distributional Semantics Models on full sentences organized for the first time within SemEval-2014. Participation was open to systems based on any approach. Systems were presented with pairs of sentences and were evaluated on their ability to predict human judgments on (i) semantic relatedness and (ii) entailment. The task attracted 21 teams, most of which participated in both subtasks. We received 17 submissions in the relatedness subtask (for a total of 66 runs) and 18 in the entailment subtask (65 runs).
|
How Water Permutes the Structural Organization and Microscopic Dynamics of Cholinium Glycinate Biocompatible Ionic Liquid.
|
We investigate the structural organization and microscopic dynamics of aqueous cholinium glycinate ([Ch][Gly]), a biocompatible ionic liquid (IL), by employing all-atom molecular dynamics simulations. Herein, we observe the effect of water content on the molecular-level arrangement of ions in the IL-water mixture through the simulated X-ray scattering structure function, its partial components, and real-space correlation functions. The study reveals the presence of a principal peak in the total structure function of the neat [Ch][Gly] IL at around q = 1.4 Å^-1. The corresponding correlation tends to decrease and shifts toward shorter length scales with increasing water content. It is found that the principal peak mainly originates from the correlations between counter-ions. Hydrogen-bond analysis reveals that water molecules compete with the anions to form hydrogen bonds with the hydroxyl hydrogen of the cation. Concomitantly, strong hydrogen bonding is also observed between the [Gly]- anion and water, which weakens with increasing hydration level. Hydrogen-bond autocorrelation function analysis shows that the average lifetimes of the different possible hydrogen bonds decrease with increasing mole fraction of water. The mobilities of the ions are also significantly affected by water, showing a nonlinear increase with increasing water content. The [Gly]- anion is found to show faster dynamics on the addition of water as compared to the [Ch]+ cation.
|
Manifold Mixup : Learning Better Representations by Interpolating Hidden States
|
Deep networks often perform well on the data distribution on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points from off of the training distribution. This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. Ideally, a model would assign lower confidence to points unlike those from the training distribution. We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points. Because the hidden states are learned, this has an important effect of encouraging the hidden states for a class to be concentrated in such a way so that interpolations within the same class or between two different classes do not intersect with the real data points from other classes. This has a major advantage in that it avoids the underfitting which can result from interpolating in the input space. We prove that the exact condition for this problem of underfitting to be avoided by Manifold Mixup is that the dimensionality of the hidden states exceeds the number of classes, which is often the case in practice. Additionally, this concentration can be seen as making the features in earlier layers more discriminative. We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and Negative Log-Likelihood on held out samples. Code available at https://github.com/vikasverma1077/manifold_mixup
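A minimal PyTorch sketch of the core mechanism (random layer choice, Beta-sampled lambda, and mixed soft targets follow the usual mixup recipe; the small network itself is an illustrative stand-in):

```python
import random
import torch
import torch.nn as nn

class ManifoldMixupMLP(nn.Module):
    """Sketch of Manifold Mixup: interpolate hidden states (or inputs)
    at a randomly chosen layer with lambda ~ Beta(alpha, alpha), and
    train against the correspondingly mixed targets."""

    def __init__(self, d_in=784, d_h=256, n_classes=10):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU()),
            nn.Sequential(nn.Linear(d_h, d_h), nn.ReLU()),
            nn.Linear(d_h, n_classes),
        ])

    def forward(self, x, y_onehot, alpha=2.0):
        k = random.randrange(len(self.layers))   # mixing layer (0 = input mixup)
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(x.size(0))         # pairing within the minibatch
        h = x
        for i, layer in enumerate(self.layers):
            if i == k:                           # interpolate hidden states here
                h = lam * h + (1 - lam) * h[perm]
            h = layer(h)
        y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
        return h, y_mixed   # train with soft-target cross-entropy on y_mixed
```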
|
COMPONENTS OF PERFECTIONISM AND PROCRASTINATION IN COLLEGE STUDENTS
|
The present research examined the relations between individual differences in perfectionism and procrastinatory behavior in college students. A sample of 131 students (56 males, 75 females) completed measures of self-oriented, other-oriented, and socially prescribed perfectionism, as well as measures of academic procrastination and general procrastination. Subjects also completed ratings of factors related to procrastination (i.e., fear of failure, task aversiveness). Correlational analyses revealed that it was the socially prescribed perfectionism dimension that was most closely correlated with both generalized procrastination and academic procrastination, especially among males. There were few significant correlations involving self-oriented and other-oriented perfectionism. However, the fear of failure component of procrastination was associated broadly with all the perfectionism dimensions. Overall, the results suggest that procrastination stems, in part, from the anticipation of social disapproval from individuals with perfectionistic standards for others.
|
Dynamic Identification of Dynamic Stochastic General Equilibrium Models
|
THIS SUPPLEMENTARY DOCUMENT contains additional examples. Matlab code for constructing the Δ^S(θ) and Δ^NS(θ) matrices proposed in Komunjer and Ng (2011) is also provided to show that while the expression appears complex, the computation is simple. Once the minimal representation is obtained, Δ_Λ(θ) and Δ^NS(θ) can be computed using numerical differentiation. The Δ_T(θ) and Δ_T^NS(θ) only require specification of n_X, while Δ_U^S(θ) only requires n_ε. Section S.1 uses the model of An and Schorfheide (2007) to study the implications of (i) adding c_t to the observables, and (ii) dropping variables to remove singularity. Section S.2 analyzes the model in Smets and Wouters (2007). It is shown that putting the model into minimal state-space representation reveals features of the model that are not otherwise transparent. In particular, the parameters in the policy rule, output, and potential output equations are not independent. Section S.3 analyzes the model of Christiano, Eichenbaum, and Evans (2005). Section S.4 considers the model of García-Cicco, Pancrazi, and Uribe (2010), which is identifiable without further restrictions. Matlab code for computing the Δ(θ_0) matrix is given in Section S.5.
|
SF-sketch: slim-fat-sketch with GPU assistance
|
A sketch is a probabilistic data structure that is used to record frequencies of items in a multi-set. Various types of sketches have been proposed in the literature and applied in a variety of fields, such as data stream processing, natural language processing, and distributed data sets. While several variants of sketches have been proposed in the past, existing sketches still leave significant room for improvement in terms of accuracy. In this paper, we propose a new sketch, called the Slim-Fat (SF) sketch, which has significantly higher accuracy compared to prior art, a much smaller memory footprint, and at the same time achieves the same speed as the best prior sketch. The key idea behind our proposed SF-sketch is to maintain two separate sketches: a small sketch called the Slim-subsketch and a large sketch called the Fat-subsketch. The Slim-subsketch, stored in fast memory (SRAM), enables fast and accurate querying. The Fat-subsketch, stored in relatively slow memory (DRAM), is used to assist insertion into and deletion from the Slim-subsketch. We implemented and extensively evaluated SF-sketch along with several prior sketches and compared them side by side. Our experimental results show that SF-sketch outperforms the most commonly used CM-sketch by up to 33.1 times in terms of accuracy.
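To convey the two-sketch idea, here is an illustrative count-min-style pair (not the paper's exact update rule): the large Fat-subsketch absorbs increments, and each insertion refreshes the corresponding cells of the small Slim-subsketch from the Fat estimate, so that queries touch only the small table:

```python
import hashlib

class SlimFatSketch:
    """Illustrative slim/fat pair in the spirit of SF-sketch (not the
    paper's exact update rule): a large CM-style Fat-subsketch absorbs
    updates, and a small Slim-subsketch is refreshed from the Fat
    estimate so queries can hit the small, cache-friendly table."""

    def __init__(self, d=4, w_slim=1 << 10, w_fat=1 << 14):
        self.d = d
        self.w_slim, self.w_fat = w_slim, w_fat
        self.slim = [[0] * w_slim for _ in range(d)]
        self.fat = [[0] * w_fat for _ in range(d)]

    def _h(self, item, row, width):
        digest = hashlib.blake2b(f"{row}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % width

    def insert(self, item):
        # Standard count-min update on the Fat-subsketch.
        cells = [self._h(item, r, self.w_fat) for r in range(self.d)]
        for r, j in enumerate(cells):
            self.fat[r][j] += 1
        est = min(self.fat[r][j] for r, j in enumerate(cells))
        # Refresh the Slim-subsketch from the Fat estimate; each slim
        # cell only ever grows, so it never under-counts an item.
        for r in range(self.d):
            j = self._h(item, r, self.w_slim)
            self.slim[r][j] = max(self.slim[r][j], est)

    def query(self, item):
        return min(self.slim[r][self._h(item, r, self.w_slim)]
                   for r in range(self.d))
```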
|
National Embeddedness and Calculative HRM in US Subsidiaries in Europe and Australia
|
This article presents a study of the degree to which national institutional settings impact on the application of management practices in foreign subsidiaries of multinational companies. Applying the national business systems approach, our study centres on the use of calculative human resource management (HRM) practices by subsidiaries of US multinational companies in the UK, Ireland, Germany, Denmark/Norway and Australia, respectively, in comparison with these countries' indigenous firms. The analysis indicates that while US subsidiaries adapt to the local setting in terms of applying calculative HRM practices, they also diverge from indigenous firm practices.
|
Infection and preterm birth.
|
As many as 50% of spontaneous preterm births are infection-associated. Intrauterine infection leads to a maternal and fetal inflammatory cascade, which produces uterine contractions and may also result in long-term adverse outcomes, such as cerebral palsy. This article addresses the prevalence, microbiology, and management of intrauterine infection in the setting of preterm labor with intact membranes. It also outlines antepartum treatment of infections for the purpose of preventing preterm birth.
|
Fast Computation of the Difference of Low-Pass Transform
|
This paper defines the difference of low-pass (DOLP) transform and describes a fast algorithm for its computation. The DOLP is a reversible transform which converts an image into a set of bandpass images. A DOLP transform is shown to require O(N²) multiplies and produce O(N log N) samples from an N-sample image. When Gaussian low-pass filters are used, the result is a set of images which have been convolved with difference-of-Gaussian (DOG) filters from an exponential set of sizes. A fast computation technique based on "resampling" is described and shown to reduce the DOLP transform complexity to O(N log N) multiplies and O(N) storage locations. A second technique, "cascaded convolution with expansion," is then defined and also shown to reduce the computational cost to O(N log N) multiplies. Combining these two techniques yields an algorithm for a DOLP transform that requires O(N) storage cells and O(N) multiplies.
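A minimal difference-of-Gaussian sketch of the DOLP idea, using an exponential set of filter sizes; the telescoping construction makes reversibility immediate, though this plain version costs more than the resampled O(N log N) algorithm described in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dolp(image, levels=4, sigma0=1.0):
    """Sketch of a difference-of-low-pass decomposition: each bandpass
    image is the difference between blurs at adjacent scales from an
    exponential set (sigma0 * 2^k), plus the final low-pass residual.
    The sum of all returned bands reconstructs the input exactly."""
    bands, prev = [], image.astype(float)
    for k in range(levels):
        cur = gaussian_filter(image.astype(float), sigma0 * 2 ** (k + 1))
        bands.append(prev - cur)   # bandpass between adjacent scales
        prev = cur
    bands.append(prev)             # low-pass residual
    return bands

img = np.random.rand(64, 64)
assert np.allclose(sum(dolp(img)), img)   # reversible by construction
```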
|
Visual Analysis of Cardiac 4D MRI Blood Flow Using Line Predicates
|
Four-dimensional MRI is an in vivo flow imaging modality that is expected to significantly enhance the understanding of cardiovascular diseases. Among other fields, 4D MRI provides valuable data for the research of cardiac blood flow and with that the development, diagnosis, and treatment of various cardiac pathologies. However, to gain insights from larger research studies or to apply 4D MRI in the clinical routine later on, analysis techniques become necessary that allow to robustly identify important flow characteristics without demanding too much time and expert knowledge. Heart muscle contractions and the particular complexity of the flow in the heart imply further challenges when analyzing cardiac blood flow. Working toward the goal of simplifying the analysis of 4D MRI heart data, we present a visual analysis method using line predicates. With line predicates precalculated integral lines are sorted into bundles with similar flow properties, such as velocity, vorticity, or flow paths. The user can combine the line predicates flexibly and by that carve out interesting flow features helping to gain overview. We applied our analysis technique to 4D MRI data of healthy and pathological hearts and present several flow aspects that could not be shown with current methods. Three 4D MRI experts gave feedback and confirmed the additional benefit of our method for their understanding of cardiac blood flow.
|
Ensemble Transfer Learning Algorithm
|
Transfer learning and ensemble learning are the new trends for solving the problem that training data and test data have different distributions. In this paper, we design an ensemble transfer learning framework to improve classification accuracy when the training data are insufficient. First, a weighted-resampling method for transfer learning is proposed, named TrResampling. In each iteration, the data with heavy weights in the source domain are resampled, and the TrAdaBoost algorithm is used to adjust the weights of the source data and target data. Second, three classic machine learning algorithms, namely naive Bayes, decision tree, and SVM, are used as the base learners of TrResampling, and the base learner with the best performance is chosen for transfer learning. To illustrate the performance of TrResampling, TrAdaBoost and the decision tree are used for evaluation and comparison on 15 UCI data sets; TrAdaBoost, ARTL, and SVM are used for evaluation and comparison on five text data sets. According to the experimental results, our proposed TrResampling is superior to the state-of-the-art learning methods on the UCI data sets and text data sets. In addition, TrResampling, a bagging-based transfer learning algorithm, and a MultiBoosting-based transfer learning algorithm (TrMultiBoosting) are assembled in the framework, and we compare the three ensemble transfer learning algorithms with TrAdaBoost to illustrate the framework's effective transfer ability.
|
Can somatosensory system generate frequency following response?
|
The aim of this study was to establish whether the functional characteristics of the somatosensory system structures in man comply with the frequency following response (FFR) generators. Somatosensory cerebral evoked potentials (SsCEP) were recorded by skin electrodes, and spinal somatosensory evoked potentials (SpEP) by both epidural and skin electrodes. In SpEP and SsCEP to trains of electrical or mechanical stimuli, a decrease of the amplitude to subsequent stimuli was found. SpEP were also attenuated by higher stimulation rates. It is highly improbable, therefore, that the somatosensory system can contribute to the FFR-like response recorded in profoundly deaf people.
|
Security and Privacy in Device-to-Device (D2D) Communication: A Review
|
Device-to-device (D2D) communication presents a new paradigm in mobile networking to facilitate data exchange between physically proximate devices. The development of D2D is driven by mobile operators to harvest short range communications for improving network performance and supporting proximity-based services. In this paper, we investigate two fundamental and interrelated aspects of D2D communication, security and privacy, which are essential for the adoption and deployment of D2D. We present an extensive review of the state-of-the-art solutions for enhancing security and privacy in D2D communication. By summarizing the challenges, requirements, and features of different proposals, we identify lessons to be learned from existing studies and derive a set of “best practices.” The primary goal of our work is to equip researchers and developers with a better understanding of the underlying problems and the potential solutions for D2D security and privacy. To inspire follow-up research, we identify open problems and highlight future directions with regard to system and communication design. To the best of our knowledge, this is the first comprehensive review to address the fundamental security and privacy issues in D2D communication.
|
Experimental hypoglycemia is a human model of stress-induced hyperalgesia
|
Hypoglycemia is a physiological stress that leads to the release of stress hormones, such as catecholamines and glucocorticoids, and proinflammatory cytokines. These factors, in euglycemic animal models, are associated with stress-induced hyperalgesia. The primary aim of this study was to determine whether experimental hypoglycemia in humans would lead to a hyperalgesic state. In 2 separate 3-day admissions separated by 1 to 3 months, healthy study participants were exposed to two 2-hour euglycemic hyperinsulinemic clamps or two 2-hour hypoglycemic hyperinsulinemic clamps. Thermal quantitative sensory testing and thermal pain assessments were measured the day before and the day after euglycemia or hypoglycemia. In contrast to prior euglycemia exposure, prior hypoglycemia exposure resulted in enhanced pain sensitivity to hot and cold stimuli as well as enhanced temporal summation to repeated heat-pain stimuli. These findings suggest that prior exposure to hypoglycemia causes a state of enhanced pain sensitivity that is consistent with stress-induced hyperalgesia. This human model may provide a framework for hypothesis testing and targeted, mechanism-based pharmacological interventions to delineate the molecular basis of hyperalgesia and pain susceptibility.
|
Recent Advances in the Development of Cardiovascular Biomarkers.
|
Cardiovascular diseases (CVD) are initiated by endothelial dysfunction and the resultant expression of adhesion molecules for inflammatory cells. Inflammatory cells secrete cytokines/chemokines and growth factors and promote CVD. Additionally, vascular cells themselves produce and secrete several factors, some of which can be useful for the early diagnosis and evaluation of disease severity of CVD. Among vascular cells, abundant vascular smooth muscle cells (VSMCs) secrete a variety of humoral factors that affect vascular functions in an autocrine/paracrine manner. Among these factors, we reported that CyPA (cyclophilin A) is secreted mainly from VSMCs in response to Rho-kinase activation and excessive reactive oxygen species (ROS). Additionally, extracellular CyPA augments ROS production, damages vascular functions, and promotes CVD. Importantly, a recent study in ATVB demonstrated that ambient air pollution increases serum levels of inflammatory cytokines. Moreover, Bell et al. reported an association of air pollution exposure with high-density lipoprotein (HDL) cholesterol and particle number. In a large, multiethnic cohort study of men and women free of prevalent clinical CVD, they found that higher concentrations of PM2.5 over a 3-month period were associated with lower HDL particle number, and higher annual concentrations of black carbon were associated with lower HDL cholesterol. Together with the authors' previous work on biomarkers of oxidative stress, they provided evidence for potential pathways that may explain the link between air pollution exposure and acute cardiovascular events. The objective of this review is to highlight novel research in the field of biomarkers for CVD.
|
A WL-SPPIM Semantic Model for Document Classification
|
In this paper, we explore an SPPIM-based text classification method. Our experiments on three standard international text datasets, namely 20newsgroups, Reuters52 and WebKB, reveal that the SPPIM method is comparable, and sometimes superior, to the SGNS method in the text classification task, but not consistently better. Based on our analysis, SGNS takes weighting into consideration during the decomposition process, so it performs better than SPPIM on some standard datasets. Inspired by this, we propose WL-SPPIM, a semantic model based on the SPPIM model, and our experiments show that the WL-SPPIM approach achieves better classification and higher scalability in the text classification task compared with the LDA, SGNS and SPPIM approaches.
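The abstract takes the SPPIM (shifted positive PMI) construction for granted. As a minimal sketch of that baseline, following the standard shifted-PPMI recipe of Levy and Goldberg (2014) rather than the paper's WL-weighted variant (whose weighting scheme is not specified here), the matrix and an SVD embedding can be computed as follows; all names are illustrative.

```python
import numpy as np

def sppmi_matrix(cooc, k=5):
    """Shifted positive PMI: SPPMI = max(PMI(w, c) - log k, 0), where k
    plays the role of the negative-sampling constant in SGNS.
    cooc is a (V, V) word-context co-occurrence count matrix."""
    total = cooc.sum()
    word = cooc.sum(axis=1, keepdims=True)   # marginal word counts
    ctx = cooc.sum(axis=0, keepdims=True)    # marginal context counts
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(cooc * total / (word * ctx))
    pmi[~np.isfinite(pmi)] = 0.0             # zero co-occurrences contribute 0
    return np.maximum(pmi - np.log(k), 0.0)

def embed(sppmi, dim=100):
    """Truncated SVD of the SPPMI matrix yields word vectors; document
    vectors for a downstream classifier can be built by averaging them."""
    u, s, _ = np.linalg.svd(sppmi, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])
```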
|
Low Cost Flywheel Energy Storage for a Fuel Cell Powered Transit Bus
|
This paper presents work that was performed to design a compact flywheel energy storage solution for a fuel cell powered transit bus with a focus on commercialization requirements. For hybrid vehicle applications, flywheels offer much higher power densities than conventional batteries. The presented design attempts to maximize the use of lower-cost technologies: the rotor relies primarily on steel for the flywheel structure, and emphasis is placed on size reduction for vehicle packaging advantages. Simulations of bus configurations on measured routes were performed using PSAT to correctly size the flywheel energy storage system.
|
The relationship between job satisfaction and intention to leave current employment among registered nurses in a teaching hospital.
|
AIMS AND OBJECTIVES
To assess Malaysian nurses' perceived job satisfaction and to determine whether any association exists between job satisfaction and intention to leave current employment.
BACKGROUND
There is currently a shortage of qualified nurses, and healthcare organisations often face challenges in retaining trained nurses. Job satisfaction has been identified as a factor that influences nurse turnover. However, this has not been widely explored in Malaysia.
DESIGN
Cross-sectional survey.
METHODS
Registered nurses in a teaching hospital in Malaysia completed a self-administered questionnaire. Of the 150 questionnaires distributed, 141 were returned (response rate = 94%).
RESULTS
Overall, nurses had a moderate level of job satisfaction, with higher satisfaction for motivational factors. Significant associations were observed between job satisfaction and demographic variables. About 40% of the nurses intended to leave their current employment. Furthermore, age, work experience and nursing education had significant associations with intention to leave. Logistic regression analysis revealed that job satisfaction was a significant and independent predictor of nurses' intention to leave after controlling for demographic variables.
CONCLUSION
The results suggest that there is a significant association between job satisfaction and nurses' intention to leave their current employment. It adds to the existing literature on the relationship between nurses' job satisfaction and intention to leave.
RELEVANCE TO CLINICAL PRACTICE
Methods for enhancing nurses' job satisfaction are vital to promote the long-term retention of nurses within organisations. Attention must be paid to the needs of younger nurses, as they represent the majority of the nursing workforce and often have lower satisfaction and greater intention to leave than older nurses do. Strategies to nurture younger nurses, such as providing opportunities for further education, greater involvement in management decision-making and a flexible working environment, are essential.
|
Working after a stroke: survivors' experiences and perceptions of barriers to and facilitators of the return to paid employment.
|
PURPOSE
This paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.
METHOD
Our qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.
RESULTS
At the time of their stroke, a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to work during the period that they were involved in the study. Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months, and nine returned in under three months, in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke, or who had not returned to work during the period of the study, also endorsed these views; however, they felt that a variety of barriers and practical problems prevented them from working, and in some cases they had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke: for some participants work was a cause of stress and therefore potentially risky, while for others it was a way of demonstrating recovery from stroke. The value and meaning of work varied between participants, and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers to or facilitators of return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.
CONCLUSION
Participants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. Individuals who returned to work recognized the barriers but had found ways of managing them.
|
Coexistence in millimeter-wave WBAN: A game theoretic approach
|
This paper examines the feasibility of 60 GHz mmWave in wireless body area networks (WBANs) and provides theoretical evidence for it by analyzing its properties. It has been shown that 60 GHz based communication could better fit WBANs than traditional 2.4 GHz based communication because of its compact network coverage, miniaturized devices, superior frequency reuse, multi-gigabit transmission rates and the therapeutic merits for human health. Since allowing coexistence among the WBANs can enhance the efficiency of the mmWave based WBANs, we formulated the coexistence problem as a non-cooperative distributed power control game. This paper proves the existence of a Nash equilibrium (NE) and derives the best response move as a solution. The efficiency of the NE is also improved by modifying the utility function and introducing a pair of pricing factors. Our simulation results indicate that the proposed pricing policy significantly improves the efficiency in terms of Pareto optimality and social optimality.
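The abstract does not spell out the utility or pricing functions, so the following is only an illustrative sketch of the kind of distributed best-response dynamic such a game admits. It assumes the textbook utility log(1 + SINR_i) minus a linear price on transmit power, for which the best response has a closed form.

```python
import numpy as np

def best_response_power_control(G, noise, price, p_max, iters=100):
    """Iterated best response for a non-cooperative power control game.

    Each WBAN i maximizes log(1 + SINR_i) - price[i] * p_i; setting the
    derivative to zero gives p_i = 1/price[i] - I_i / G[i, i], clipped to
    [0, p_max], where I_i is interference-plus-noise at receiver i and
    G[i, j] is the channel gain from transmitter j to receiver i.
    """
    n = G.shape[0]
    p = np.full(n, p_max / 2)
    for _ in range(iters):
        for i in range(n):
            interference = noise + G[i].dot(p) - G[i, i] * p[i]
            p[i] = np.clip(1.0 / price[i] - interference / G[i, i], 0.0, p_max)
    return p
```

By construction, any fixed point of this iteration is a Nash equilibrium of the sketched game, and raising the prices pushes the equilibrium toward lower transmit powers, which is the role the paper's pricing factors play in improving Pareto and social optimality.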
|
Exocarp Properties and Transcriptomic Analysis of Cucumber (Cucumis sativus) Fruit Expressing Age-Related Resistance to Phytophthora capsici
|
Very young cucumber (Cucumis sativus) fruit are highly susceptible to infection by the oomycete pathogen, Phytophthora capsici. As the fruit complete exponential growth, at approximately 10-12 days post pollination (dpp), they transition to resistance. The development of age-related resistance (ARR) is increasingly recognized as an important defense against pathogens, however, underlying mechanisms are largely unknown. Peel sections from cucumber fruit harvested at 8 dpp (susceptible) and 16 dpp (resistant) showed equivalent responses to inoculation as did whole fruit, indicating that the fruit surface plays an important role in defense against P. capsici. Exocarp from 16 dpp fruit had thicker cuticles, and methanolic extracts of peel tissue inhibited growth of P. capsici in vitro, suggesting physical or chemical components to the ARR. Transcripts specifically expressed in the peel vs. pericarp showed functional differentiation. Transcripts predominantly expressed in the peel were consistent with fruit surface associated functions including photosynthesis, cuticle production, response to the environment, and defense. Peel-specific transcripts that exhibited increased expression in 16 dpp fruit relative to 8 dpp fruit, were highly enriched (P<0.0001) for response to stress, signal transduction, and extracellular and transport functions. Specific transcripts included genes associated with potential physical barriers (i.e., cuticle), chemical defenses (flavonoid biosynthesis), oxidative stress, penetration defense, and molecular pattern (MAMP)-triggered or effector-triggered (R-gene mediated) pathways. The developmentally regulated changes in gene expression between peels from susceptible- and resistant- age fruits suggest programming for increased defense as the organ reaches full size.
|
Banks ' Liability Structure and Mortgage Lending During the Financial Crisis
|
We examine the impact of banks’ exposure to market liquidity shocks through wholesale funding on their supply of credit during the financial crisis in the United States. We focus on mortgage lending to minimize the impact of confounding demand factors that could potentially be large when comparing banks’ overall lending across heterogeneous categories of credit. The disaggregated data on mortgage applications that we use allow us to study the time variations in banks’ decisions to grant mortgage loans, while controlling for bank, borrower, and regional characteristics. The wealth of data also allows us to carry out matching exercises that eliminate imbalances in observable applicant characteristics between wholesale and retail banks, as well as various other robustness tests. We find that banks that were more reliant on wholesale funding curtailed their credit significantly more than retail-funded banks during the crisis. The demand for mortgage credit, on the other hand, declined evenly across wholesale and retail banks. To understand the aggregate implications of our findings, we exploit the heterogeneity in mortgage funding across U.S. Metropolitan Statistical Areas (MSAs) and find that wholesale funding was a strong and significant predictor of a sharper decline in overall mortgage credit at the MSA level. JEL Classification Numbers: G01, G21, E50
|
Archaeoastronomical Evidence for Wuism at the Hongshan Site of Niuheliang
|
Introduction The Neolithic Hongshan Culture flourished between 4500 and 3000 BCE in what is today northeastern China and Inner Mongolia (Figure 1). Village sites are found in the northern part of the region, while the two ceremonial sites of Dongshanzui and Niuheliang are located in the south, where villages are fewer (Guo 1995, Li 2003). The Hongshan inhabitants included agriculturalists who cultivated millet and raised pigs for subsistence, and accomplished artisans who carved finely crafted jades and made thin black-on-red pottery. Organized labor of a large number of workers is suggested by several impressive constructions, including an artificial hill containing three rings of marble-like stone, several high cairns with elaborate interiors and a 22-meter-long building which contained fragments of life-sized statues. One fragment was a face with inset green jade eyes (Figure 2). A ranked society is implied by the burials, which include decorative jades made in specific, possibly iconographic, shapes. It has been argued previously that the sizes and locations of the mounded tombs imply at least three elite ranks (Nelson 1996).
|
Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification
|
Dynamic Textures (DTs) are sequences of images of moving scenes, such as smoke, vegetation and fire, that exhibit certain stationarity properties in time. The analysis of DT is important for recognition, segmentation, synthesis or retrieval for a range of applications including surveillance, medical imaging and remote sensing. Deep learning methods have shown impressive results and are now the new state of the art for a wide range of computer vision tasks including image and video recognition and segmentation. In particular, Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis with a design similar to a filter bank approach. In this paper, we develop a new approach to DT analysis based on a CNN method applied on three orthogonal planes xy, xt and yt. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier. Our results on a wide range of commonly used DT classification benchmark datasets prove the robustness of our approach. Significant improvement of the state of the art is shown on the larger datasets.
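To make the three-orthogonal-planes idea concrete, the sketch below (names are illustrative) shows how a (T, H, W) dynamic texture clip yields the three kinds of 2D inputs; in practice many slices per plane are sampled, each plane feeds its own CNN stream, and the per-plane outputs are combined into the final classifier, with the paper's precise fusion rule not spelled out in the abstract.

```python
import numpy as np

def three_orthogonal_planes(video):
    """Extract one slice on each of the xy, xt and yt planes.

    video: array of shape (T, H, W). The xy slice is an ordinary spatial
    frame; the xt and yt slices are temporal cross-sections that capture
    the dynamics of the texture.
    """
    t, h, w = video.shape
    xy = video[t // 2, :, :]   # spatial frame             (H, W)
    xt = video[:, h // 2, :]   # horizontal-temporal slice (T, W)
    yt = video[:, :, w // 2]   # vertical-temporal slice   (T, H)
    return xy, xt, yt
```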
|
Ceci n'est pas une pipe: A deep convolutional network for fine-art paintings classification
|
“Ceci n'est pas une pipe” is French for “This is not a pipe”, the description painted on the first painting in the figure above. To most of us, how could this painting not be a pipe? Yet not to the great Belgian surrealist artist Rene Magritte, who said that the painting is not a pipe, but rather an image of a pipe. In this paper, we present a study on large-scale classification of fine-art paintings using a Deep Convolutional Network. Our objectives are twofold. On one hand, we would like to train an end-to-end deep convolutional model to investigate the capability of deep models in the fine-art painting classification problem. On the other hand, we argue that classification of fine-art collections is a more challenging problem than object or face recognition, because some artworks are neither representational nor figurative, and recognizing them might require imagination. Hence, a question arises: can a machine have, or capture, “imagination” in paintings? One way to find out is to train a deep model and then visualize the low-level to high-level features learnt. In the experiments, we employed the recently released large-scale “Wikiart paintings” dataset, which consists of more than 80,000 paintings, and our solution achieved state-of-the-art results (68%) in overall performance.
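As a rough sketch of the training setup such a study implies (not the authors' actual architecture, which the abstract does not specify), fine-tuning a pretrained CNN on style labels could look like the following; `loader` and `num_styles` are assumptions, with 27 being a commonly used count of Wikiart style classes.

```python
import torch
import torch.nn as nn
from torchvision import models

num_styles = 27  # assumption: number of style classes in the dataset
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_styles)  # new classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over (image_batch, style_label) pairs; `loader` is a
    placeholder for a Wikiart-style torch DataLoader."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Visualizing the learnt low-level to high-level features, as the paper proposes, can then be done with standard activation or gradient-based inspection of the model's intermediate layers.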
|
Progressive compression for lossless transmission of triangle meshes
|
Lossless transmission of 3D meshes is a very challenging and timely problem for many applications, ranging from collaborative design to engineering. Additionally, frequent delays in transmissions call for progressive transmission in order for the end user to receive useful successive refinements of the final mesh. In this paper, we present a novel, fully progressive encoding approach for lossless transmission of triangle meshes with a very fine granularity. A new valence-driven decimating conquest, combined with patch tiling and an original strategic retriangulation is used to maintain the regularity of valence. We demonstrate that this technique leads to good mesh quality, near-optimal connectivity encoding, and therefore a good rate-distortion ratio throughout the transmission. We also improve upon previous lossless geometry encoding by decorrelating the normal and tangential components of the surface. For typical meshes, our method compresses connectivity down to less than 3.7 bits per vertex, 40% better on average than the best methods previously reported [5, 18]; we further reduce the usual geometry bit rates by 20% on average by exploiting the smoothness of meshes. Concretely, our technique can reduce an ASCII VRML 3D model down to 1.7% of its size for a 10-bit quantization (2.3% for a 12-bit quantization) while providing a very progressive reconstruction.
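The normal/tangential decorrelation admits a compact sketch. Assuming a predicted vertex position (for instance, the barycenter of the patch neighbors) and an estimated unit normal (the paper's exact local frame construction is not reproduced here), the prediction residual splits as follows.

```python
import numpy as np

def decorrelate_residual(vertex, prediction, normal):
    """Split a geometric prediction residual into normal and tangential
    components. On smooth meshes most of the residual energy lies along
    the normal, so the two parts can be quantized with different
    precisions before entropy coding."""
    residual = vertex - prediction
    n = normal / np.linalg.norm(normal)
    normal_part = residual.dot(n)                 # scalar along the normal
    tangential_part = residual - normal_part * n  # in-plane remainder
    return normal_part, tangential_part
```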
|
Towards Natural Interactive Question Answering
|
Interactive question answering systems should allow users to lead a coherent information seeking dialogue. Compared with systems that only locally evaluate a question, interactive systems facilitate the information seeking process and provide a more natural feel. We show that by extending a QA system to handle several types of anaphora and ellipsis, the naturalness of the interaction can be considerably improved. We describe an implementation in our prototype QA system for German and give a walk-through example of the enhanced interaction capabilities.
|
Yakchi chert–volcanogenic Formation—fragment of the Jurassic accretionary prism in the Central Sikhote-Alin, Russian Far East
|
The Yakchi chert–volcanogenic formation is differentiated at the base of the stratigraphic succession in the Khor-Tormasu subzone of the Central Sikhote-Alin structural–formational zone, or the Samarka terrane of the Jurassic accretionary prism. The paper considers the results of biostratigraphic study of its deposits and petrogeochemical studies of its basalts. A tectonically disrupted sequence of the Yakchi Formation is restored on the basis of fossil conodonts and radiolarians, and its Late Permian–Middle Jurassic age is determined. The authors interpret the resulting stratigraphic succession in terms of changing depositional settings on the moving oceanic plate and recognize events of the ocean history recorded in it. Chert accumulated on the oceanic plate in pelagic Panthalassa/Paleopacifica from the Late Permian through to the Middle Jurassic. Deposition of siliceous claystone in the Late Permian–Early Triassic reflects the decline in productivity of radiolarians and a long anoxic event in Panthalassa. Chert accumulation resumed in the Triassic and persisted in the Jurassic, and it was interrupted by the eruption of basalts of different nature. Formation of the Middle–Late Triassic oceanic intraplate basalts likely occurred on the thick and old oceanic lithosphere, and that of the Jurassic basalts on the thin and newly created lithosphere. In the Middle Jurassic, chert accumulation was replaced by accumulation of tuffaceous siltstone at a subduction zone along the Asian continental margin. The middle Bathonian–early Callovian age of this siltstone closely predates accretion of the Yakchi Formation. The materials of the upper layer of the oceanic plate that formed over 100 million years in different parts of the ocean and on the lithospheric fragments of different ages were accreted to the continental margin. The bulk of the accreted material consists of oceanic intraplate basalts, i.e., fragments of volcanic edifices on the oceanic floor. Accretion of this western part of the Khor-Tormasu subzone occurred concurrently with accretion of the southeastern part of the Samarka subzone in Primorye, which clarifies the paleotectonic zonation of the Central Sikhote-Alin accretionary prism. The cataclastic gabbroids and granitoids, as well as the clastic rocks with shallow-marine fossils in the Khor-Tormasu subzone, are considered as possible analogues of the Okrainka-Sergeevka allochthonous complex.
|
Autonomous Interface Agents
|
Two branches of the trend towards “agents” that are gaining currency are interface agents, software that actively assists a user in operating an interactive interface, and autonomous agents, software that takes action without user intervention and operates concurrently, either while the user is idle or taking other actions. These two branches are related, but not identical, and are often lumped together under the single term “agent”. Much agent work can be classified as either being an interface agent, but not autonomous, or as an autonomous agent, but not operating directly in the interface. We show why it is important to have agents that are both interface agents and autonomous agents. We explore some design principles for such agents, and illustrate these principles with a description of Letizia, an autonomous interface agent that makes real-time suggestions for Web pages that a user might be interested in browsing.
|
Preclinical and clinical evaluation of intraductally administered agents in early breast cancer.
|
Most breast cancers originate in the epithelial cells lining the breast ducts. Intraductal administration of cancer therapeutics would lead to high drug exposure to ductal cells and eliminate preinvasive neoplasms while limiting systemic exposure. We performed preclinical studies in N-methyl-N'-nitrosourea-treated rats to compare the effects of 5-fluorouracil, carboplatin, nanoparticle albumin-bound paclitaxel, and methotrexate to the previously reported efficacy of pegylated liposomal doxorubicin (PLD) on treatment of early and established mammary tumors. Protection from tumor growth was observed with all five agents, with extensive epithelial destruction present only in PLD-treated rats. Concurrently, we initiated a clinical trial to establish the feasibility, safety, and maximum tolerated dose of intraductal PLD. In each eligible woman awaiting mastectomy, we visualized one ductal system and administered dextrose or PLD using a dose-escalation schema (2 to 10 mg). Intraductal administration was successful in 15 of 17 women with no serious adverse events. Our preclinical studies suggest that several agents are candidates for intraductal therapy. Our clinical trial supports the feasibility of intraductal administration of agents in the outpatient setting. If successful, administration of agents directly into the ductal system may allow for "breast-sparing mastectomy" in select women.
|
FACTORS ASSOCIATED WITH TOBACCO SMOKING AMONG MALE ADOLESCENTS: THE ROLE OF PSYCHOLOGIC, BEHAVIORAL, AND DEMOGRAPHIC RISK FACTORS
|
Background: Tobacco smoking among adolescents has been a concern for researchers and health organizations in recent years. However, predisposing factors to smoking initiation among Iranian adolescents are not well recognized. Objectives: This study aimed to determine the prevalence of tobacco smoking and to investigate the role of psychologic, behavioral, and demographic risk factors in adolescents' smoking status. Patients and Methods: This cross-sectional study was performed on 810 male adolescents recruited through a cluster random sampling method in Hamadan in 2014. The participants received a self-administered questionnaire that contained questions about tobacco smoking behavior and demographic, behavioral, and psychologic variables. Data were analyzed with SPSS 16 using the independent-samples t test, Chi square, and logistic regression. Results: A total of 139 persons (17.1%) were tobacco smokers, and the mean (SD) age at smoking initiation was 13.7 (2.2) years. Sense of need, decreasing stress, having a smoker friend, and inability to refuse a smoking suggestion were common reasons associated with tobacco smoking (P < 0.05). In addition, statistically significant differences between tobacco smokers and nonsmokers were found in age, grade, and mother's job and education (P < 0.05). In comparison to nonsmokers, tobacco smokers evaluated a typical smoker as less immature, more popular, more attractive, more self-confident, more independent, and less selfish (P < 0.05). Conclusions: The results showed the effect of several psychosocial, behavioral, and demographic risk factors on adolescents' smoking status. Thus, the design and implementation of interventions based on the results of the present study may be effective in preventing tobacco smoking among adolescents.
|
Measuring user influence on Twitter: A survey
|
Centrality is one of the most studied concepts in social network analysis. There is a huge literature regarding centrality measures, as ways to identify the most relevant users in a social network. The challenge is to find measures that can be computed efficiently and that can classify users according to relevance criteria as close as possible to reality. We address this problem in the context of the Twitter network, an online social networking service with millions of users and an impressive flow of messages that are published and spread daily by interactions between users. Twitter has different types of users, but the greatest utility lies in finding the most influential ones. The purpose of this article is to collect and classify the different Twitter influence measures that exist so far in the literature. These measures are very diverse. Some are based on simple metrics provided by the Twitter API, while others are based on complex mathematical models. Several measures are based on the PageRank algorithm, traditionally used to rank websites on the Internet. Some others consider the timeline of publication, others the content of the messages, some are focused on specific topics, and others try to make predictions. We consider all these aspects, and some additional ones. Furthermore, we include measures of activity and popularity, the traditional mechanisms to correlate measures, and some important aspects of computational complexity for this particular context.
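Since several surveyed measures build on PageRank, a minimal power-iteration PageRank over a follower graph is worth sketching; this is the generic algorithm, not any specific Twitter influence measure from the survey.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Power-iteration PageRank on a follower adjacency matrix.

    adj[i, j] = 1 if user i follows (links to) user j; the rank mass of
    user i is split among the users it follows, as in web PageRank.
    """
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Dangling users (no outgoing edges) spread their mass uniformly.
    M = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * M.T.dot(r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```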
|
Intelligent Phishing Website Detection and Prevention System by Using Link Guard Algorithm
|
Phishing is a new type of network attack in which the attacker creates a replica of an existing Web page to fool users (e.g., by using specially designed e-mails or instant messages) into submitting personal, financial, or password data to what they think is their service provider's Web site. In this project, we propose a new end-host based anti-phishing algorithm, which we call LinkGuard, that utilizes the generic characteristics of the hyperlinks in phishing attacks. These characteristics are derived by analyzing the phishing data archive provided by the Anti-Phishing Working Group (APWG). Because it is based on the generic characteristics of phishing attacks, LinkGuard can detect not only known but also unknown phishing attacks. We have implemented LinkGuard in Windows XP. Our experiments verified that LinkGuard is effective in detecting and preventing both known and unknown phishing attacks with minimal false negatives: LinkGuard successfully detected 195 out of the 203 phishing attacks. Our experiments also showed that LinkGuard is lightweight and can detect and prevent phishing attacks in real time.
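LinkGuard's actual rule set is derived from the APWG archive and is richer than can be shown here; the sketch below illustrates just one generic hyperlink characteristic of the kind such rules exploit, namely a mismatch between the domain a link displays and the one it actually targets (all names illustrative).

```python
import re
from urllib.parse import urlparse

def suspicious_hyperlink(href, anchor_text):
    """Flag two generic phishing patterns: an href hiding behind a raw
    dotted-decimal IP, or visible link text naming one DNS domain while
    the actual href points somewhere else."""
    actual = urlparse(href).hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", actual):
        return True  # raw IP instead of a DNS name
    visible = re.search(r"([a-z0-9-]+\.)+[a-z]{2,}", anchor_text.lower())
    if visible and not actual.endswith(visible.group(0)):
        return True  # displayed domain differs from actual destination
    return False

# e.g. suspicious_hyperlink("http://203.0.113.9/login", "mybank.com") -> True
```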
|
Understanding Internet Banking Adoption and Use Behavior: A Hong Kong Perspective
|
Hong Kong is an international financial center well known for its efficiency and its ability to adapt and keep up with the times. Recently, however, the Hong Kong banking industry has been losing competitive advantages in some areas, with the adoption of Internet Banking being one of them. Hong Kong banks have been slower than some other international banks in joining the e-commerce evolution, which first emerged in the United States in the mid-90s. Financial institutions in the U.S. have introduced and promoted online banking to provide better customer services. Many property and stock investment firms in Hong Kong have also jumped on the bandwagon and adopted the Internet as a channel for providing better and more efficient services.
|
Artificial Liver Support System Improves Short- and Long-Term Outcomes of Patients With HBV-Associated Acute-on-Chronic Liver Failure
|
For patients with acute-on-chronic liver failure (ACLF), artificial liver support system (ALSS) may help prolong lifespan and function as a bridge to liver transplantation (LT), but data on its long-term benefit are lacking. We conducted this prospective, controlled study to determine the efficacy of ALSS and the predictors of mortality in patients with hepatitis B virus (HBV)-associated ACLF. From January 2003 to December 2007, a total of 234 patients with HBV-associated ACLF not eligible for LT were enrolled in our study. They were allocated to receive either plasma exchange-centered ALSS plus standard medical therapy (SMT) (ALSS group, n=104) or SMT alone (control group, n=130). All the patients were followed up for at least 5 years, or until death. At 90 days, the survival rate of the ALSS group was higher than that of the control group (62/104 [60%] vs 61/130 [47%], respectively; P<0.05). Median survival was 879 days in the ALSS group (43% survival at 5 years) and 649 days in the control group (31% survival at 5 years, log-rank P<0.05). ALSS was found to be associated with favorable outcome of these patients by both univariate and multivariate analysis. Multivariate Cox regression analysis also revealed that lower serum sodium levels, higher grades of encephalopathy, presence of cirrhosis, hepatorenal syndrome, and higher model for end-stage liver disease scores were independent predictors for both 90-day and 5-year mortality due to ACLF. Our findings suggest that ALSS is safe and may improve the short- and long-term prognosis of patients with HBV-associated ACLF.
|
Hierarchical models of object recognition in cortex
|
Visual processing in cortex is classically modeled as a hierarchy of increasingly sophisticated representations, naturally extending the model of simple to complex cells of Hubel and Wiesel. Surprisingly, little quantitative modeling has been done to explore the biological feasibility of this class of models to explain aspects of higher-level visual processing such as object recognition. We describe a new hierarchical model consistent with physiological data from inferotemporal cortex that accounts for this complex visual task and makes testable predictions. The model is based on a MAX-like operation applied to inputs to certain cortical neurons that may have a general role in cortical function.
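The MAX-like operation at the heart of the model can be sketched in a few lines: a pooling neuron responds with the maximum of its afferents, trading selectivity for invariance to position and scale. This is a minimal illustration of the pooling step only, not the full hierarchical model.

```python
import numpy as np

def max_pool(responses, size=2):
    """MAX-like pooling over local neighborhoods: each output unit takes
    the maximum of its afferent units' activations, the model's proposed
    cortical operation for building position and scale invariance.

    responses: (H, W) map of afferent activations for one feature.
    """
    h, w = responses.shape
    h2, w2 = h // size, w // size
    patches = responses[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    return patches.max(axis=(1, 3))
```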
|
ICT and Tourism : Challenges and Opportunities
|
The revolution in ICTs has profound implications for economic and social development. It has pervaded every aspect of human life, whether health, education, economics, governance or entertainment. Dissemination, propagation and accessibility of these technologies are viewed as integral to a country's development strategy. The most important benefit associated with access to the new technologies is the increase in the supply of information: information is shared and disseminated to a larger audience. Secondly, they reduce the cost of production: knowledge is produced, transmitted, accessed and shared at minimum cost. With the reduction in transactional costs, there is also a reduction in the degree of inefficiency and uncertainty. Thirdly, they have overcome the constraints of distance and geography. ICTs have cut across the geographic boundaries of nation states. Buyers and sellers are able to share information, specifications, production processes etc. across national borders. This enables all to know the comparative advantage in the market economy, and it leads to larger markets and increased access to global supply chains. Fourthly, they have led to more transparency. Networking and information sharing definitely lead to demands for greater openness and transparency. Whether you want to know the status of the central bank's foreign exchange agency or the cost price of potatoes in the local market, ICTs empower the individual with transparent access to information. Efforts are under way to integrate ICTs into all sectors and developmental activities. Tourism is one such potential area. Tourism and the economy are closely interconnected, and a discussion of tourism involves a discussion of economic enterprise as well.
|
Collective Response of Human Populations to Large-Scale Emergencies
|
Despite recent advances in uncovering the quantitative features of stationary human activity patterns, many applications, from pandemic prediction to emergency response, require an understanding of how these patterns change when the population encounters unfamiliar conditions. To explore societal response to external perturbations we identified real-time changes in communication and mobility patterns in the vicinity of eight emergencies, such as bomb attacks and earthquakes, comparing these with eight non-emergencies, like concerts and sporting events. We find that communication spikes accompanying emergencies are both spatially and temporally localized, but information about emergencies spreads globally, resulting in communication avalanches that engage in a significant manner the social network of eyewitnesses. These results offer a quantitative view of behavioral changes in human activity under extreme conditions, with potential long-term impact on emergency detection and response.
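As a toy illustration of how communication spikes can be localized in time (the paper's methodology on real call records is considerably more elaborate), a rolling z-score on per-minute call volume suffices to flag anomalies.

```python
import numpy as np

def communication_spikes(calls_per_min, window=60, z_thresh=4.0):
    """Flag minutes whose call volume deviates sharply from the local
    baseline; emergencies would show up as tight clusters of such
    flags, non-emergencies as broader, milder bumps (illustrative)."""
    calls = np.asarray(calls_per_min, dtype=float)
    spikes = []
    for t in range(window, len(calls)):
        baseline = calls[t - window:t]
        z = (calls[t] - baseline.mean()) / (baseline.std() + 1e-9)
        if z > z_thresh:
            spikes.append((t, z))
    return spikes
```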
|
High Level Exploration of Quantum-Dot Cellular Automata (QCA)
|
In this work, we present a high-level evaluation of an emerging nanotechnology to determine a set of technology requirements. The technology in question is Quantum-Dot Cellular Automata (QCA). As a vehicle, we present two different QCA circuits and evaluate the technology requirements based on the specifications of these circuits. These circuits are a simple 4-bit arithmetic logic unit (ALU) and a 4×4 memory, which are building blocks of more complex systems such as a computer central processing unit (CPU).
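For readers unfamiliar with QCA logic: its native primitives are the three-input majority gate and the inverter, from which AND and OR follow by pinning one input. The bit-slice a QCA ALU reduces to can be sanity-checked in ordinary code, as in this sketch of the standard majority-logic full adder (the physical cell layout, of course, is the hard part and is not modeled here).

```python
def M(a, b, c):
    """QCA's native primitive: the three-input majority gate."""
    return (a & b) | (b & c) | (a & c)

def NOT(a):
    return a ^ 1

AND = lambda a, b: M(a, b, 0)  # majority with one input pinned to 0
OR = lambda a, b: M(a, b, 1)   # majority with one input pinned to 1

def full_adder(a, b, cin):
    """Standard majority-logic full adder: 3 majority gates, 2 inverters."""
    cout = M(a, b, cin)
    s = M(NOT(cout), cin, M(a, b, NOT(cin)))
    return s, cout

# Exhaustive sanity check against ordinary binary addition.
assert all(divmod(a + b + c, 2)[::-1] == full_adder(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```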
|
Multiband multistatic Passive Radar system for airspace surveillance: A step towards mature PCL implementations
|
Passive Radar systems present a novel approach to airspace surveillance. They use the illumination of targets by third-party transmitters, e.g. FM radio or TV broadcast stations, for air target detection and localisation. Due to the great number and wide frequency spacing of available transmitters, the resulting multistatic and multiband illumination of air targets can be used to reliably obtain a wide-area air picture and to improve detection and tracking performance, especially for low observable air targets. As they use no transmitter of their own, Passive Radar systems are hard to detect and hard to jam, and can potentially be implemented at low cost. For these reasons, interest in Passive Radar has grown significantly over the last years. However, most Passive Radar systems have been rather experimental set-ups tailored to a single frequency band or implemented as laboratory test devices. This paper describes the design, implementation and performance evaluation of a multi-band, multi-illuminator Passive Radar system. The result of this effort is a fully mobile FM/DAB/DVB Passive Radar system with cross-band data fusion capability. Multiple measurement campaigns with a great variety of third-party transmitters and arbitrary transmitter-target-receiver geometries have been conducted. In the paper, the design considerations and the resulting Passive Radar system structure are described, and the evaluation of various measurement campaigns with this system is summarized.
|
Effect of probiotic supplements in women with gestational diabetes mellitus on inflammation and oxidative stress biomarkers: a randomized clinical trial.
|
BACKGROUND AND OBJECTIVES
Very little is known about the use of probiotics among pregnant women with gestational diabetes mellitus (GDM) especially its effect on oxidative stress and inflammatory indices. The aim of present study was to measure the effect of a probiotic supplement capsule on inflammation and oxidative stress biomarkers in women with newly-diagnosed GDM.
METHODS AND STUDY DESIGN
Sixty-four pregnant women with GDM were enrolled in a double-blind, placebo-controlled randomized clinical trial in the spring and summer of 2014. They were randomly assigned to receive either a probiotic capsule containing four bacterial strains, Lactobacillus acidophilus LA-5, Bifidobacterium BB-12, Streptococcus thermophilus STY-31 and Lactobacillus delbrueckii bulgaricus LBY-27, or a placebo capsule for 8 consecutive weeks. Blood samples were taken pre- and post-treatment, and serum indices of inflammation and oxidative stress were assayed. The measured mean response scales were then analyzed using a mixed-effects model. All statistical analysis was performed using Statistical Package for Social Sciences (SPSS) software (version 16).
RESULTS
Serum high-sensitivity C-reactive protein and tumor necrosis factor-α levels improved in the probiotic group to a statistically significant level over the placebo group. Serum interleukin-6 levels decreased in both groups after the intervention; however, neither within-group nor between-group differences in serum interleukin-6 levels were statistically significant. Malondialdehyde, glutathione reductase and erythrocyte glutathione peroxidase levels improved significantly with the use of probiotics when compared with the placebo.
CONCLUSIONS
The probiotic supplement containing L. acidophilus LA-5, Bifidobacterium BB-12, S. thermophilus STY-31 and L. delbrueckii bulgaricus LBY-27 appears to improve several inflammation and oxidative stress biomarkers in women with GDM.
|
Health Care Information Systems: Architectural Models and Governance
|
The adoption of ICT within health care has been characterized by a series of phases evolving since the 1960s (Khoumbati et al., 2009). Health informatics adoption started mainly from financial systems, providing support to the organization’s billing, payroll, accounting and reporting systems. Clinical departments launched a major initiative during the 1970s that supported such internal activities as radiology, laboratory and pharmacy (Wickramasinghe & Geisler, 2008), where machinery could support high-volume operations with the implementation of standardized procedures. Financial systems once again became prominent in the 1980s, with major investments in cost accounting and materials management systems (Grimson, 2001). During the 1990s, attention turned towards enterprise-wide clinical systems, including clinical data repositories and visions of a fully computerized Electronic Medical Record (EMR) (Bates, 2005).
|
Randomized, controlled pharmacokinetic and pharmacodynamic evaluation of albinterferon in patients with chronic hepatitis B infection.
|
BACKGROUND AND AIMS
Albinterferon is a fusion of albumin and interferon-α2b developed to improve the pharmacokinetics, convenience, and potential efficacy of interferon-α for the treatment of chronic hepatitis infections.
METHODS
This open-label, randomized, active-controlled, multicenter study investigated the safety and efficacy of albinterferon in patients with chronic hepatitis B virus (HBV) infection who were e-antigen (HBeAg) positive. One hundred and forty-one patients received one of four albinterferon doses/regimens or pegylated-interferon-α2a. Primary efficacy outcomes were changes in serum HBeAg and antibody, HBV-DNA, and alanine aminotransferase. Principal safety outcomes were changes in laboratory values, pulmonary function, and adverse events.
RESULTS
The study was prematurely terminated as phase III trials in hepatitis C infection indicated noninferior efficacy but inferior safety compared with pegylated-interferon-α2a. Here, all treatment groups had a significant reduction in HBV-DNA from baseline. Reductions in HBV-DNA were not significantly different between groups, except for the 1200 μg every 4 weeks albinterferon dose, which was inferior to pegylated-interferon-α2a. The serum alanine aminotransferase levels decreased in all arms. The per-patient incidence of adverse events was not significantly different for albinterferon (96.4-100%) and pegylated-interferon-α2a (93.1%). Total adverse events, however, were higher for albinterferon and correlated with dose. Decreased lung function was found in all arms (∼93% of patients), and was more common in some albinterferon groups.
CONCLUSIONS
Albinterferon doses with similar anti-HBV efficacy to pegylated-interferon-α2a had higher rates of certain adverse events, particularly changes in lung diffusion capacity (http://www.clinicaltrials.gov number NCT00964665).
|
Revisiting the Bullwhip Effect under Economic Uncertainty
|
For the past decade, economic uncertainties were the only certainty. At the outbreak of the global financial crisis, sales managers in the air conditioning industry noticed two strange distortions in the supply chain. At the downstream end, factories were increasing their inventories of specific raw materials, mainly copper pipes and power cables, whereas at the upstream end the distribution channels were logically placing lower orders for finished products than actual market demand. This paper aims at analyzing some of the situational factors that lead to these two types of distortions. In doing so, it also proposes that the model of the Bullwhip Effect (BWE) will have an inverse shape under economic uncertainty compared to the conventional shape reported in the traditional supply chain literature. Despite the fact that a number of BWE distortions have been reported by several scholars, the phenomenon has never been studied under the special situational factors of a period of economic uncertainty.
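For reference, the conventional BWE shape that the paper proposes to invert can be reproduced with a toy simulation: an order-up-to policy with a moving-average demand forecast amplifies order variance at every upstream stage (cf. Chen et al., 2000). Parameters and the demand process below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def orders_from_demand(demand, L=2, p=5):
    """Order-up-to policy with a p-period moving-average forecast and
    replenishment lead time L: O_t = D_t + L * (f_t - f_{t-1}), which
    amplifies order variance by roughly 1 + 2L/p + 2L^2/p^2."""
    f = np.convolve(demand, np.ones(p) / p, "valid")
    return demand[p:] + L * np.diff(f)

demand = 100 + rng.normal(0, 10, 5000)
signal = demand
for stage in ("retailer", "distributor", "factory"):
    signal = orders_from_demand(signal)
    print(stage, round(signal.var() / demand.var(), 2))  # ratios grow upstream
```

The paper's contention is that under economic uncertainty the observed distortions run opposite to this monotone upstream amplification.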
|
Prevalence of low back pain and its risk factors among secondary school teachers at Bentong, Pahang
|
Title: Prevalence of low back pain and its risk factors among secondary school teachers at Bentong, Pahang. Objective: The purpose of this study is to determine the prevalence of low back pain among secondary school teachers at Bentong, Pahang, and to investigate the associated risk factors. Methodology: A self-administered questionnaire was distributed to 260 subjects through random sampling in 5 secondary schools. Seven female teachers were excluded because they did not meet the inclusion criteria, so at the end of the study only 253 subjects were included. Result: We found that the prevalence of low back pain is high among secondary school teachers. Female teachers reported a significantly higher prevalence of low back pain than male teachers, and the middle-aged group of teachers reported a higher prevalence of pain than the younger and older age groups. The highest risk factor for low back pain among teachers is prolonged standing, followed by prolonged sitting and working with a computer. Conclusion: We found a high prevalence of low back pain among school teachers, with female and middle-aged teachers most affected and exposed to the highest risk factors. There is a need to develop specific strategies on ergonomics education, regular physical exercise and occupational stress in schools to reduce the occurrence of work-related musculoskeletal disorders (WMSDs) of the low back among teachers.
|
Antecedents of Turnover Intentions : A Literature Review
|
Employees are the most important asset of an organization, and retaining the workforce is a major challenge because considerable cost is incurred on employees, directly or indirectly. In order to have a competitive advantage over other organizations, the focus has to be on the employees, as they are ultimately the face and the building blocks of the organization. Their retention is thus a major area of concern, and attempts have been made to reduce organizational turnover. This paper therefore attempts to review the various antecedents of turnover which affect the turnover intentions of employees.
|
Characterization of biological diversity through analysis of discrete cranial traits.
|
In the present study, the frequency distributions of 20 discrete cranial traits in 70 major human populations from around the world were analyzed. The principal-coordinate and neighbor-joining analyses of Smith's mean measure of divergence (MMD), based on trait frequencies, indicate that 1). the clustering pattern is similar to those based on classic genetic markers, DNA polymorphisms, and craniometrics; 2). significant interregional separation and intraregional diversity are present in Subsaharan Africans; 3). clinal relationships exist among regional groups; 4). intraregional discontinuity exists in some populations inhabiting peripheral or isolated areas. For example, the Ainu are the most distinct outliers of the East Asian populations. These patterns suggest that founder effects, genetic drift, isolation, and population structure are the primary causes of regional variation in discrete cranial traits. Our results are compatible with a single origin for modern humans as well as the multiregional model, similar to the results of Relethford and Harpending ([1994] Am. J. Phys. Anthropol. 95:249-270). The results presented here provide additional measures of the morphological variation and diversification of modern human populations.
|
Albuterol Overuse: A Marker of Psychological Distress?
|
BACKGROUND
Albuterol overuse, 3 or more canisters per year, is associated with poor asthma control and frequent exacerbations.
OBJECTIVE
To describe albuterol use on symptom and symptom-free days and identify predictors of albuterol overuse and controller medication underuse.
METHODS
Secondary analyses of data from adults with mild asthma from the Trial of Asthma Patient Education were carried out. Based on albuterol use of 80% or more on symptom days and less than 20% on symptom-free days, participants were characterized as expected users, overusers, or underusers of albuterol. Good controller medication adherence was defined as 80% or more of prescribed doses. Data included demographic characteristics, diary data, spirometry, and scores from standardized questionnaires. Bivariate associations were examined between categorization of medication use and measured characteristics.
RESULTS
Of the 416 participants, 212 (51%) were expected users, 114 (27%) were overusers, and 90 (22%) were underusers of albuterol. No differences were observed among the user groups by demographic characteristics or lung function. Expected users demonstrated the highest asthma-related knowledge, attitudes, and efficacy. Overusers reported the greatest symptom burden, worst asthma control, and highest frequency of symptom days. Overusers also had the highest burden of depression symptoms. More frequent symptom days accounted for 15% of overuse, greater use on symptom days accounted for 31%, and greater use on symptom-free days accounted for 54% of overuse. Mean controller adherence was high across all groups, and there were no differences between the groups.
CONCLUSIONS
Although overusers experienced more frequent symptom days and used more albuterol on those days, most overuse was attributable to unexpected use on symptom-free days. High levels of comorbid depression were observed, particularly among overusers and among those nonadherent to controller medication.
|
Visual fields in hornbills: precision-grasping and sunshades
|
Retinal visual fields were determined in Southern Ground Hornbills Bucorvus leadbeateri and Southern Yellow-billed Hornbills Tockus leucomelas (Coraciiformes, Bucerotidae) using an ophthalmoscopic reflex technique. In both species the binocular field is relatively long and narrow with a maximum width of 30° occurring 40° above the bill. The bill tip projects into the lower half of the binocular field. This frontal visual field topography exhibits a number of key features that are also found in other terrestrial birds. This supports the hypothesis that avian visual fields are of three principal types that are correlated with the degree to which vision is employed when taking food items, rather than with phylogeny. However, unlike other species studied to date, in both hornbill species the bill intrudes into the binocular field. This intrusion of the bill restricts the width of the binocular field but allows the birds to view their own bill tips. It is suggested that this is associated with the precision-grasping feeding technique of hornbills. This involves forceps-like grasping and manipulation of items in the tips of the large decurved bill. The two hornbill species differ in the extent of the blind area perpendicularly above the head. Interspecific comparison shows that eye size and the width of the blind area above the head are significantly correlated. The limit of the upper visual field in hornbills is viewed through the long lash-like feathers of the upper lids and these appear to be used as a sunshade mechanism. In Ground Hornbills eye movements are non-conjugate and have sufficient amplitude (30–40°) to abolish the frontal binocular field and to produce markedly asymmetric visual field configurations.
|
Distinct magnetic field dependence of Néel skyrmion sizes in ultrathin nanodots
|
We investigate the dependence of the Néel skyrmion size and stability on perpendicular magnetic field in ultrathin circular magnetic dots with out-of-plane anisotropy and interfacial Dzyaloshinskii-Moriya exchange interaction. Our results show the existence of two distinct dependencies of the skyrmion radius on the applied field and dot size. In the case of skyrmions stable at zero field, their radius strongly increases with the field applied parallel to the skyrmion core until the skyrmion reaches the metastability region, where this dependence slows down. More common metastable skyrmions demonstrate a weaker increase of their size as a function of the field until some critical field value, at which these skyrmions drastically increase in size, showing a hysteretic behavior with coexistence of small- and large-radius skyrmions and small energy barriers between them. The first case is also characterized by a strong dependence of the skyrmion radius on the dot diameter, while in the second case this dependence is very weak.
|
Validity of clinical outcome measures to evaluate ankle range of motion during the weight-bearing lunge test.
|
OBJECTIVES
To determine the concurrent validity of standard clinical outcome measures compared with a laboratory outcome measure while performing the weight-bearing lunge test (WBLT).
DESIGN
Cross-sectional study.
METHODS
Fifty participants performed the WBLT to determine dorsiflexion ROM using four different measurement techniques: dorsiflexion angle with a digital inclinometer 15 cm distal to the tibial tuberosity (°), dorsiflexion angle with the inclinometer at the tibial tuberosity (°), maximum lunge distance (cm), and dorsiflexion angle using a 2D motion capture system (°). Outcome measures were recorded concurrently during each trial. To establish concurrent validity, Pearson product-moment correlation coefficients (r) were computed, comparing each dependent variable to the 2D motion capture analysis (identified as the reference standard). A higher correlation indicates stronger concurrent validity.
RESULTS
There was a high correlation between each measurement technique and the reference standard. Specifically, the correlation between the inclinometer placement 15 cm below the tibial tuberosity (44.9°±5.5°) and the motion capture angle (27.0°±6.0°) was r=0.76 (p=0.001), between the inclinometer placement at the tibial tuberosity (39.0°±4.6°) and the motion capture angle was r=0.71 (p=0.001), and between the distance-from-the-wall clinical measure (10.3±3.0 cm) and the motion capture angle was r=0.74 (p=0.001).
CONCLUSIONS
This study determined that the clinical measures used during the WBLT have a high correlation with the reference standard for assessing dorsiflexion range of motion. Therefore, both maximum lunge distance and inclinometer angles are valid assessments during the weight-bearing lunge test.
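Concurrent validity here reduces to a Pearson product-moment correlation between each clinical measure and the reference standard. With synthetic stand-in data (the study's raw measurements are not reproduced), the computation is a one-liner:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative stand-ins for 50 paired WBLT measurements (degrees).
mocap = rng.normal(27.0, 6.0, 50)                   # reference standard
inclinometer = 18 + mocap + rng.normal(0, 5.0, 50)  # clinical measure

r, p = stats.pearsonr(inclinometer, mocap)
print(f"r = {r:.2f}, p = {p:.3f}")  # r lands near the reported 0.7-0.8 range
```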
|
Roles of the fast-releasing and the slowly releasing vesicles in synaptic transmission at the calyx of Held.
|
In the calyx of Held, fast and slow components of neurotransmitter release can be distinguished during a step depolarization. The two components show different sensitivity to molecular/pharmacological manipulations. Here, their roles during a high-frequency train of action potential (AP)-like stimuli were examined by using both deconvolution of EPSCs and presynaptic capacitance measurements. During a 100 Hz train of AP-like stimuli, synchronous release showed a pronounced depression within the 20 stimuli. Asynchronous release persisted during the train, was variable in its amount, and was more prominent during a 300 Hz train. We have shown previously that slowly releasing vesicles were recruited faster than fast-releasing vesicles after depletion. By further slowing recovery of the fast-releasing vesicles by inhibiting calmodulin-dependent processes (Sakaba and Neher, 2001b), the slowly releasing vesicles were isolated during recovery from vesicle depletion. When a high-frequency train was applied, the isolated slowly releasing vesicles were released predominantly asynchronously. In contrast, synchronous release was mediated mainly by the fast-releasing vesicles. The results suggest that fast-releasing vesicles contribute mainly to synchronous release and that depletion of fast-releasing vesicles shape the synaptic depression of the synchronous phase of EPSCs, whereas slowly releasing vesicles are released mainly asynchronously during high-frequency stimulation. The latter is less subject to depression presumably because of a rapid vesicular recruitment process, which is a characteristic of this component.
|
Object-oriented nonlinear finite element programming: a primer
|
This article describes an introductory object-oriented finite element program for static and dynamic nonlinear applications. This work can be considered an extension of the original FEM_Object environment dealing with linear elasticity [1] and nonlinearity [2]. Mainly the static aspects are discussed in this paper. Interested readers will find a detailed discussion of the object-oriented approach applied to finite element programming in [15-18] and also in [7-8] and references therein. Our ambition, in this paper, is limited to a presentation of an introductory object-oriented finite element package for nonlinear analysis. Our goal is to make a starting package available to newcomers to the object-oriented approach and to provide an answer to the large number of demands for such a program received in recent times. In the first part of the paper, a brief recall of the basics of finite element modeling applied to continuum mechanics is given. Von Mises plasticity including isotropic and kinematic hardening, which is used as the model problem, is described. This first part also presents an overview of the main features of the object-oriented approach. In the second part of this paper, classes and associated tasks forming the kernel of the code are described in detail. A hierarchy of classes is proposed and discussed; it provides an immediate overview of the program's capabilities. Finally, interactions between classes are explained and numerical examples illustrate the approach.
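To give a flavor of the kernel classes such a program is organized around, here is a sketch in Python rather than the paper's original implementation language, with a 1D von Mises return mapping standing in for the full 3D model; class and method names are illustrative.

```python
import numpy as np

class VonMisesMaterial:
    """1D J2 plasticity with linear isotropic hardening (sketch only;
    the paper's model also includes kinematic hardening in 3D)."""
    def __init__(self, E, sigma_y, H):
        self.E, self.sigma_y, self.H = E, sigma_y, H

    def stress(self, strain, ep):
        """Elastic predictor / plastic corrector (radial return)."""
        trial = self.E * (strain - ep)
        f = abs(trial) - (self.sigma_y + self.H * abs(ep))
        if f <= 0:
            return trial, ep                   # purely elastic step
        dgamma = f / (self.E + self.H)         # plastic multiplier
        sign = np.sign(trial)
        return trial - sign * self.E * dgamma, ep + sign * dgamma

class BarElement:
    """Two-node bar: the element forms its own internal force and
    stiffness; a Domain class would assemble these contributions and
    drive the global Newton-Raphson iteration."""
    def __init__(self, length, area, material):
        self.L, self.A, self.mat = length, area, material
        self.ep = 0.0                          # element plastic state

    def internal_force_and_stiffness(self, elongation):
        sigma, self.ep = self.mat.stress(elongation / self.L, self.ep)
        # Elastic stiffness (modified Newton); a consistent tangent
        # would use E*H/(E+H) on plastic steps.
        return sigma * self.A, self.mat.E * self.A / self.L
```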
|
30 years of media coverage on high drug prices in the US – a never-ending story or a time for change?
|
Background US drug prices are among the highest worldwide as US policy makers have historically been reluctant to embrace price regulations, instead relying on market forces to set prices. However, the introduction of a number of breakthrough, highly effective and high-cost specialty medicines over the past years has stoked the fire of the long-running drug price debate in the USA. The prices of those specialty medicines – more than $100,000 per treatment course – have resulted in widespread outcry among patients, providers, insurers, and members of the Congress and the Senate. We aimed to analyze whether the recent debate reflects a sign of change in US print media coverage of drug prices.
|
Improving School Leadership. Volume 2: Case Studies on System Leadership.
|
The job of school leaders has changed radically as countries transform their education systems to prepare young people for today’s rapid technological change, economic globalisation and increased migration. One new role they are being asked to play is to work beyond their school borders so that they can contribute not only to the success of their own school but to the system as a whole – so that every school is a good school.
|
Dual marching cubes
|
We present the definition and computational algorithms for a new class of surfaces which are dual to the isosurface produced by the widely used marching cubes (MC) algorithm. These new isosurfaces have the same separating properties as the MC surfaces, but they are composed of quad patches that tend to eliminate the poorly shaped triangles common in MC isosurfaces. Based upon the concept of this new dual operator, we describe a simple but effective iterative scheme for producing smooth separating surfaces for binary, enumerated volumes, which are often produced by segmentation algorithms. Both the dual surface algorithm and the iterative smoothing scheme are easily implemented.
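The vertex-placement half of the dual operator is simple enough to sketch: each grid cell crossed by the isosurface receives a single dual vertex at the centroid of its edge intersection points. The Python sketch below, assuming a scalar volume on a regular grid, covers only this step; the quad topology, which connects the dual vertices of the four cells sharing each crossed edge, is noted in a comment but omitted.

```python
import numpy as np
from itertools import product

# The 12 edges of a unit cell, as pairs of corner offsets differing
# in exactly one coordinate.
CORNERS = list(product((0, 1), repeat=3))
EDGES = [(a, b) for i, a in enumerate(CORNERS) for b in CORNERS[i + 1:]
         if sum(abs(np.array(a) - np.array(b))) == 1]

def dual_vertices(volume, iso):
    """One vertex per cell crossed by the isosurface, placed at the
    centroid of its edge intersection points (the 'dual' of the MC patch)."""
    verts = {}
    nx, ny, nz = np.array(volume.shape) - 1
    for cx, cy, cz in product(range(nx), range(ny), range(nz)):
        pts = []
        for a, b in EDGES:
            va = volume[cx + a[0], cy + a[1], cz + a[2]]
            vb = volume[cx + b[0], cy + b[1], cz + b[2]]
            if (va - iso) * (vb - iso) < 0:      # sign change: edge crossing
                t = (iso - va) / (vb - va)       # linear interpolation
                p = (np.array([cx, cy, cz], float) + np.array(a)
                     + t * (np.array(b) - np.array(a)))
                pts.append(p)
        if pts:
            verts[(cx, cy, cz)] = np.mean(pts, axis=0)
    # Quads would connect the dual vertices of the four cells sharing
    # each crossed edge; that adjacency step is omitted here.
    return verts
```

For example, `dual_vertices(np.random.rand(8, 8, 8), 0.5)` returns a dictionary mapping each crossed cell to its dual vertex position.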
|
Factors associated with the development of self-harm amongst a socio-economically deprived cohort of adolescents in Santiago, Chile
|
Studies carried out in the West indicate that the incidence of self-harm (SH) is particularly high amongst adolescents, but few studies have investigated its incidence and aetiology in low-income countries. The purpose of this study was to investigate risk factors associated with new-onset episodes of SH amongst Chilean adolescents from low socio-economic backgrounds. Prospective cohort study nested within a cluster randomised controlled trial. A 6-month follow-up of 2,042 adolescents, median age 14 years, from socio-economically deprived areas of Santiago, Chile. The lifetime prevalence of SH was 23%. The incidence rate of SH at 6 months was 14% amongst those reporting no SH at baseline. In multivariable analyses, risk factors for incident SH included depressive symptoms, suicidal thoughts, poor problem-solving skills and cannabis misuse. The prevalence and incidence of SH in this socio-economically deprived sample differed markedly by gender. Poor problem-solving skills, suicidal thoughts, and cannabis misuse were associated with onset of SH.
|
Artificial neural networks accurately predict mortality in patients with nonvariceal upper GI bleeding.
|
BACKGROUND
Risk stratification systems are needed that accurately identify high-risk patients with bleeding by using clinical predictors of mortality available before endoscopic examination. Computerized (artificial) neural networks (ANNs) are adaptive tools that may improve prognostication.
OBJECTIVE
To assess the capability of an ANN to predict mortality in patients with nonvariceal upper GI bleeding and compare the predictive performance of the ANN with that of the Rockall score.
DESIGN
Prospective, multicenter study.
SETTING
Academic and community hospitals.
PATIENTS
This study involved 2380 patients with nonvariceal upper GI bleeding.
INTERVENTION
Upper GI endoscopy.
MAIN OUTCOME MEASUREMENTS
The primary outcome variable was 30-day mortality, defined as any death occurring within 30 days of the index bleeding episode. Other outcome variables were recurrent bleeding and need for surgery.
RESULTS
We performed an analysis of certified outcomes of 2380 patients with nonvariceal upper GI bleeding. The Rockall score was compared with a supervised ANN (TWIST system, Semeion), adopting the same result validation protocol with random allocation of the sample into training and testing subsets and subsequent crossover. Overall, death occurred in 112 cases (4.70%). Of 68 pre-endoscopic input variables, 17 were selected and used by the ANN versus 16 included in the Rockall score. The sensitivity of the ANN-based model was 83.8% (76.7-90.8) versus 71.4% (62.8-80.0) for the Rockall score. Specificity was 97.5% (96.8-98.2) and 52.0% (49.8-54.2), respectively. Accuracy was 96.8% (96.0-97.5) versus 52.9% (50.8-55.0) (P<.001). The predictive performance of the ANN-based model for prediction of mortality was significantly superior to that of the complete Rockall score (area under the curve 0.95 [0.92-0.98] vs 0.67 [0.65-0.69]; P<.001).
LIMITATIONS
External validation on a subsequent independent population is needed; patients with variceal bleeding and obscure GI hemorrhage were excluded.
CONCLUSION
In patients with nonvariceal upper GI bleeding, ANNs are significantly superior to the Rockall score in predicting the risk of death.
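The TWIST system itself is proprietary, but the validation protocol described in the results, random allocation into training and testing subsets with a subsequent crossover, can be sketched with a generic feed-forward network. Everything below (the scikit-learn MLP, synthetic data with a ~4.7% event rate, 17 input features) is an illustrative assumption, not the study's pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2380, 17))              # 17 selected pre-endoscopic inputs
y = (rng.random(2380) < 0.047).astype(int)   # ~4.7% 30-day mortality (synthetic)

# Random allocation into two halves, then crossover: each half is used
# once for training and once for testing, and predictions are pooled.
idx = rng.permutation(len(y))
half_a, half_b = idx[:len(y) // 2], idx[len(y) // 2:]

scores = np.empty(len(y))
for train, test in [(half_a, half_b), (half_b, half_a)]:
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    net.fit(X[train], y[train])
    scores[test] = net.predict_proba(X[test])[:, 1]

# On synthetic noise this sits near 0.5; the point is the protocol shape.
print("pooled AUC:", roc_auc_score(y, scores))
```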
|
Upper auricular adhesion malformation: definition, classification, and treatment.
|
BACKGROUND
During treatment of upper auricular malformations, the author found that patients with cryptotia and patients with solitary helical and/or antihelical adhesion malformations showed the same anatomical finding of cartilage adhesion. The author defined them together as upper auricular adhesion malformations.
METHODS
Between March of 1992 and March of 2006, 194 upper auricular adhesion malformations were corrected in 137 patients. All of these cases were retrospectively studied and classified. Of these, 92 malformations in 68 recent patients were corrected with new surgical methods (these were followed up for more than 6 months).
RESULTS
The group of solitary helical and/or antihelical cartilage malformation patients was classified as group I and the cryptotia group as group II. These two groups were subdivided according to features of cartilage adhesion and classified into seven subgroups. Thirty-two malformations were classified as belonging to group I and 162 malformations to group II. There were 61 patients with bilateral upper auricular adhesion malformations. Nineteen patients (31 percent of the patients with bilateral malformations) showed malformations belonging to both groups I and II on both ears. On postoperative observation in patients corrected with new methods, it was noticed that the following unfavorable results had occurred in 18 upper auricular adhesion malformation cases (20 percent): venous congestion or partial skin necrosis of used flaps, "pinched antitragus," low-set upper auricle, hypertrophic scars, and baldness.
CONCLUSIONS
The definition and singling out of upper auricular adhesion malformation can lead to a better understanding of the groups of upper auricular malformations to which it belongs, can guide treatment decisions, and may possibly clarify the pathophysiology in the future.
|
Video Desnowing and Deraining Based on Matrix Decomposition
|
The existing snow/rain removal methods often fail for heavy snow/rain and dynamic scenes. One reason for the failure is the assumption that all snowflakes/rain streaks are sparse in snow/rain scenes. The other is that the existing methods often cannot differentiate moving objects from snowflakes/rain streaks. In this paper, we propose a model based on matrix decomposition for video desnowing and deraining to solve the problems mentioned above. We divide snowflakes/rain streaks into two categories: sparse ones and dense ones. With background fluctuations and optical flow information, the detection of moving objects and sparse snowflakes/rain streaks is formulated as a multi-label Markov random field (MRF). Dense snowflakes/rain streaks are assumed to obey a Gaussian distribution. The snowflakes/rain streaks, both sparse and dense, in scene backgrounds are removed by a low-rank representation of the backgrounds. Meanwhile, a group sparsity term in our model is designed to filter snow/rain pixels within the moving objects. Experimental results show that our proposed model performs better than the state-of-the-art methods for snow and rain removal.
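The low-rank background idea at the core of the model can be sketched independently of the MRF detection and group-sparsity terms: stacking vectorized frames as matrix columns, a low-rank component captures the near-static background while a soft-thresholded sparse residual collects snow/rain candidates. The alternating scheme below is a generic RPCA-style simplification in Python, not the paper's full optimization.

```python
import numpy as np

def soft(x, tau):
    """Element-wise soft-thresholding (the proximal operator of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_sparse(frames, rank=1, lam=0.05, iters=30):
    """Split a (T, H, W) video into a low-rank background and a sparse
    residual by alternating a truncated SVD with soft-thresholding."""
    T = frames.shape[0]
    D = frames.reshape(T, -1).T.astype(float)   # pixels x frames
    S = np.zeros_like(D)
    for _ in range(iters):
        # Low-rank step: best rank-r approximation of D - S.
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: soft-threshold the residual (snow/rain candidates).
        S = soft(D - L, lam)
    background = L.T.reshape(frames.shape)
    residual = S.T.reshape(frames.shape)
    return background, residual
```

In the paper's setting the residual would then be split further: sparse streaks via the MRF labeling, dense streaks via the Gaussian model, and snow/rain pixels inside moving objects via the group-sparsity term.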
|
Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos
|
Learning to predict scene depth from RGB inputs is a challenging task for both indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive and most ubiquitous sensor for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach which produces higher quality results, is able to model moving objects and is shown to transfer across data domains, e.g. from outdoor to indoor scenes. The main idea is to introduce geometric structure into the learning process by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular videos as input. Furthermore, an online refinement method is introduced to adapt learning on the fly to unknown domains. The proposed approach outperforms all state-of-the-art approaches, including those that handle motion, e.g. through learned flow. Our results are comparable in quality to those that used stereo as supervision and significantly improve depth prediction on scenes and datasets that contain substantial object motion. The approach is of practical relevance, as it allows transfer across environments: models trained on data collected for robot navigation in urban scenes can be transferred to indoor navigation settings. The code associated with this paper can be found at https://sites.google.com/
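The supervision signal in this family of methods is a photometric reprojection loss: predicted depth and ego-motion warp a neighbouring frame into the target view, and the warped image is compared to the target. The Python sketch below shows that loss for a pinhole camera with nearest-neighbour sampling; it omits the distinguishing pieces of this particular paper (per-object motion models and online refinement), and all names are illustrative.

```python
import numpy as np

def reproject(depth, K, T_tgt_to_src):
    """Map each target pixel into the source view using predicted depth
    and a predicted 4x4 relative pose. Returns source pixel coordinates."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)    # back-project
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src = K @ (T_tgt_to_src @ cam_h)[:3]                   # project into source
    return (src[:2] / src[2]).reshape(2, H, W)

def photometric_loss(tgt, src, depth, K, T):
    """L1 difference between the target frame and the source frame warped
    into the target view (nearest-neighbour sampling for brevity; trained
    systems use differentiable bilinear sampling)."""
    coords = reproject(depth, K, T)
    u = np.clip(np.round(coords[0]).astype(int), 0, src.shape[1] - 1)
    v = np.clip(np.round(coords[1]).astype(int), 0, src.shape[0] - 1)
    warped = src[v, u]
    return np.abs(tgt - warped).mean()
```

Minimizing this loss over a video jointly constrains the depth and ego-motion networks without any ground-truth depth sensor.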
|
Bridging the gap. The separate worlds of evidence-based medicine and patient-centered medicine.
|
Modern medical care is influenced by two paradigms: 'evidence-based medicine' and 'patient-centered medicine'. In the last decade, both paradigms rapidly gained in popularity and are now both supposed to affect the process of clinical decision making during the daily practice of physicians. However, careful analysis shows that they focus on different aspects of medical care and have, in fact, little in common. Evidence-based medicine is a rather young concept that entered the scientific literature in the early 1990s. It has a basically positivistic, biomedical perspective. Its focus is on offering clinicians the best available evidence about the most adequate treatment for their patients, considering medicine merely as a cognitive-rational enterprise. In this approach the uniqueness of patients, their individual needs and preferences, and their emotional status are easily neglected as relevant factors in decision making. Patient-centered medicine, although not a new phenomenon, has recently attracted renewed attention. It has a basically humanistic, biopsychosocial perspective, combining ethical values on 'the ideal physician' with psychotherapeutic theories on facilitating patients' disclosure of real worries and negotiation theories on decision making. It puts a strong focus on patient participation in clinical decision making by taking into account the patients' perspective and tuning medical care to the patients' needs and preferences. However, in this approach the ideological base is better developed than the evidence base. In modern medicine both paradigms are highly relevant, yet they seem to belong to different worlds. The challenge for the near future is to bring these separate worlds together. The aim of this paper is to give an impulse to this integration. Developments within both paradigms can benefit from an interchange of ideas and principles, from which medical care will eventually benefit. In this process a key role is foreseen for communication and communication research.
|
SIDES: a cooperative tabletop computer game for social skills development
|
This paper presents a design case study of SIDES: Shared Interfaces to Develop Effective Social Skills. SIDES is a tool designed to help adolescents with Asperger's Syndrome practice effective group work skills using a four-player cooperative computer game that runs on tabletop technology. We present the design process and evaluation of SIDES conducted over six months with a middle school social group therapy class. Our findings indicate that cooperative tabletop computer games are a motivating and supportive tool for facilitating effective group work among our target population and reveal several design lessons to inform the development of similar systems.
|
Hessling's Quantum Equivalence Principle and the Temperature of an Extremal Reissner-Nordström Black Hole
|
The Hessling improvement of the Haag, Narnhofer and Stein principle is analysed in the case of a massless scalar field propagating outside of an extremal R-N black hole. It is found that this sort of "Quantum (Einstein's) Equivalence Principle" selects only the R-N vacuum as a physically sensible state, i.e., it selects the temperature $T=0$ only.
|
RECONCEPTUALISING GAMIFICATION: PLAY AND PEDAGOGY
|
Gamification is a complex and controversial concept. It has been both embraced as a marketing and education revolution, and dismissed as a practice of exploitation. Contested within the debate around gamification has been the very concept of what a game is, what the core mechanics of games are, and whether gamification truly mobilises these core mechanics. This paper challenges the foundation of this debate by reconceptualising gamification not as a simple set of techniques and mechanics, but as a pedagogic heritage: an alternative framework for training and shaping participant behaviour that has the concepts of entertainment and engagement at its core. In doing so, it recontextualises current practices of gamification within a longer and deeper history and suggests potential pathways for more sophisticated gamification in the future.
|
Analysis of Anxious Word Usage on Online Health Forums
|
Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or for advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time, focusing on the use of anxiety-related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups yields positive effects on users by reducing the frequency of anxiety-related words used over time.
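The core measurement, the rate of anxiety-related words per user over time, is straightforward to sketch. The tiny lexicon and the (user, month, text) input format below are assumptions for illustration; the study's actual word list and grouping are not specified here.

```python
import re
from collections import defaultdict

# A tiny illustrative lexicon; the study's actual anxiety word list
# (e.g. a LIWC-style category) is an assumption here.
ANXIOUS = {"worried", "anxious", "nervous", "afraid", "scared", "panic"}

def anxious_rate(posts):
    """Fraction of tokens that are anxiety-related, per user per month.
    `posts` is an iterable of (user, month, text) tuples."""
    counts = defaultdict(lambda: [0, 0])          # (anxious, total) tokens
    for user, month, text in posts:
        tokens = re.findall(r"[a-z']+", text.lower())
        key = (user, month)
        counts[key][0] += sum(t in ANXIOUS for t in tokens)
        counts[key][1] += len(tokens)
    return {k: a / t for k, (a, t) in counts.items() if t}

# Tracking this rate across consecutive months per user gives the kind of
# longitudinal trend the study examines.
```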
|
Adaptive Speculative Processing of Out-of-Order Event Streams
|
Distributed event-based systems are used to detect meaningful events with low latency in high data-rate event streams that occur in surveillance, sports, finances, etc. However, both known approaches to dealing with the predominant out-of-order event arrival at the distributed detectors have their shortcomings: buffering approaches introduce latencies for event ordering, and stream revision approaches may result in system overloads due to unbounded retraction cascades.
This article presents an adaptive speculative processing technique for out-of-order event streams that enhances typical buffering approaches. In contrast to other stream revision approaches developed so far, our novel technique encapsulates the event detector, uses the buffering technique to delay events but also speculatively processes a portion of it, and adapts the degree of speculation at runtime to fit the available system resources so that detection latency becomes minimal.
Our technique outperforms known approaches on both synthetic data and real sensor data from a realtime locating system (RTLS) with several thousand out-of-order sensor events per second. Speculative buffering exploits system resources and reduces latency by 40% on average.
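A minimal sketch of the idea, in Python and under stated assumptions: a K-slack buffer releases events once they are provably in order, additionally releases a tunable fraction of still-unsafe events speculatively, retracts those that a late arrival invalidates, and adapts the speculation fraction to measured load. The class and method names and the adaptation rule are illustrative, not the article's implementation.

```python
import bisect

class SpeculativeBuffer:
    """K-slack buffer with adaptive speculation (illustrative sketch).
    Events are (timestamp, payload) tuples; payloads must be comparable
    for ties, or replace insort with a key-based insert."""
    def __init__(self, k_slack=100, spec_fraction=0.5):
        self.k, self.spec = k_slack, spec_fraction
        self.buf, self.speculated = [], []

    def on_event(self, ts, payload, emit, retract):
        # Retract any speculatively emitted event that this arrival precedes.
        for e in [e for e in self.speculated if e[0] > ts]:
            retract(e)
            self.speculated.remove(e)
            bisect.insort(self.buf, e)          # reprocess later, in order
        bisect.insort(self.buf, (ts, payload))
        # Safely release everything older than the K-slack horizon...
        while self.buf and self.buf[0][0] <= ts - self.k:
            emit(self.buf.pop(0))
        # ...confirm speculated events the horizon has passed...
        self.speculated = [e for e in self.speculated if e[0] > ts - self.k]
        # ...and speculatively release a tunable share of the rest.
        for _ in range(int(self.spec * len(self.buf))):
            e = self.buf.pop(0)
            emit(e)
            self.speculated.append(e)

    def adapt(self, cpu_load, target=0.8):
        """Shrink speculation under overload, grow it when resources are idle."""
        self.spec = max(0.0, min(1.0, self.spec + 0.1 * (target - cpu_load)))
```

Bounding the speculation fraction by available resources is what prevents the unbounded retraction cascades that plague pure stream-revision approaches.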
|
Interventions to improve teamwork and communications among healthcare staff.
|
BACKGROUND
Concern over the frequency of unintended harm to patients has focused attention on the importance of teamwork and communication in avoiding errors. This has led to experiments with teamwork training programmes for clinical staff, mostly based on aviation models. These are widely assumed to be effective in improving patient safety, but the extent to which this assumption is justified by evidence remains unclear.
METHODS
A systematic literature review on the effects of teamwork training for clinical staff was performed. Information was sought on outcomes including staff attitudes, teamwork skills, technical performance, efficiency and clinical outcomes.
RESULTS
Of 1036 relevant abstracts identified, 14 articles were analysed in detail: four randomized trials and ten non-randomized studies. Overall study quality was poor, with particular problems over blinding, subjective measures and Hawthorne effects. Few studies reported on every outcome category. Most reported improved staff attitudes, and six of eight reported significantly better teamwork after training. Five of eight studies reported improved technical performance, improved efficiency or reduced errors. Three studies reported evidence of clinical benefit, but this was modest or of borderline significance in each case. Studies with a stronger intervention were more likely to report benefits than those providing less training. None of the randomized trials found evidence of technical or clinical benefit.
CONCLUSION
The evidence for technical or clinical benefit from teamwork training in medicine is weak. There is some evidence of benefit from studies with more intensive training programmes, but better quality research and cost-benefit analysis are needed.
|
Ground influence on the input impedance of transient dipole and bow-tie antennas
|
In this paper, the influence of a lossy ground on the input impedance of dipole and bow-tie antennas excited by a short pulse is investigated. It is shown that the ground influence on the input impedance of transient dipole and bow-tie antennas is significant only for elevations smaller than 1/5 of the wavelength that corresponds to the central frequency of the exciting pulse. Furthermore, a principal difference between the input impedance due to traveling-wave and standing-wave current distributions is pointed out.
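As a worked example of the one-fifth-wavelength rule, assuming a pulse with a 1 GHz central frequency (an arbitrary choice for illustration):

```python
# Elevation below which the ground affects the transient input impedance,
# taken as one fifth of the wavelength at the pulse's central frequency.
c = 3.0e8                      # speed of light, m/s
f_c = 1.0e9                    # assumed central frequency of the pulse, Hz
wavelength = c / f_c           # 0.3 m at 1 GHz
h_threshold = wavelength / 5   # 0.06 m: above this, ground influence is small
print(f"ground matters below h = {h_threshold:.3f} m")
```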
|
Formation, reactivity and aging of amorphous ferric oxides in the presence of model and membrane bioreactor derived organics.
|
Iron salts are routinely dosed in wastewater treatment as a means of achieving effluent phosphorus concentration goals. The iron oxides that result from the addition of iron salts partake in various reactions, including reductive dissolution and phosphate adsorption. The reactivity of these oxides is controlled by the conditions of formation and by processes, such as aggregation, that reduce the number of accessible surface sites following formation. The presence of organic compounds is expected to significantly affect these processes in a number of ways. In this study, amorphous ferric oxide (AFO) reactivity and aging were investigated following the addition of ferric iron (Fe(III)) to three solution systems: two synthetic buffered systems, one containing no organics and one containing alginate, and a supernatant system containing soluble microbial products (SMPs) sourced from a membrane bioreactor (MBR). Reactivity of the Fe(III) phases in these systems at various times (1-60 min) following Fe(III) addition was quantified by determining the rate constants for ascorbate-mediated reductive dissolution over short (5 min) and long (60 min) dissolution periods and for a range (0.5-10 mM) of ascorbate concentrations. AFO particle size was monitored using dynamic light scattering during the aging and dissolution periods. In the presence of alginate, AFO particles appeared to be stabilized against aggregation. However, aging in the alginate system was remarkably similar to that in the inorganic system, where aging is associated with aggregation. An aging mechanism involving restructuring within the alginate-AFO assemblage is proposed. In the presence of SMPs, a greater diversity of Fe(III) phases was evident, with both a small labile pool of organically complexed Fe(III) and a polydisperse population of stabilized AFO particles present. The prevalence of low-molecular-weight organic molecules facilitated stabilization of the Fe(III) oxyhydroxides formed, but the subsequent aging observed in the alginate system did not occur. The reactivity of the Fe(III) in the supernatant system was maintained, with little loss over at least 24 h. The capacity of SMPs to maintain the high reactivity of AFO has important implications in a reactor where Fe(III) phases encounter alternating redox conditions due to sludge recirculation, creating a cycle of reductive dissolution, oxidation and precipitation.
|
Studying emotion theories through connectivity analysis: Evidence from generalized psychophysiological interactions and graph theory
|
Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.
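The classification step, predicting the perceived emotion from a vector of connection strengths, can be sketched generically. The synthetic data, node count, and linear SVM below are illustrative assumptions, not the gPPI pipeline; on random data the classifier sits at chance, which incidentally mirrors the finding for pathway-internal connections alone.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_nodes = 120, 10
labels = rng.integers(0, 4, n_trials)            # four emotion categories

# One connectivity matrix per trial; the feature vector is its upper
# triangle (pairwise connection strengths between network nodes).
conn = rng.normal(size=(n_trials, n_nodes, n_nodes))
iu = np.triu_indices(n_nodes, k=1)
X = conn[:, iu[0], iu[1]]

acc = cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = 0.25)")
```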
|