Columns: title (string, 8–300 characters); abstract (string, 0–10k characters)
Coupled two-way clustering analysis of gene microarray data.
We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.
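A minimal sketch of the coupled two-way idea, assuming a generic clustering routine: genes are clustered using only a chosen subset of samples, samples are clustered using only a chosen subset of genes, and the resulting subsets feed the next round. The original work uses a different clustering algorithm and stability criteria; k-means and the function name below are stand-ins for illustration.

```python
# Hypothetical sketch of coupled two-way clustering: repeatedly cluster genes
# using a chosen subset of samples and samples using a chosen subset of genes,
# recording the (gene-subset, sample-subset) pairs that emerge. The authors'
# algorithm uses a different clustering method; k-means is a stand-in here.
import numpy as np
from sklearn.cluster import KMeans

def coupled_two_way_clusters(expr, n_clusters=2, n_rounds=2, min_size=8):
    """expr: genes x samples expression matrix; returns candidate subset pairs."""
    gene_sets = [np.arange(expr.shape[0])]    # start from all genes
    sample_sets = [np.arange(expr.shape[1])]  # and all samples
    found = []
    for _ in range(n_rounds):
        new_gene_sets, new_sample_sets = [], []
        for g in gene_sets:
            for s in sample_sets:
                if len(g) < min_size or len(s) < min_size:
                    continue  # skip subsets too small to cluster meaningfully
                sub = expr[np.ix_(g, s)]
                # cluster the genes using only these samples, and vice versa
                g_labels = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(sub)
                s_labels = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(sub.T)
                for k in range(n_clusters):
                    new_gene_sets.append(g[g_labels == k])
                    new_sample_sets.append(s[s_labels == k])
                    found.append((g[g_labels == k], s[s_labels == k]))
        gene_sets, sample_sets = new_gene_sets, new_sample_sets
    return found

expr = np.random.default_rng(1).normal(size=(200, 40))
print(len(coupled_two_way_clusters(expr)), "candidate gene/sample subset pairs")
```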
Corrected Late Triassic latitudes for continents adjacent to the North Atlantic.
We use a method based on a statistical geomagnetic field model to recognize and correct for inclination error in sedimentary rocks from early Mesozoic rift basins in North America, Greenland, and Europe. The congruence of the corrected sedimentary results and independent data from igneous rocks on a regional scale indicates that a geocentric axial dipole field operated in the Late Triassic. The corrected paleolatitudes indicate a faster poleward drift of approximately 0.6 degrees per million years for this part of Pangea and suggest that the equatorial humid belt in the Late Triassic was about as wide as it is today.
What is user engagement? A conceptual framework for defining user engagement with technology
The purpose of this article is to critically deconstruct the term engagement as it applies to people's experiences with technology. Through an extensive, critical multidisciplinary literature review and an exploratory study of users of Web searching, online shopping, Webcasting, and gaming applications, we conceptually and operationally defined engagement. Building on past research, we conducted semistructured interviews with the users of four applications to explore their perception of being engaged with the technology. Results indicate that engagement is a process comprised of four distinct stages: point of engagement, period of sustained engagement, disengagement, and reengagement. Furthermore, the process is characterized by attributes of engagement that pertain to the user, the system, and user-system interaction. We also found evidence of the factors that contribute to nonengagement. Emerging from this research is a definition of engagement—a term not defined consistently in past work—as a quality of user experience characterized by attributes of challenge, positive affect, endurability, aesthetic and sensory appeal, attention, feedback, variety/novelty, interactivity, and perceived user control. This exploratory work provides the foundation for future work to test the conceptual model in various application areas, and to develop methods to measure engaging user experiences. INTRODUCTION In the past few decades, human-computer interaction studies have emphasized the need to move beyond usability to understand and design for more engaging experiences (Hassenzahl & Tractinsky, 2006; Jacques, Preece, & Carey, 1995; Laurel, 1993). A Web interface that is boring, a multimedia presentation that does not captivate users' attention, or an online forum that fails to engender a sense of community is quickly dismissed with a simple mouse click. Failing to engage users equates with no sale on an electronic commerce site and no transmission of information from a Web site; people go elsewhere to perform their tasks and communicate with colleagues and friends. Successful technologies are not just usable; they engage users.
Molding process development for high density I/Os Fan-Out Wafer Level Package (FOWLP) with fine pitch RDL
With the perpetual demand for greater functionality, better performance, and greater energy efficiency at lower manufacturing cost and smaller form factor, Fan-Out Wafer Level Packaging (FOWLP) has emerged as one of the most promising technologies for meeting the demands of electronic devices for mobile and network applications. In our development work, we developed a 300mm compression molding process for high density input/output (I/O) reconstituted FOWLP mold wafers fabricated with fine pitch redistribution layers (RDL) of line width / line space (LW/LS) ≤5/5μm using the conventional mold-first approach. Our compression molding process development aims to achieve a chip-to-mold non-planarity ≤3μm, warpage of the reconstituted FOWLP mold wafer ≤1mm, and wafer level die shift ≤10μm. In this paper, we discuss the 300mm compression molding development work, including materials selection, pick-and-place (PnP) process parameters, and die shift compensation via constant and dynamic pre-shift methodologies, in achieving our targeted specifications for the fabrication of fine pitch RDL FOWLP.
Motion Estimation for Self-Driving Cars with a Generalized Camera
In this paper, we present a visual ego-motion estimation algorithm for a self-driving car equipped with a close-to-market multi-camera system. By modeling the multi-camera system as a generalized camera and applying the non-holonomic motion constraint of a car, we show that this leads to a novel 2-point minimal solution for the generalized essential matrix where the full relative motion including metric scale can be obtained. We provide the analytical solutions for the general case with at least one inter-camera correspondence and a special case with only intra-camera correspondences. We show that up to a maximum of 6 solutions exist for both cases. We identify the existence of degeneracy when the car undergoes straight motion in the special case with only intra-camera correspondences, where the scale becomes unobservable, and provide a practical alternative solution. Our formulation can be efficiently implemented within RANSAC for robust estimation. We verify the validity of our assumptions on the motion model by comparing our results against GPS/INS ground truth on a large real-world dataset collected by a car equipped with 4 cameras with minimally overlapping fields of view.
Neural Autoregressive Flows
Normalizing flows and autoregressive models have been successfully combined to produce state-of-the-art results in density estimation, via Masked Autoregressive Flows (MAF) (Papamakarios et al., 2017), and to accelerate state-of-the-art WaveNet-based speech synthesis to 20x faster than real-time (Oord et al., 2017), via Inverse Autoregressive Flows (IAF) (Kingma et al., 2016). We unify and generalize these approaches, replacing the (conditionally) affine univariate transformations of MAF/IAF with a more general class of invertible univariate transformations expressed as monotonic neural networks. We demonstrate that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions, and their greater expressivity allows them to better capture multimodal target distributions. Experimentally, NAF yields state-of-the-art performance on a suite of density estimation tasks and outperforms IAF in variational autoencoders trained on binarized MNIST.
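The key construction is a univariate transformation that is invertible because it is strictly monotonic. A small numpy sketch (not the authors' implementation) of one such transformer, with positive slopes and positive mixing weights so the map is increasing for any parameter setting:

```python
# Minimal sketch of the NAF idea: replace the affine transform of MAF/IAF with a
# univariate monotonic neural network. With all "scale" weights constrained
# positive, the map below is strictly increasing in x, hence invertible, while
# its parameters could be produced by an autoregressive conditioner network.
import numpy as np

def softplus(z):
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)  # numerically stable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def monotonic_transform(x, w_scale, w_bias, w_mix):
    """t(x) = sum_i softmax(w_mix)_i * sigmoid(softplus(w_scale)_i * x + w_bias_i);
    strictly increasing in x for any parameter values."""
    a = softplus(w_scale)                       # positive slopes
    m = np.exp(w_mix) / np.exp(w_mix).sum()     # positive mixing weights (softmax)
    return (m * sigmoid(a * x + w_bias)).sum()

rng = np.random.default_rng(0)
params = [rng.normal(size=8) for _ in range(3)]
xs = np.linspace(-3, 3, 7)
ys = [monotonic_transform(x, *params) for x in xs]
assert all(y1 < y2 for y1, y2 in zip(ys, ys[1:]))  # monotone, therefore invertible
print(np.round(ys, 3))
```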
A retrospective content analysis of studies on factors constraining the implementation of health sector reform in Ghana.
Ghana has undertaken many public service management reforms in the past two decades. But the implementation of the reforms has been constrained by many factors. This paper undertakes a retrospective study of research works on the challenges to the implementation of reforms in the public health sector. It points out that most of the studies identified: (1) centralised, weak and fragmented management system; (2) poor implementation strategy; (3) lack of motivation; (4) weak institutional framework; (5) lack of financial and human resources and (6) staff attitude and behaviour as the major causes of ineffective reform implementation. The analysis further revealed that quite a number of crucial factors obstructing reform implementation which are particularly internal to the health system have either not been thoroughly studied or have been overlooked. The analysis identified lack of leadership; weak communication and consultation; lack of stakeholder participation; and corruption and unethical professional behaviour as some of the missing variables in the literature. The study, therefore, indicated that there are gaps in the literature that need to be filled through rigorous reform evaluation based on empirical research, particularly at district, sub-district and community levels. It further suggested that future research should be concerned with the effects of both systems and structures and behavioural factors on reform implementation.
Health App Use Among US Mobile Phone Owners: A National Survey
BACKGROUND Mobile phone health apps may now seem to be ubiquitous, yet much remains unknown with regard to their usage. Information is limited with regard to important metrics, including the percentage of the population that uses health apps, reasons for adoption/nonadoption, and reasons for noncontinuance of use. OBJECTIVE The purpose of this study was to examine health app use among mobile phone owners in the United States. METHODS We conducted a cross-sectional survey of 1604 mobile phone users throughout the United States. The 36-item survey assessed sociodemographic characteristics, history of and reasons for health app use/nonuse, perceived effectiveness of health apps, reasons for stopping use, and general health status. RESULTS A little over half (934/1604, 58.23%) of mobile phone users had downloaded a health-related mobile app. Fitness and nutrition were the most common categories of health apps used, with most respondents using them at least daily. Common reasons for not having downloaded apps were lack of interest, cost, and concern about apps collecting their data. Individuals more likely to use health apps tended to be younger, have higher incomes, be more educated, be Latino/Hispanic, and have a body mass index (BMI) in the obese range (all P<.05). Cost was a significant concern among respondents, with a large proportion indicating that they would not pay anything for a health app. Interestingly, among those who had downloaded health apps, trust in their accuracy and data safety was quite high, and most felt that the apps had improved their health. About half of the respondents (427/934, 45.7%) had stopped using some health apps, primarily due to high data entry burden, loss of interest, and hidden costs. CONCLUSIONS These findings suggest that while many individuals use health apps, a substantial proportion of the population does not, and that even among those who use health apps, many stop using them. These data suggest that app developers need to better address consumer concerns, such as cost and high data entry burden, and that clinical trials are necessary to test the efficacy of health apps to broaden their appeal and adoption.
Five Facts About Prices : A Reevaluation of Menu Cost Models
We establish five facts about prices in the U.S. economy: 1) The median duration of consumer prices when sales are excluded at the product level is 11 months. The median duration of finished goods producer prices is 8.7 months. 2) Two-thirds of regular price changes are price increases. 3) The frequency of price increases responds strongly to inflation while the frequency of price decreases and the size of price increases and price decreases do not. 4) The frequency of price change is highly seasonal: It is highest in the 1st quarter and lowest in the 4th quarter. 5) The hazard function of price changes for individual consumer and producer goods is downward sloping for the first few months and then flat (except for a large spike at 12 months in consumer services and all producer prices). These facts are based on CPI microdata and a new comprehensive data set of microdata on producer prices that we construct from raw production files underlying the PPI. We show that the 1st, 2nd and 3rd facts are consistent with a benchmark menu-cost model, while the 4th and 5th facts are not.
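The first two facts reduce to simple computations over product-level price panels. A pandas sketch on made-up toy data (not the CPI/PPI microdata) illustrating median price-spell duration and the share of price changes that are increases:

```python
# Illustrative pandas sketch (hypothetical toy data, not the CPI/PPI microdata)
# of two of the statistics described above: the median duration of price spells
# and the share of price changes that are increases.
import pandas as pd

prices = pd.DataFrame({
    "product": ["a"] * 6 + ["b"] * 6,
    "month":   list(range(6)) * 2,
    "price":   [1.00, 1.00, 1.10, 1.10, 1.10, 1.05,
                2.00, 2.00, 2.00, 2.20, 2.20, 2.20],
})

durations = []
for _, grp in prices.sort_values("month").groupby("product"):
    # a new spell starts whenever the price differs from the previous month
    spell_id = grp["price"].ne(grp["price"].shift()).cumsum()
    durations.extend(grp.groupby(spell_id).size().tolist())

changes = prices.sort_values("month").groupby("product")["price"].diff().dropna()
changes = changes[changes != 0]

print("median spell duration (months):", pd.Series(durations).median())
print("share of changes that are increases:", (changes > 0).mean())
```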
Spin polarized tunneling and spin injection in Fe-GaAs hybrid structures
Spin electronics, or spintronics, is a new branch of electronics in which the spin degree of freedom of electronic devices is employed. To advance the understanding of the physics of spin injection in semiconductors, this thesis aims to contribute to the fabrication of ferromagnetic metal-semiconductor hybrid structures, typically Fe-GaAs hybrid structures, in which spin-polarized transport phenomena are studied. In order to understand spin transport at the Fe/GaAs interface, spin-polarized tunneling is studied first in this work. Fe/GaAs/Fe/Co magnetic tunneling junctions are fabricated, and the TMR effect as well as the I-V characteristics are measured at different temperatures. Interpretation of the experimental data with a theoretical model allows us to characterize the junction quality, which shows that, apart from the conductivity mismatch problem, the oxidation of the semiconductor surface and the interdiffusion between Fe and GaAs are key issues in the fabrication of high quality ferromagnet-semiconductor hybrid structures. Spin-polarized tunneling through a sulphur-passivated GaAs barrier is studied to clarify the passivation effect; however, our experiments show no positive influence of sulphur passivation. Spin injection in ferromagnetic metal-semiconductor hybrid structures is investigated in the second part of this work. Before performing the spin injection experiments, we measure the interface resistivity of Fe/GaAs Schottky barriers with different doping densities at low temperatures. Using the measured interface resistance as a guide for experimental design, magnetic p-n junction diodes and Fe/GaAs/Fe structures are fabricated, and spin injection is investigated in these devices. In the magnetic p-n junction diode, a negative GMR-like effect is found under a large applied bias when the relative magnetizations of the two magnetic electrodes are changed from parallel to antiparallel. The experimental finding agrees very well with the theoretical prediction. For spin injection in Fe/GaAs/Fe structures, the experiments are carefully performed with different surface treatments and different doping profiles of the GaAs. A small but clear magnetoresistance could only be found in the device with 50nm homogeneous heavily doped GaAs under a large bias, indicating a surface spin polarization of 2.6% in the Fe/GaAs/Fe structure.
Exact probability of erasure and a decoding algorithm for convolutional codes on the binary erasure channel
Analytic expressions for the exact probability of erasure for systematic, rate-1/2 convolutional codes used to communicate over the binary erasure channel and decoded using the soft-input, soft-output (SISO) and a posteriori probability (APP) algorithms are given. An alternative forward-backward algorithm which produces the same result as the SISO algorithm is also given. This low-complexity implementation, based upon lookup tables, is of interest for systems which use convolutional codes, such as turbo codes.
Impacts of Biochar (Black Carbon) Additions on the Sorption and Efficacy of Herbicides
The aim of this chapter is to review the effect of biochar on the fate and efficacy of herbicides. The increasing use of and need for energy worldwide, together with the depletion of fossil fuels, make the search for and use of renewable energy sources a priority. Biomass is recognized as a potential renewable energy source, and pyrolysis is considered the most promising thermo-chemical conversion of biomass into bioenergy products (Özçimen & Karaosmanoğlu, 2004). Burning biomass in the absence of oxygen (pyrolysis) yields three products: a liquid (bio-oil), a solid, and a gas (Bridgwater, 2003), with the traditional use of these products as renewable fuel and energy sources. Biochar currently lacks a universal definition, as can be seen in the range of definitions in the literature. According to Azargohar & Dalai (2006) biochar can be considered the solid product of pyrolysis, and Sohi et al. (2010) define it as biomass-derived char intended specifically for application to soil. Warnock et al. (2007) defined biochar as the term reserved for the plant biomass derived materials contained within the black carbon (BC) continuum. We recommend that the term biochar be defined as the solid residual remaining after the thermo-chemical transformation of biomass whose main intended purpose is as a means of C sequestration. However, to retain the "biochar" classification there are two restrictions: 1) biochar itself cannot be used as a fuel source (although the utilization of the energy released during production of the biochar is acceptable and encouraged) and 2) it excludes those forms of black C derived from non-renewable (fossil fuel) resources [e.g. coal, petroleum, tires] (Lehmann et al., 2006). The origin of charcoal in the environment can be natural or synthetic. In the first case, wildfire and volcanic processes are responsible for its formation (Scott, 2010; Scott & Damblon, 2010); in the second case, thermal processes such as combustion and pyrolysis convert biomass into a char (residual solid product). Pyrolysis is described by Bridgwater (2003; 2006) as thermal decomposition in the absence of oxygen, and is always the first step in the processes of gasification and combustion. Production of charcoal is favored by low temperatures and very long residence time conditions (Bridgwater, 1992). According to Goldberg (1985) black carbon is produced by the incomplete combustion of fossil fuels and vegetation that comprises the range of products of char, charcoal, graphite, ash, and
Information theoretic framework of trust modeling and evaluation for ad hoc networks
The performance of ad hoc networks depends on cooperation and trust among distributed nodes. To enhance security in ad hoc networks, it is important to evaluate trustworthiness of other nodes without centralized authorities. In this paper, we present an information theoretic framework to quantitatively measure trust and model trust propagation in ad hoc networks. In the proposed framework, trust is a measure of uncertainty with its value represented by entropy. We develop four Axioms that address the basic understanding of trust and the rules for trust propagation. Based on these axioms, we present two trust models: entropy-based model and probability-based model, which satisfy all the axioms. Techniques of trust establishment and trust update are presented to obtain trust values from observation. The proposed trust evaluation method and trust models are employed in ad hoc networks for secure ad hoc routing and malicious node detection. A distributed scheme is designed to acquire, maintain, and update trust records associated with the behaviors of nodes' forwarding packets and the behaviors of making recommendations about other nodes. Simulations show that the proposed trust evaluation system can significantly improve the network throughput as well as effectively detect malicious behaviors in ad hoc networks.
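As a rough illustration of the "trust as uncertainty" idea, one natural mapping sends the probability p that a node behaves well through the binary entropy function, so maximal uncertainty (p = 0.5) yields zero trust. This is a hedged sketch consistent with the abstract, not necessarily the authors' exact model:

```python
# Sketch of an entropy-based trust value: p = 0.5 (maximal uncertainty) gives
# zero trust, p near 1 gives trust near +1, and p near 0 gives trust near -1.
# Details of the authors' exact model may differ.
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropy_trust(p):
    """Trust in [-1, 1] derived from the probability p of good behaviour."""
    return 1 - binary_entropy(p) if p >= 0.5 else binary_entropy(p) - 1

for p in (0.5, 0.7, 0.9, 0.99, 0.1):
    print(f"p = {p:.2f} -> trust = {entropy_trust(p):+.3f}")
```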
Carbohydrate terminology and classification
Dietary carbohydrates are a group of chemically defined substances with a range of physical and physiological properties and health benefits. As with other macronutrients, the primary classification of dietary carbohydrate is based on chemistry, that is, the character of individual monomers, degree of polymerization (DP) and type of linkage (α or β), as agreed at the Food and Agriculture Organization/World Health Organization Expert Consultation in 1997. This divides carbohydrates into three main groups, sugars (DP 1–2), oligosaccharides (short-chain carbohydrates) (DP 3–9) and polysaccharides (DP⩾10). Within this classification, a number of terms are used such as mono- and disaccharides, polyols, oligosaccharides, starch, modified starch, non-starch polysaccharides, total carbohydrate, sugars, etc. While effects of carbohydrates are ultimately related to their primary chemistry, they are modified by their physical properties. These include water solubility, hydration, gel formation, crystalline state, association with other molecules such as protein, lipid and divalent cations and aggregation into complex structures in cell walls and other specialized plant tissues. A classification based on chemistry is essential for a system of measurement, prediction of properties and estimation of intakes, but does not allow a simple translation into nutritional effects since each class of carbohydrate has overlapping physiological properties and effects on health. This dichotomy has led to the use of a number of terms to describe carbohydrate in foods, for example intrinsic and extrinsic sugars, prebiotic, resistant starch, dietary fibre, available and unavailable carbohydrate, complex carbohydrate, glycaemic and whole grain. This paper reviews these terms and suggests that some are more useful than others. A clearer understanding of what is meant by any particular word used to describe carbohydrate is essential to progress in translating the growing knowledge of the physiological properties of carbohydrate into public health messages.
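The primary chemical classification described above is mechanical enough to express directly; a tiny sketch grouping carbohydrates by degree of polymerization:

```python
# Encoding the FAO/WHO primary classification quoted above: dietary carbohydrates
# grouped by degree of polymerization (DP) into sugars (DP 1-2),
# oligosaccharides (DP 3-9) and polysaccharides (DP >= 10).
def carbohydrate_class(dp: int) -> str:
    if dp < 1:
        raise ValueError("degree of polymerization must be >= 1")
    if dp <= 2:
        return "sugars"
    if dp <= 9:
        return "oligosaccharides"
    return "polysaccharides"

examples = {"glucose": 1, "sucrose": 2, "raffinose": 3, "amylose": 1000}
for name, dp in examples.items():
    print(f"{name} (DP {dp}): {carbohydrate_class(dp)}")
```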
MARSSx86: A Full System Simulator for x86 CPUs
We present MARSS, an open source, fast, full system simulation tool built on QEMU to support cycle-accurate simulation of superscalar homogeneous and heterogeneous multicore x86 processors. MARSS includes detailed models of coherent caches, interconnections, chipsets, memory and IO devices. MARSS simulates the execution of all software components in the system, including unmodified binaries of applications, OS and libraries.
Controlled Language for Multilingual Document Production: Experience with Caterpillar Technical English
Caterpillar Inc., a heavy equipment manufacturing company headquartered in Peoria, IL, supports world-wide distribution of a large number of products and parts. Each Caterpillar product integrates several complex subsystems (engine, hydraulic system, drive system, implements, electrical, etc.) for which a variety of technical documents must be produced (operations and maintenance, testing and adjusting, disassembly and assembly, specifications, etc.). To support consistent, high-quality authoring and translation of these documents from English into a variety of target languages, Caterpillar uses Caterpillar Technical English (CTE), a controlled English system developed in conjunction with Carnegie Mellon University's Center for Machine Translation (CMT) and Carnegie Group Incorporated (CGI).
Capturing the Complexity in Advanced Technology Use: Adaptive Structuration Theory
The past decade has brought advanced information technologies, which include electronic messaging systems, executive information systems, collaborative systems, group decision support systems, and other technologies that use sophisticated information management to enable multiparty participation in organization activities. Developers and users of these systems hold high hopes for their potential to change organizations for the better, but actual changes often do not occur, or occur inconsistently. We propose adaptive structuration theory (AST) as a viable approach for studying the role of advanced information technologies in organization change. AST examines the change process from two vantage points: (1) the types of structures that are provided by advanced technologies, and (2) the structures that actually emerge in human action as people interact with these technologies. To illustrate the principles of AST, we consider the small group meeting and the use of a group decision support system (GDSS). A GDSS is an interesting technology for study because it can be structured in a myriad of ways, and social interaction unfolds as the GDSS is used. Both the structure of the technology and the emergent structure of social action can be studied. We begin by positioning AST among competing theoretical perspectives of technology and change. Next, we describe the theoretical roots and scope of the theory as it is applied to GDSS use and state the essential assumptions, concepts, and propositions of AST. We outline an analytic strategy for applying AST principles and provide an illustration of how our analytic approach can shed light on the impacts of advanced technologies on organizations. A major strength of AST is that it expounds the nature of social structures within advanced information technologies and the key interaction processes that figure in their use. By capturing these processes and tracing their impacts, we can reveal the complexity of technology-organization relationships. We can attain a better understanding of how to implement technologies, and we may also be able to develop improved designs or educational programs that promote productive adaptations. (Information Technology; Structural Theory; Technology Impacts) 1.0. Introduction Information plays a distinctly social, interpersonal role in organizations (Feldman and March 1981). Perhaps for this reason, development and evaluation of technologies to support the exchange of information among organizational members has become a research tradition within the organization and information sciences (Goodman 1986, Keen and Scott Morton 1978, Van de Ven and Delbecq 1974). The past decade has brought advanced information technologies, which include electronic messaging systems, executive information systems, collaborative systems, group decision support systems, and other technologies that enable multiparty participation in organizational activities through sophisticated information management (Huber 1990, Huseman and Miles 1988, Rice 1984).
Developers and users of these systems hold high hopes for their potential to change traditional organizational design, intelligence, and decision-making for the better, but what changes do these systems actually bring to the workplace? What technology impacts should we anticipate, and how can we interpret the changes that we observe? Many researchers believe that the effects of advanced technologies are less a function of the technologies themselves than of how they are used by people. For this reason, actual behavior in the context of advanced technologies frequently differs from the "intended" impacts (Kiesler 1986, Markus and Robey 1988, Siegel, Dubrovsky, Kiesler and McGuire 1986). People adapt systems to their particular work needs, or they resist them or fail to use them at all; and there are wide variances in the patterns of computer use and, consequently, their effects on decision making and other outcomes. We propose adaptive structuration theory (AST) as a framework for studying variations in organization change that occur as advanced technologies are used. The central concepts of AST, structuration (Bourdieu 1978, Giddens 1979) and appropriation (Ollman 1971), provide a dynamic picture of the process by which people incorporate advanced technologies into their work practices. According to AST, adaptation of technology structures by organizational actors is a key factor in organizational change. There is a "duality" of structure (Orlikowski 1992) whereby there is an interplay between the types of structures that are inherent to advanced technologies (and, hence, anticipated by designers and sponsors) and the structures that emerge in human action as people interact with these technologies. As a setting for our theoretical exposition, we consider the small group using a group decision support system (GDSS). A GDSS is one type of advanced information technology; it combines computing, communication, and decision support capabilities to aid in group idea generation, planning, problem solving, and choice making. In a typical configuration, a GDSS provides a computer terminal and keyboard to each participant in a meeting so that information (e.g., facts, ideas, comments, votes) can be readily entered and retrieved; specialized software provides decision structures for aggregating, sorting, and otherwise managing the meeting information (Dennis et al. 1988, DeSanctis and Gallupe 1987, Huber 1984). A GDSS is an interesting technology for study because its features can be arranged in a myriad of ways and social interaction is intimately involved in GDSS use. Consequently, the structure of the technology and the emergent structure of social action are in prominent view for the researcher to study. There currently is burgeoning interest in GDSSs and their potential role in facilitating organizational change. GDSS is a rich context in which to expound AST, but the principles of the theory apply to the broad array of advanced information technologies. In this paper we outline the assumptions of AST and detail a methodological strategy for studying how advanced technologies such as GDSSs are brought into social interaction to effect behavioral change. We begin by positioning AST among an array of theoretical perspectives on technology and change. Next, we describe the theoretical roots and scope of the theory and state the essential assumptions and concepts of AST.
We summarize the relationships among the theoretical constructs in the form of propositions; the propositions can serve as the basis for specification of variables and hypotheses in future research. Finally, we outline a method for identifying structuring moves and present an illustration of the theory's application. Together, the theory and method provide an approach for penetrating the surface of advanced technology use to consider the deep structure of technology-induced organizational change. 2.0. Theoretical Roots of AST 2.1. Competing Views of Advanced Information Technology Effects Two major schools of thought have pursued the study of information technology and organizational change (see Table 1). The decision-making school has been more dominant. This school is rooted in the positivist tradition of research and presumes that decision making is "the primordial organizational act" (Perrow 1986); it emphasizes the cognitive processes associated with rational decision making and adopts a psychological approach to the study of technology and change. Decision theorists espouse "systems rationalism" (Rice 1984), the view that technology should consist of structures (e.g., data and decision models) designed to overcome human weaknesses (e.g., "bounded rationality" and "process losses"). Once applied, the technology should bring productivity, efficiency, and satisfaction to individuals and organizations. Variants within the decision school include "task-technology fit" models (Jarvenpaa 1989), which stress that technology must match work tasks in order to bring improvements in performance. Table 1 (Adaptive Structuration Theory Blends Perspectives from the Decision-making School and the Institutional School) contrasts the major perspectives on technology and organizational change. Decision-making School: focus on technology; hard-line determinism; relatively static models of behavior; positivist approach to research; ideographic, cross-sectional research designs; example approaches include engineering decision theory (Keen and Scott Morton 1978), task-technology "fit" (Jarvenpaa 1989) and "garbage can" models (Pinfield 1986). Social Technology School (integrative perspectives): focus on technology and social structure; soft-line determinism; mixed models of behavior; positivist and interpretive approaches are integrated; example approaches include sociotechnical systems theory (Bostrom and Heinen 1977, Pasmore 1988), structural symbolic interaction theory (Saunders and Jones 1990, Trevino et al. 1987), Barley's (1990) application of structuration theory, Orlikowski's (1992) structurational model and adaptive structuration theory. Institutional School: focus on social structure; nondeterministic models; pure process models; interpretive approach to research; example approaches include segmented institutional (Kling 1980), social information processing (Fulk et al. 1987, Salancik and Pfeffer 1978, Walther 1992) and symbolic interactionism (Blumer 1969, Reichers 1987).
Resilient remediation: Addressing extreme weather and climate change, creating community value
Recent devastating hurricanes demonstrated that extreme weather and climate change can jeopardize contaminated land remediation and harm public health and the environment. Since early 2016, the Sustainable Remediation Forum (SURF) has led research and organized knowledge exchanges to examine (1) the impacts of climate change and extreme weather events on hazardous waste sites, and (2) how we can mitigate these impacts and create value for communities. The SURF team found that climate change and extreme weather events can undermine the effectiveness of the approved site remediation, and can also affect contaminant toxicity, exposure, organism sensitivity, fate and transport, long-term operations, management, and stewardship of remediation sites. Further, failure to consider social vulnerability to climate change could compromise remediation and adaptation strategies. SURF's recommendations for resilient remediation build on resources and drivers from state, national, and international sources, and marry the practices of sustainable remediation and climate change adaptation. They outline both general principles and site-specific protocols and provide global examples of mitigation and adaptation strategies. Opportunities for synergy include vulnerability assessments that benefit and build on established hazardous waste management law, policy, and practices. SURF's recommendations can guide owners and project managers in developing a site resiliency strategy. Resilient remediation can help expedite cleanup and redevelopment, decrease public health risks, and create jobs, parks, wetlands, and resilient energy sources. Resilient remediation and redevelopment can also positively contribute to achieving international goals for sustainable land management, climate action, clean energy, and sustainable cities.
Parenteral nutrition support for patients with pancreatic cancer. Results of a phase II study
Cachexia is a common problem in patients (pts) suffering from upper gastrointestinal cancer. In addition, most of these patients suffer from malabsorption and stenosis of the gastrointestinal tract due to their illness. Various methods of supplementary nutrition (enteral, parenteral) are practised. In patients with advanced pancreatic cancer (APC), the phase angle, determined by bio-electrical impedance analysis (BIA), seems to be a survival predictor. The positive influence of additional nutrition on BIA-determined predictors is currently under discussion. To examine the impact of additional parenteral nutrition (APN), we assessed outpatients suffering from APC and progressive cachexia. The assessment was based on the BIA method. Assessment parameters were phase angle, ECM/BCM index (ratio of extracellular mass to body cell mass), and BMI (body mass index). Patients suffering from progressive weight loss in spite of additional enteral nutritional support were eligible for the study. Median treatment duration in 32 pts was 18 [8-35] weeks. Response evaluation showed a benefit in 27 pts (84%) in at least one parameter. 14 pts (43.7%) improved or stabilised in all three parameters. The median ECM/BCM index was 1.7 [1.11-3.14] at the start of APN and improved to 1.5 [1.12-3.36] during therapy. The median BMI increased from 19.7 [14.4-25.9] to 20.5 [15.4-25.0]. The median phase angle improved by 10% from 3.6 [2.3-5.1] to 3.9 [2.2-5.1]. We demonstrated the positive impact of APN on the assessed parameters, first of all the phase angle, and we observed at least a temporary benefit or stabilisation of the nutritional status in the majority of the investigated patients. Based on these findings we are currently investigating the impact of APN on survival in a larger patient cohort. ClinicalTrials.gov Identifier: NCT00919659
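The phase angle is a standard BIA quantity derived from measured resistance and reactance; a minimal sketch of its usual computation (the numbers are illustrative, not study data):

```python
# Standard bioelectrical impedance (BIA) phase angle from measured resistance R
# and reactance Xc; the input values below are made up for illustration and are
# not taken from the study.
import math

def bia_phase_angle(resistance_ohm: float, reactance_ohm: float) -> float:
    """Phase angle in degrees: arctan(Xc / R) converted from radians."""
    return math.degrees(math.atan(reactance_ohm / resistance_ohm))

print(round(bia_phase_angle(resistance_ohm=550.0, reactance_ohm=35.0), 2), "degrees")
```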
An Examination of Digital Forensic Models
Law enforcement is in a perpetual race with criminals in the application of digital technologies, and requires the development of tools to systematically search digital devices for pertinent evidence. Another part of this race, and perhaps more crucial, is the development of a methodology in digital forensics that encompasses the forensic analysis of all genres of digital crime scene investigations. This paper explores the development of the digital forensics process, compares and contrasts four particular forensic methodologies, and finally proposes an abstract model of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and the potential for incorporating nondigital electronic technologies within the abstraction model of the digital forensic procedure. Introduction The digital age can be characterized as the application of computer technology as a tool that enhances traditional methodologies. The incorporation of computer systems as a tool into private, commercial, educational, governmental, and other facets of modern life has improved
Connoisseur: Can GANs Learn Simple 1D Parametric Distributions?
Generative Adversarial Networks (GANs) have been shown to possess the capability to learn distributions of data, given infinite model capacity [1, 2]. Empirically, approximations with deep neural networks seem to have "sufficiently large" capacity and have led to success in many applications, such as image generation. However, most of the results are difficult to evaluate because of the curse of dimensionality and the unknown distribution of the data. To evaluate GANs, in this paper we consider simple one-dimensional data coming from parametric distributions, circumventing the aforementioned problems. We formulate rigorous techniques for evaluation under this setting. Based on this evaluation, we find that many state-of-the-art GANs are very difficult to train to learn the true distribution and can usually only find some of the modes. When a GAN has learned the distribution, as with MMD GAN, we observe that it has some generalization capability.
An overview of embedding models of entities and relationships for knowledge base completion
Knowledge bases (KBs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform knowledge base completion or link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This article serves as a brief overview of embedding models of entities and relationships for knowledge base completion, summarizing up-to-date experimental results on standard benchmark datasets FB15k, WN18, FB15k-237, WN18RR, FB13 and WN11.
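As a concrete member of the surveyed family, a numpy sketch of TransE-style scoring (Bordes et al., 2013), where a triple is plausible when head + relation lands near tail; the embeddings here are random placeholders rather than learned vectors:

```python
# Minimal sketch of TransE-style scoring for knowledge base completion: a triple
# (head, relation, tail) is scored by how well head + relation ≈ tail in the
# embedding space. Embeddings below are random placeholders; a real system
# learns them from the knowledge base.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Tokyo"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head, relation, tail):
    """Higher is better: negative L2 distance between head + relation and tail."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Link prediction amounts to ranking candidate tails (or heads) by this score.
for tail in ("France", "Tokyo"):
    print(f"capital_of(Paris, {tail}): {transe_score('Paris', 'capital_of', tail):.3f}")
```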
Brain Tumour Extraction from MRI Images Using MATLAB
Medical image processing is a challenging and rapidly emerging field nowadays, and the processing of MRI images is one part of this field. This paper describes the proposed strategy to detect and extract brain tumours from patients' MRI scan images of the brain. The method incorporates noise removal functions, segmentation and morphological operations, which are basic concepts of image processing. Detection and extraction of the tumour from MRI scan images of the brain is done using MATLAB software.
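A hedged Python/scikit-image sketch of the kind of pipeline described (the paper itself works in MATLAB): denoising, Otsu thresholding, morphological cleanup, and extraction of the largest bright region as the tumour candidate:

```python
# Sketch of a denoise -> threshold -> morphology -> largest-region pipeline,
# standing in for the MATLAB workflow described above.
import numpy as np
from skimage import filters, morphology, measure

def extract_bright_region(mri_slice: np.ndarray) -> np.ndarray:
    """Return a boolean mask of the largest bright connected component."""
    denoised = filters.median(mri_slice, morphology.disk(2))    # noise removal
    mask = denoised > filters.threshold_otsu(denoised)          # segmentation
    mask = morphology.binary_opening(mask, morphology.disk(3))  # remove speckle
    labels = measure.label(mask)
    if labels.max() == 0:
        return np.zeros_like(mask, dtype=bool)
    largest = max(measure.regionprops(labels), key=lambda r: r.area)
    return labels == largest.label

# Synthetic example: a noisy image with one bright blob standing in for a tumour.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, size=(128, 128))
rr, cc = np.ogrid[:128, :128]
img[(rr - 64) ** 2 + (cc - 80) ** 2 < 15 ** 2] += 0.6
print("extracted pixels:", extract_bright_region(img).sum())
```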
Non-rigid registration of 3D surfaces by deformable 2D triangular meshes
Non-rigid surface registration, particularly registration of human faces, finds a wide variety of applications in computer vision and graphics. We present a new automatic surface registration method which utilizes both attraction forces originating from geometrical and textural similarities, and stresses due to non-linear elasticity of the surfaces. Reference and target surfaces are first mapped onto their feature image planes, then these images are registered by subjecting them to local deformations, and finally 3D correspondences are established. Surfaces are assumed to be elastic sheets and are represented by triangular meshes. The internal elastic forces act as a regularizer in this ill-posed problem. Furthermore, the non-linear elasticity model allows us to handle large deformations, which can be essential, for instance, for facial expressions. The method has been tested successfully on 3D scanned human faces, with and without expressions. The algorithm runs quite efficiently using a multiresolution approach.
Arbitrary path tracking control of articulated vehicles using nonlinear control theory
In this paper, we design a path tracking controller for an articulated vehicle (a semitrailer-like vehicle) using time scale transformation and exact linearization. The proposed controller allows articulated vehicles to follow arbitrary paths consisting of arcs and lines while they are moving forward and/or backward. An experimental result of 8-shaped path tracking control of the articulated vehicle moving backward is also presented.
Communication-Efficient Learning of Deep Networks from Decentralized Data
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10–100× as compared to synchronized stochastic gradient descent.
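A compact sketch of the iterative model-averaging loop (Federated Averaging) on a toy linear-regression task; gradient steps on a linear model stand in for deep network training, and client sampling and communication details are omitted:

```python
# Federated averaging in miniature: each client takes local gradient steps from
# the current global parameters, and the server averages the results weighted by
# client data size. Linear regression stands in for a deep network.
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w = w - lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    w_global = np.zeros(dim)
    for _ in range(rounds):
        sizes, updates = [], []
        for X, y in clients:                   # in practice, a sampled subset
            updates.append(client_update(w_global, X, y))
            sizes.append(len(y))
        weights = np.array(sizes) / sum(sizes)
        w_global = sum(wk * uk for wk, uk in zip(weights, updates))
    return w_global

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (50, 200, 80):                        # unbalanced client data sizes
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))
print("recovered weights:", np.round(federated_averaging(clients), 2))
```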
On-Line Analytical Processing
On-line analytical processing (OLAP) describes an approach to decision support, which aims to extract knowledge from a data warehouse, or more specifically, from data marts. Its main idea is providing navigation through data to non-expert users, so that they are able to interactively generate ad hoc queries without the intervention of IT professionals. This name was introduced in contrast to on-line transactional processing (OLTP), so that it reflected the different requirements and characteristics between these classes of uses. The concept falls in the area of business intelligence.
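In practice an OLAP roll-up is just aggregation along dimension hierarchies; a pandas sketch over a hypothetical sales data mart:

```python
# OLAP-style navigation over a toy sales data mart (all names and figures are
# illustrative): a roll-up with totals, then a drill-down within one region.
import pandas as pd

sales = pd.DataFrame({
    "year":    [2022, 2022, 2022, 2023, 2023, 2023],
    "region":  ["EU", "EU", "US", "EU", "US", "US"],
    "product": ["A", "B", "A", "A", "B", "A"],
    "revenue": [100, 150, 120, 130, 170, 140],
})

# "Slice and dice": revenue by region and year, with grand totals as margins.
cube = sales.pivot_table(index="region", columns="year", values="revenue",
                         aggfunc="sum", margins=True)
print(cube)

# "Drill down" from region to product within one region.
print(sales[sales["region"] == "EU"].groupby(["year", "product"])["revenue"].sum())
```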
The Manufacturing Knowledge Repository - Consolidating Knowledge to Enable Holistic Process Knowledge Management in Manufacturing
The manufacturing industry is faced with strong competition making the companies’ knowledge resources and their systematic management a critical success factor. Yet, existing concepts for the management of process knowledge in manufacturing are characterized by major shortcomings. Particularly, they are either exclusively based on structured knowledge, e. g., formal rules, or on unstructured knowledge, such as documents, and they focus on isolated aspects of manufacturing processes. To address these issues, we present the Manufacturing Knowledge Repository, a holistic repository that consolidates structured and unstructured process knowledge to facilitate knowledge management and process optimization in manufacturing. First, we define requirements, especially the types of knowledge to be handled, e. g., data mining models and text documents. On this basis, we develop a conceptual repository data model associating knowledge items and process components such as machines and process steps. Furthermore, we discuss implementation issues including storage architecture variants and finally present both an evaluation of the data model and a proof of concept based on a prototypical implementation in a case example.
Feature-Based Facial Expression Recognition: Sensitivity Analysis and Experiments with A Multilayer Perceptron
In this paper, we report our experiments on feature-based facial expression recognition within an architecture based on a two-layer perceptron. We investigate the use of two types of features extracted from face images: the geometric positions of a set of fiducial points on a face, and a set of multi-scale and multi-orientation Gabor wavelet coefficients at these points. They can be used either independently or jointly. The recognition performance with different types of features has been compared, which shows that Gabor wavelet coefficients are much more powerful than geometric positions. Furthermore, since the first layer of the perceptron actually performs a nonlinear reduction of the dimensionality of the feature space, we have also studied the desired number of hidden units, i.e., the appropriate dimension to represent a facial expression in order to achieve a good recognition rate. It turns out that five to seven hidden units are probably enough to represent the space of facial expressions. Then, we have investigated the importance of each individual fiducial point to facial expression recognition. Sensitivity analysis reveals that points on the cheeks and on the forehead carry little useful information. After discarding them, not only does the computational efficiency increase, but the generalization performance also slightly improves. Finally, we have studied the significance of image scales. Experiments show that facial expression recognition is mainly a low-frequency process, and a spatial resolution of 64 × 64 pixels is probably enough.
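A sketch of the classifier setup described above, using scikit-learn's multilayer perceptron with a single small hidden layer; random vectors stand in for the Gabor coefficients at fiducial points, so the reported accuracy is only chance level:

```python
# Architecture sketch: a perceptron with one small hidden layer mapping
# per-point Gabor-style features to expression labels. Random features and
# labels stand in for real data, so accuracy should hover around chance (1/7).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_points, n_coeffs, n_expressions = 34, 18, 7
X = rng.normal(size=(700, n_points * n_coeffs))   # stand-in Gabor features
y = rng.integers(0, n_expressions, size=700)      # stand-in expression labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(7,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy on random data:", clf.score(X_test, y_test))
```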
Research Commentary - The Digital Transformation of Healthcare: Current Status and the Road Ahead
As the United States expends extraordinary efforts towards the digitization of its healthcare system, and as policy makers across the globe look to information technology as a means of making healthcare systems safer, more affordable, and more accessible, a rare and remarkable opportunity has emerged for the information systems research community to leverage its in-depth knowledge to both advance theory and influence practice and policy. Although Health IT (HIT) has tremendous potential for improving quality and reducing cost in healthcare, significant challenges need to be overcome to fully realize this potential. In this commentary, we survey the landscape of existing studies on HIT to provide an overview of the current status of HIT research. We then identify three major areas that warrant further research: 1) HIT design, implementation, and meaningful use, 2) measurement and quantification of HIT payoff and impact, and 3) extending the traditional realm of HIT. We discuss specific research questions in each domain and suggest appropriate methods to approach them. We encourage IS scholars to become active participants in the global discourse on healthcare transformation through information technology.
Image super-resolution: Historical overview and future challenges
Neurologic Effects of Caffeine
Classic drugs of abuse lead to specific increases in cerebral functional activity and dopamine release in the shell of the nucleus accumbens (the key neural structure for reward, motivation, and addiction). In contrast, caffeine at doses reflecting daily human consumption does not induce a release of dopamine in the shell of the nucleus accumbens but leads to a release of dopamine in the prefrontal cortex, which is consistent with its reinforcing properties.
Better Word Representations with Recursive Neural Networks for Morphology
Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologically aware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way.
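The core of the model is a recursive composition step that builds a word vector from morpheme vectors with a shared weight matrix. A numpy sketch with random placeholder parameters (a trained model would learn these jointly with a neural language model):

```python
# Recursive composition of morpheme vectors into a word vector, e.g.
# "unfortunately" built from "un", "fortunate" and "ly". Vectors and weights are
# random placeholders for what training would learn.
import numpy as np

rng = np.random.default_rng(0)
dim = 10
morphemes = {m: rng.normal(scale=0.1, size=dim) for m in ["un", "fortunate", "ly"]}
W = rng.normal(scale=0.1, size=(dim, 2 * dim))
b = np.zeros(dim)

def compose(left, right):
    """Parent representation from two children, as in a recursive neural network."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

unfortunate = compose(morphemes["un"], morphemes["fortunate"])
unfortunately = compose(unfortunate, morphemes["ly"])
print(np.round(unfortunately, 3))
```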
Simulation and analysis of a single phase AC-DC boost PFC converter with a passive snubber for power quality improvement
In this paper, an AC-DC boost Power Factor Correction (PFC) converter with a passive snubber is introduced. The proposed passive snubber decreases the conduction losses and increases the range of soft switching achieved. The switch achieves Zero Current Switching (ZCS) at turn-ON and Zero Voltage Switching (ZVS) at turn-OFF. There are no additional stresses on any of the semiconductor devices used. Due to the soft switching features, the PFC converter attains a high power factor of 0.98. The proposed converter with passive snubber is simulated in MATLAB and analyzed mode by mode with detailed corresponding waveforms.
Thermodynamics of the silver–olefin bond: the influence of chelation and the symbiotic effect
Enthalpy and entropy changes in the formation of chelated complexes in aqueous solution between Ag+ and some thio- and seleno-alkenes demonstrate the thermodynamic origin of the symbiotic effect and the importance of using the lowest possible temperatures for reactions involving the co-ordination of olefins.
Acceptance-based interventions for the treatment of chronic pain: A systematic review and meta-analysis
Acceptance-based interventions, such as the mindfulness-based stress reduction program and acceptance and commitment therapy, are alternatives to cognitive behavioral therapy for treating chronic pain patients. To assess the effects of acceptance-based interventions on patients with chronic pain, we conducted a systematic review and meta-analysis of controlled and noncontrolled studies reporting effects on the mental and physical health of pain patients. All studies were rated for quality. Primary outcome measures were pain intensity and depression. Secondary outcomes were anxiety, physical wellbeing, and quality of life. Twenty-two studies (9 randomized controlled studies, 5 clinical controlled studies [without randomization] and 8 noncontrolled studies) were included, totaling 1235 patients with chronic pain. An effect size on pain of 0.37 was found for the controlled studies. The effect on depression was 0.32. The quality of the studies was not found to moderate the effects of acceptance-based interventions. The results suggest that, at present, the mindfulness-based stress reduction program and acceptance and commitment therapy are not superior to cognitive behavioral therapy but can be good alternatives. More high-quality studies are needed. It is recommended to focus on therapies that integrate mindfulness and behavioral therapy. Acceptance-based therapies have small to medium effects on physical and mental health in chronic pain patients. These effects are comparable to those of cognitive behavioral therapy.
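The summary effects quoted above come from pooling per-study effect sizes; a minimal sketch of fixed-effect, inverse-variance pooling on made-up numbers (not the studies in this review):

```python
# Fixed-effect, inverse-variance weighted pooling of standardized effect sizes.
# The per-study values below are hypothetical and do not come from this review.
import numpy as np

effect_sizes = np.array([0.25, 0.40, 0.35, 0.50])   # hypothetical per-study d
variances    = np.array([0.02, 0.05, 0.03, 0.08])   # hypothetical variances of d

weights = 1.0 / variances
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled effect = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```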
Modeling Dominance in Group Conversations Using Nonverbal Activity Cues
Dominance - a behavioral expression of power - is a fundamental mechanism of social interaction, expressed and perceived in conversations through spoken words and audiovisual nonverbal cues. The automatic modeling of dominance patterns from sensor data represents a relevant problem in social computing. In this paper, we present a systematic study on dominance modeling in group meetings from fully automatic nonverbal activity cues, in a multi-camera, multi-microphone setting. We investigate efficient audio and visual activity cues for the characterization of dominant behavior, analyzing single and joint modalities. Unsupervised and supervised approaches for dominance modeling are also investigated. Activity cues and models are objectively evaluated on a set of dominance-related classification tasks, derived from an analysis of the variability of human judgment of perceived dominance in group discussions. Our investigation highlights the power of relatively simple yet efficient approaches and the challenges of audiovisual integration. This constitutes the most detailed study on automatic dominance modeling in meetings to date.
A Benchmark Dataset to Study the Representation of Food Images
It is well known that people love food. However, an unhealthy diet can cause problems for people's general health. Since health is strictly linked to diet, advanced computer vision tools to recognize food images (e.g. acquired with mobile/wearable cameras), as well as their properties (e.g., calories), can help diet monitoring by providing useful information to experts (e.g., nutritionists) to assess the food intake of patients (e.g., to combat obesity). Food recognition is a challenging task since food is intrinsically deformable and presents high variability in appearance. Image representation plays a fundamental role. To properly study the peculiarities of image representation in the food application context, a benchmark dataset is needed. These facts motivate the work presented in this paper. In this work we introduce the UNICT-FD889 dataset. It is the first food image dataset composed of over 800 distinct plates of food which can be used as a benchmark to design and compare representation models of food images. We exploit the UNICT-FD889 dataset for Near Duplicate Image Retrieval (NDIR) purposes by comparing three standard state-of-the-art image descriptors: Bag of Textons, PRICoLBP and SIFT. Results confirm that both textures and colors are fundamental properties in food representation. Moreover, the experiments point out that the Bag of Textons representation obtained considering the color domain is more accurate than the other two approaches for NDIR.
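Near Duplicate Image Retrieval ultimately ranks database images by the distance between descriptors; a sketch using a chi-square-style distance on placeholder normalized histograms of the kind a Bag of Textons representation would produce:

```python
# NDIR in miniature: rank database images by a chi-square-like distance between
# L1-normalized histogram descriptors. The descriptors below are random
# placeholders, not real Bag of Textons features.
import numpy as np

rng = np.random.default_rng(0)
db = rng.random((889, 128))
db /= db.sum(axis=1, keepdims=True)          # L1-normalized histograms
query = rng.random(128)
query /= query.sum()

def chi2_distance(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

distances = np.array([chi2_distance(query, h) for h in db])
top5 = np.argsort(distances)[:5]
print("top-5 retrieved image indices:", top5)
print("their distances:", np.round(distances[top5], 4))
```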
The contrasting influence of short-term hypoxia on the hydraulic properties of cells and roots of wheat and lupin
Little is known about water flow across intact root cells and roots in response to hypoxia. Responses may be rapid if regulated by aquaporin activity, but only if water crosses membranes. We measured the transport properties of roots and cortical cells of three important crop species in response to hypoxia (0.05 mol O2 m-3): wheat (Triticum aestivum L.), narrow-leafed lupin (Lupinus angustifolius L.) and yellow lupin (Lupinus luteus L.). Hypoxia influenced solute transport within minutes of exposure as indicated by increases in root pressure (Pr) and decreases in turgor pressure (Pc), but these effects were only significant in lupins. Re-aeration returned Pr to original levels in yellow lupin, but in narrow-leafed lupin, Pr declined to zero or lower values without recovery even when re-aerated. Hypoxia inhibited hydraulic conductivity of root cortical cells (Lpc) in all three species, but only inhibited hydraulic conductivity of roots (Lpr) in wheat, indicating different pathways for radial water flow across lupin and wheat roots. The inhibition of Lpr of wheat depended on the length of the root, and inhibition of Lpc in the endodermis could account for the changes in Lpr. During re-aeration, aquaporin activity increased in wheat roots causing an overshoot in Lpr. The results of this study demonstrate that the roots of these species not only vary in hydraulic properties but also vary in their sensitivity to the same external O2 concentration. Additional keywords: hydraulic conductivity, oxygen deficiency, pressure probe, root pressure, turgor pressure.
Initial Access, Mobility, and User-Centric Multi-Beam Operation in 5G New Radio
5G radio access networks are expected to provide very high capacity, ultra-reliability and low latency, seamless mobility, and ubiquitous end-user experience anywhere and anytime. Driven by such stringent service requirements coupled with the expected dense deployments and diverse use case scenarios, the architecture of 5G New Radio (NR) wireless access has further evolved from the traditionally cell-centric radio access to a more flexible beam-based user-centric radio access. This article provides an overview of the NR system multi-beam operation in terms of initial access procedures and mechanisms associated with synchronization, system information, and random access. We further discuss inter-cell mobility handling in NR and its reliance on new downlink-based measurements to compensate for a lack of always-on reference signals in NR. Furthermore, we describe some of the user-centric coordinated transmission mechanisms envisioned in NR in order to realize seamless intra/inter-cell handover between physical transmission and reception points and reduce the interference levels across the network.
Fetal Growth versus Birthweight: The Role of Placenta versus Other Determinants
INTRODUCTION Birthweight is used as an indicator of intrauterine growth, and determinants of birthweight are widely studied. Less is known about determinants of deviating patterns of growth in utero. We aimed to study the effects of maternal characteristics on both birthweight and fetal growth in third trimester and introduce placental weight as a possible determinant of both birthweight and fetal growth in third trimester. METHODS The STORK study is a prospective cohort study including 1031 healthy pregnant women of Scandinavian heritage with singleton pregnancies. Maternal determinants (age, parity, body mass index (BMI), gestational weight gain and fasting plasma glucose) of birthweight and fetal growth estimated by biometric ultrasound measures were explored by linear regression models. Two models were fitted, one with only maternal characteristics and one which included placental weight. RESULTS Placental weight was a significant determinant of birthweight. Parity, BMI, weight gain and fasting glucose remained significant when adjusted for placental weight. Introducing placental weight as a covariate reduced the effect estimate of the other variables in the model by 62% for BMI, 40% for weight gain, 33% for glucose and 22% for parity. Determinants of fetal growth were parity, BMI and weight gain, but not fasting glucose. Placental weight was significant as an independent variable. Parity, BMI and weight gain remained significant when adjusted for placental weight. Introducing placental weight reduced the effect of BMI on fetal growth by 23%, weight gain by 14% and parity by 17%. CONCLUSION In conclusion, we find that placental weight is an important determinant of both birthweight and fetal growth. Our findings indicate that placental weight markedly modifies the effect of maternal determinants of both birthweight and fetal growth. The differential effect of third trimester glucose on birthweight and growth parameters illustrates that birthweight and fetal growth are not identical entities.
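The two-model comparison described above can be illustrated with a short regression sketch; the column names and data file are hypothetical and the models below only mimic the structure of the analysis, not the STORK dataset itself.

```python
# Sketch of the two-model comparison: maternal determinants of birthweight,
# with and without placental weight, and the attenuation of each coefficient.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # assumed columns: birthweight, parity, bmi,
                                # weight_gain, glucose, placental_weight

m1 = smf.ols("birthweight ~ parity + bmi + weight_gain + glucose", data=df).fit()
m2 = smf.ols("birthweight ~ parity + bmi + weight_gain + glucose + placental_weight",
             data=df).fit()

# How much does adjusting for placental weight attenuate each maternal coefficient?
for var in ["parity", "bmi", "weight_gain", "glucose"]:
    reduction = 100 * (1 - m2.params[var] / m1.params[var])
    print(f"{var}: {reduction:.0f}% reduction after adding placental weight")
```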
Robotic unilateral and bilateral upper-limb movement training for stroke survivors afflicted by chronic hemiparesis
Stroke is the leading cause of long-term neurological disability and the principal reason for seeking rehabilitative services in the US. Learning-based rehabilitation training enables independent mobility in the majority of patients post stroke; however, restoration of fine manipulation, motor function and task-specific functions of the hemiplegic arm and hand is noted in fewer than 15% of stroke patients. Brain plasticity is the innate mechanism enabling the recovery of motor skills through neurological reorganization of the brain in response to manipulation of the limbs. The objective of this research was to evaluate the therapeutic efficacy for the upper limbs of a dual-arm exoskeleton system (EXO-UL7) using three different modalities: bilateral mirror-image training with symmetric movements of both arms, unilateral movement of the affected arm, and standard care. Five hemiparetic subjects were randomly assigned to each therapy modality. An upper-limb exoskeleton was used to provide the bilateral and unilateral treatments. Standard care was provided by a licensed physical therapist. Subjects were evaluated before and after the interventions using 13 different clinical measures. Following these treatments, all of the subjects demonstrated significant improvement of their fine and gross motor control across all the treatment modalities. Subjects exhibited significant improvements in range of motion of the shoulder, and improved muscle strength for bilateral training and standard care, but not for unilateral training. In conclusion, a synergetic approach in which robotic treatments (unilateral and bilateral, depending on the level of motor control) are supplemented by the standard of care may maximize the outcome of motor control recovery following stroke.
A Distributed Canny Edge Detector: Algorithm and FPGA Implementation
The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100 MHz and is faster than existing FPGA and GPU implementations.
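A heavily simplified block-wise variant of this idea can be sketched as follows: compute thresholds per block from the local gradient distribution and run Canny tile by tile. The block size and percentile choices are assumptions, and the sketch omits the paper's block-type classification and nonuniform-histogram hysteresis.

```python
# Rough block-wise Canny sketch with per-block adaptive thresholds.
import cv2
import numpy as np

def block_canny(gray, block=64, lo_pct=70, hi_pct=90):
    edges = np.zeros_like(gray)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y + block, x:x + block]
            m = mag[y:y + block, x:x + block]
            lo, hi = np.percentile(m, [lo_pct, hi_pct])
            # Hysteresis is applied per block here, unlike the frame-level Canny.
            edges[y:y + block, x:x + block] = cv2.Canny(tile, lo, hi)
    return edges

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical test image
cv2.imwrite("edges.png", block_canny(gray))
```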
Control of Integrated Powertrain With Electronic Throttle and Automatic Transmission
A process to design the control strategy for a vehicle with electronic throttle control (ETC) and automatic transmission is proposed in this paper. The driver's accelerator pedal position is interpreted as a power request, which is to be satisfied by coordinating the transmission gear shift and the throttle opening in an optimal fashion. The dynamic programming (DP) technique is used to obtain the optimal gear shift and throttle opening which maximizes fuel economy while satisfying the power demand. The optimal results at different power levels are then combined to form a gear map and a throttle map which governs the operation of the integrated powertrain. A control architecture concept is presented where the relationship between the accelerator pedal position and the power demand level can be adjusted according to the preference of the vehicle performance target. Simulation, vehicle test, and dynamometer test results show that the proposed integrated powertrain control scheme produces power consistently and improves fuel efficiency compared with conventional powertrain control schemes
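As a toy stand-in for the optimization described above, the sketch below builds a gear/throttle map by picking, for each power demand, the pair with the lowest fuel rate that still meets the demand. The fuel and power models are invented placeholders, and the brute-force search replaces the paper's dynamic-programming formulation over a drive cycle.

```python
# Toy gear/throttle map: cheapest (gear, throttle) pair meeting each power demand.
import numpy as np

GEARS = [1, 2, 3, 4, 5]
THROTTLES = np.linspace(0.1, 1.0, 19)

def power_out(gear, throttle, speed=20.0):    # kW, placeholder model
    return 40.0 * throttle * (1.0 + 0.1 * gear) * speed / 20.0

def fuel_rate(gear, throttle, speed=20.0):    # g/s, placeholder model
    return 0.5 + 2.5 * throttle ** 2 + 0.05 * speed / gear

def best_operating_point(power_demand):
    candidates = [(fuel_rate(g, t), g, t) for g in GEARS for t in THROTTLES
                  if power_out(g, t) >= power_demand]
    return min(candidates)[1:] if candidates else (GEARS[-1], 1.0)

gear_map = {p: best_operating_point(p) for p in range(5, 65, 5)}
for p, (g, t) in gear_map.items():
    print(f"{p:2d} kW -> gear {g}, throttle {t:.2f}")
```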
Precautionary Demand for Money in a Monetary Search Business Cycle Model
The model can account for relevant properties of the data, but not very significantly. The biggest quantitative potential for search frictions in this setting appears to be in the dynamic implications they generate for inventories and markups in the retail sector. Assessment of this potential is in progress.
Adaptive photo collection page layout
This paper presents a new photo collection page layout that attempts to maximize page coverage without having photos overlap. Layout is based on a hierarchical page partition, which provides explicit control over the aspect ratios and relative areas of the photos. We present an efficient method for finding a partition that produces a photo arrangement suitable for the shape of the page. Rather than relying on a stochastic search we employ a deterministic procedure that mimics the natural process of adding photos to the layout one by one.
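A minimal deterministic partition in this spirit might look like the sketch below, which recursively splits the page until each region holds one photo; it ignores the aspect-ratio and relative-area controls of the actual method.

```python
# Minimal hierarchical page partition: split the longer side recursively
# until each region holds exactly one photo. Toy illustration only.

def partition(region, photos):
    """region = (x, y, w, h); photos = list of ids. Returns {id: region}."""
    x, y, w, h = region
    if len(photos) == 1:
        return {photos[0]: region}
    k = len(photos) // 2
    if w >= h:  # split the longer side so cells stay roughly square
        w1 = w * k / len(photos)
        left, right = (x, y, w1, h), (x + w1, y, w - w1, h)
    else:
        h1 = h * k / len(photos)
        left, right = (x, y, w, h1), (x, y + h1, w, h - h1)
    out = partition(left, photos[:k])
    out.update(partition(right, photos[k:]))
    return out

layout = partition((0, 0, 8.5, 11.0), ["p1", "p2", "p3", "p4", "p5"])
for pid, rect in layout.items():
    print(pid, [round(v, 2) for v in rect])
```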
Effects of climate change on international tourism
We present a simulation model of the flow of tourists between 207 countries. The model almost perfectly reproduces the calibration year 1995, and performs well in reproducing the observations for 1980, 1985 and 1990. The model is used to generate scenarios of international tourist departures and arrivals for the period 2000-2075, with particular emphasis on climate change. The growth rate of international tourism is projected to increase over the coming decades, but may slow down later in the century as demand for travel saturates. Emissions of carbon dioxide would increase fast as well. With climate change, preferred destinations would shift to higher latitudes and altitudes. Tourists from temperate climates would spend more holidays in their home countries. As such tourists currently dominate the international tourism market, climate change would decrease worldwide tourism. The effects of climate change, however, are small compared to the baseline projections.
A deep language model for software code
Existing language models such as n-grams for software code often fail to capture a long context where dependent code elements scatter far apart. In this paper, we propose a novel approach to build a language model for software code to address this particular issue. Our language model, partly inspired by human memory, is built upon the powerful deep learning-based Long Short Term Memory architecture that is capable of learning long-term dependencies which occur frequently in software code. Results from our intrinsic evaluation on a corpus of Java projects have demonstrated the effectiveness of our language model. This work contributes to realizing our vision for DeepSoft, an end-to-end, generic deep learning-based framework for modeling software and its development process.
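A generic token-level LSTM language model of the kind described above can be sketched in a few lines of PyTorch; the vocabulary size, hyperparameters and fake token batch are placeholders rather than the authors' DeepSoft configuration.

```python
# Generic LSTM language model over code tokens: embed, recur, predict next token.
import torch
import torch.nn as nn

class CodeLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq_len) int64
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h)                    # logits for the next token

model = CodeLSTM(vocab_size=10_000)
batch = torch.randint(0, 10_000, (4, 50))      # fake token ids standing in for code
logits = model(batch[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 10_000), batch[:, 1:].reshape(-1))
loss.backward()                                # an optimizer step would follow
print(float(loss))
```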
New Horizons for a Data-Driven Economy
Abstraction from the underlying big data technologies is needed to enable ease of use for data scientists, and for business users. Many of the techniques required for real-time, prescriptive analytics, such as predictive modelling, optimization, and simulation, are data and compute intensive. Combined with big data, these require distributed storage and parallel or distributed computing. At the same time, many of the machine learning and data mining algorithms are not straightforward to parallelize. A recent survey (Paradigm 4 2014) found that "although 49 % of the respondent data scientists could not fit their data into relational databases anymore, only 48 % have used Hadoop or Spark—and of those 76 % said they could not work effectively due to platform issues". This is an indicator that big data computing is too complex to use without sophisticated computer science know-how. One direction of advancement is for abstractions and high-level procedures to be developed that hide the complexities of distributed computing and machine learning from data scientists. The other direction of course will be more skilled data scientists, who are literate in distributed computing, or distributed computing experts becoming more literate in data science and statistics. Advances are needed for the following technologies: • Abstraction is a common tool in computer science. Each technology at first is cumbersome. Abstraction manages complexity for the user.
Support Vector Ordinal Regression
In this letter, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The sequential minimal optimization algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on some benchmark and real-world data sets, including applications of ordinal regression to information retrieval, verify the usefulness of these approaches.
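To make the threshold idea concrete, the sketch below trains a tiny "all-threshold" ordinal model by subgradient descent: a shared score w·x is compared against ordered thresholds b_j. This illustrates the parallel-hyperplane structure only; it is not the SMO-based solver proposed in the letter, and the toy data and step sizes are assumptions.

```python
# Tiny all-threshold ordinal regression trained by subgradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
score = X @ np.array([1.5, -1.0])
y = np.digitize(score, [-1.0, 0.0, 1.0]) + 1           # ordinal labels 1..4

def fit(X, y, ranks=4, lr=0.01, epochs=200, lam=1e-3):
    w = np.zeros(X.shape[1])
    b = np.linspace(-1, 1, ranks - 1)                   # thresholds b_1 <= ... <= b_{r-1}
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = xi @ w
            gw = lam * w
            for j in range(ranks - 1):
                if j + 1 < yi and s - b[j] < 1:          # should lie above threshold j
                    gw -= xi; b[j] -= lr
                elif j + 1 >= yi and b[j] - s < 1:       # should lie below threshold j
                    gw += xi; b[j] += lr
            w -= lr * gw
        b.sort()                                         # keep thresholds ordered
    return w, b

def predict(X, w, b):
    return 1 + ((X @ w)[:, None] > b).sum(axis=1)        # count thresholds passed

w, b = fit(X, y)
print("training accuracy:", np.mean(predict(X, w, b) == y))
```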
The relationship between mobile phone location sensor data and depressive symptom severity
BACKGROUND Smartphones offer the hope that depression can be detected using passively collected data from the phone sensors. The aim of this study was to replicate and extend previous work using geographic location (GPS) sensors to identify depressive symptom severity. METHODS We used a dataset collected from 48 college students over a 10-week period, which included GPS phone sensor data and the Patient Health Questionnaire 9-item (PHQ-9) to evaluate depressive symptom severity at baseline and end-of-study. GPS features were calculated over the entire study, for weekdays and weekends, and in 2-week blocks. RESULTS The results of this study replicated our previous findings that a number of GPS features, including location variance, entropy, and circadian movement, were significantly correlated with PHQ-9 scores (r's ranging from -0.43 to -0.46, p-values <  .05). We also found that these relationships were stronger when GPS features were calculated from weekend, compared to weekday, data. Although the correlation between baseline PHQ-9 scores with 2-week GPS features diminished as we moved further from baseline, correlations with the end-of-study scores remained significant regardless of the time point used to calculate the features. DISCUSSION Our findings were consistent with past research demonstrating that GPS features may be an important and reliable predictor of depressive symptom severity. The varying strength of these relationships on weekends and weekdays suggests the role of weekend/weekday as a moderating variable. The finding that GPS features predict depressive symptom severity up to 10 weeks prior to assessment suggests that GPS features may have the potential as early warning signals of depression.
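Two of the GPS features named above (location variance and location entropy) can be computed roughly as below, following definitions common in this literature; the grid-based clustering and the synthetic coordinates are simplifications.

```python
# Sketch of two GPS features: location variance and location entropy.
import numpy as np
from collections import Counter

def location_variance(lat, lon):
    # Often reported on a log scale to tame heavy tails.
    return np.log(np.var(lat) + np.var(lon) + 1e-12)

def location_entropy(lat, lon, grid=0.01):
    # Bin coordinates into coarse cells and compute entropy of time spent per cell.
    cells = Counter(zip(np.round(lat / grid), np.round(lon / grid)))
    p = np.array(list(cells.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
lat = 41.88 + 0.01 * rng.normal(size=1000)   # hypothetical samples around one city
lon = -87.63 + 0.01 * rng.normal(size=1000)
print(location_variance(lat, lon), location_entropy(lat, lon))
```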
Condom Negotiations among Female Sex Workers in the Philippines: Environmental Influences
BACKGROUND Social and structural influences of condom negotiation among female sex workers (FSWs) remain understudied. This study assesses environmental and individual factors associated with condom negotiation among FSWs at high risk for acquiring HIV in a large urban setting of Metro Manila, Philippines. METHODS Female bar/spa workers (N = 498), aged 18 and over, underwent interview-led surveys examining their sexual health practices in the context of their risk environments. Data were collected from April 2009-January 2010 from 54 venues. Multiple logistic regressions were conducted to assess socio-behavioral factors (e.g., age, education, length of time employed as an entertainer, and alcohol/drug use) and socio-structural factors (e.g., venue-level peer/manager support, condom rule/availability, and sex trafficking) associated with condom negotiation, adjusting for individuals nested within venues. RESULTS Of 142 FSWs who traded sex in the previous 6 months (included in the analysis), 24% did not typically negotiate condom use with venue patrons. Factors in the physical environment--trafficked/coerced into work (AOR = 12.92, 95% CI = 3.34-49.90), economic environment--sex without a condom to make more money (AOR = 1.52, 95% CI 1.01-2.30), policy environment--sex without a condom because none was available (AOR = 2.58, 95% CI = 1.49-4.48), and individual risk--substance use (AOR = 2.36, 95% CI = 1.28-4.35) were independently associated with FSWs' lack of condom negotiation with venue patrons. CONCLUSIONS Factors in the physical, economic, and policy environments, over individual (excepting substance use) and social level factors, were significantly associated with these FSWs' condom negotiations in the Philippines. Drawing upon Rhodes' risk environment framework, these results highlight the need for policies that support safer sex negotiations among sex workers in the context of their risk environments. Interventions should reduce barriers to condom negotiation for FSWs trafficked/coerced into their work, substance using, and impacted by economic conditions and policies that do not support condom availability.
Childhood Obesity Perceptions Among African American Caregivers in a Rural Georgia Community: A Mixed Methods Approach
Given the pivotal role of African American caregivers' perceptions of childhood obesity in rural areas, the inclusion of caregivers' perceptions could potentially help reduce childhood obesity rates. The objective of the current study was to explore childhood obesity perceptions among African Americans in a rural Georgia community. This concurrent mixed methods study utilized two theoretical frameworks: Social Cognitive Theory and the Social Ecological Model. Using a convenience sample, caregivers ages 22–65 years completed a paper-based survey (n = 135) and a face-to-face interview (n = 12) to explore perceptions of obesity risk factors, health complications, weight status, built environment features, and obesity prevention approaches. Descriptive statistics were generated and a six-step process was used for qualitative analysis. Participants commonly cited behavioral risk factors; yet, social aspects and the appearance of the community were not considered contributing factors. Chronic diseases were reported as obesity health complications. Caregivers had a distorted view of their child's weight status. In addition, analysis revealed that caregivers assessed the child's weight and height measurements by the child's appearance or a recent doctor visit. Environmental barriers reported by caregivers included safety concerns and insufficient physical activity venues and programs. Also, caregivers conveyed that parents are an imperative component of preventing obesity. Although this study found caregivers were aware of obesity risk factors, health complications, built environment features, and prevention approaches, their obesity perceptions were not incorporated into school or community prevention efforts. Findings suggest that children residing in rural areas need tailored efforts that address caregiver perceptions of obesity.
Deep Neural Networks for YouTube Recommendations
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
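The two-stage split described above can be illustrated with a toy retrieve-then-rank loop; the embeddings, freshness signal and scoring function are invented, and this is in no way the production model.

```python
# Toy two-stage recommender: cheap candidate generation, then a richer ranker.
import numpy as np

rng = np.random.default_rng(0)
n_videos, dim = 10_000, 32
video_emb = rng.normal(size=(n_videos, dim))

def generate_candidates(user_emb, k=200):
    # Stage 1: approximate "hundreds from millions" with a dot-product top-k.
    scores = video_emb @ user_emb
    return np.argpartition(-scores, k)[:k]

def rank(user_emb, candidates, freshness):
    # Stage 2: a richer (here: trivial) scorer over the small candidate set.
    scores = video_emb[candidates] @ user_emb + 0.1 * freshness[candidates]
    return candidates[np.argsort(-scores)][:20]

user = rng.normal(size=dim)
freshness = rng.random(n_videos)
top20 = rank(user, generate_candidates(user), freshness)
print(top20[:5])
```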
Levels of ICT Integration among Teacher Educators in a Teacher Education Academic College
This article examines the perspective of teacher educators and academic officials in an academic teacher education program regarding the integration of ICT in the teacher education program. The study portrays the current state of the ICT integration process and the implementation of the program for “Adapting Teacher Training Colleges to 21st Century Education” in a specific academic college in one of Israel’s outlying areas. This mixed methods study combined quantitative and qualitative methods. Data was collected by means of a closed questionnaire, an open-ended questionnaire for the teacher educators (N = 68), and semi-structured interviews conducted with the academic officials (N = 12). Findings revealed a hierarchical range of ICT integration in teaching, which reflects different profiles of teacher educators who integrate innovative pedagogies. The three integration levels (the basic level, the focused level, and the creative level) reflect the scope of ICT integration in the context of teacher training creating a continuum of integration and implementation, which can serve as an infrastructure for the effective adoption and integration of this innovative pedagogy by teacher educators and academic officials in academic teacher training colleges.
NASA@WORK — Welcome!
NASA@WORK is an internal, agency-wide platform that provides NASA employees an unconventional and inventive way to share knowledge and advance projects.
External validity in randomised controlled trials of acupuncture for osteoarthritis knee pain.
OBJECTIVES To assess two aspects of the external validity of acupuncture research for osteoarthritis knee pain and determine the common acupoints and treatment parameters used. METHODS The external validity of 16 randomised controlled trials (RCTs) was investigated using a scale consisting of two aspects: reporting and performance. The reporting aspect included the acupuncturist's background, study location, treatment details, patient characteristics, positive trial results, adverse effects and between-group statistical differences, whereas treatment appropriateness, appropriate controls and outcomes were classified under the performance aspect. Acupuncture treatment in RCTs was compared with common practice according to literature sources and a survey of acupuncturists working in different parts of Thailand. RESULTS The levels of external validity for the reporting and performance aspects were in the range of 31.3% to 100%. Statistical values such as the mean difference and confidence interval were reported by a minority of trials (43.8%). Patient satisfaction and quality of life were seldom used (31.3%). There were minor differences between research and practice in terms of the points used (25.0%), number of treatment sessions (6.3%) and frequency (12.5%). The most frequently used points were ST34, ST35, ST36, SP6, SP9, SP10, GB34, Xiyan and ah shi points, and the commonly used treatment parameters were 20 minutes, 10-15 sessions and two treatments weekly. CONCLUSIONS Reporting of the external validity of acupuncture RCTs for knee pain was notably inadequate in terms of trial setting, treatment provider and statistical reporting. The majority of studies involved appropriate controls and outcomes and applied acupuncture treatments in line with practice.
Detection of copy-move image forgery based on discrete cosine transform
Since powerful editing software is easily accessible, manipulation of images is expedient and easy without leaving any noticeable evidence. Hence, it becomes a challenging chore to authenticate the genuineness of images, as it is impossible for the human naked eye to distinguish between a tampered image and the actual image. Among the methods most extensively used to copy and paste regions within the same image is the copy-move method. The Discrete Cosine Transform (DCT) has the ability to detect tampered regions accurately. Nevertheless, in terms of precision (false positives) and recall (false negatives), the block size of the overlapping blocks influences the performance. In this paper, the researchers implemented copy-move image forgery detection using DCT coefficients. Firstly, using a standard image conversion technique, the RGB image is transformed into a grayscale image. Consequently, the grayscale image is segregated into overlapping blocks of m × m pixels, m = 4, 8. 2D DCT coefficients are calculated and repositioned into a feature vector using zig-zag scanning in every block. Eventually, lexicographic sort is used to sort the feature vectors. Finally, duplicated blocks are located by the Euclidean distance. In order to gauge the performance of the copy-move detection technique with various block sizes with respect to accuracy and storage, a similarity threshold D_similar = 0.1 and a distance threshold N_d = 100 are used on the 10 input images. Consequently, the 4 × 4 overlapping block size had a high false positive rate and thus decreased the accuracy of forgery detection. However, the 8 × 8 overlapping block performed more accurately for forgery detection in terms of precision and recall compared to the 4 × 4 overlapping block. In a nutshell, the accuracy results for different overlapping block sizes are influenced by the size of the forged area, the distance between two forged areas and the threshold values used in the research.
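A condensed version of the detection pipeline described above might look like this: per-block DCT features, lexicographic sorting, and a Euclidean-distance test between neighbouring feature vectors. The truncated 4x4 coefficient block (in place of full zig-zag vectors) and the thresholds are simplifying assumptions.

```python
# Simplified copy-move detection: block DCT features, lexicographic sort,
# then a distance test between neighbours in the sorted order.
import cv2
import numpy as np

def detect_copy_move(gray, b=8, sim_thresh=0.1, min_dist=100):
    h, w = gray.shape
    feats, coords = [], []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            block = gray[y:y + b, x:x + b].astype(np.float32)
            d = cv2.dct(block)[:4, :4].flatten()        # low-frequency coefficients
            feats.append(d / (np.linalg.norm(d) + 1e-9))
            coords.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])                   # lexicographic sort of rows
    matches = []
    for i, j in zip(order[:-1], order[1:]):             # compare sorted neighbours
        if np.linalg.norm(feats[i] - feats[j]) < sim_thresh:
            (y1, x1), (y2, x2) = coords[i], coords[j]
            if (y1 - y2) ** 2 + (x1 - x2) ** 2 >= min_dist ** 2:
                matches.append((coords[i], coords[j]))
    return matches

gray = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
print(len(detect_copy_move(gray)), "candidate duplicated block pairs")
```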
Impact of acute kidney injury on coagulation in adult minimal change nephropathy
A hypercoagulable state exists in patients with nephrotic syndrome (NS), which more easily leads to venous thromboembolism (VTE). However, whether acute kidney injury (AKI), a common complication of NS, affects the hypercoagulable state and VTE has rarely been elucidated. In this study, we aimed to explore coagulation changes and analyze relevant influencing factors in NS-AKI patients. A total of 269 consecutive NS patients with minimal change disease (MCD) between 2011 and 2016 were included in this observational study. Ninety-one cases were in the AKI group and 178 cases in the non-AKI group. The 1:1 propensity score matching (PSM) method was applied to match the baseline information. The coagulation biomarkers were compared, and the thrombosis events were recorded. Linear correlation was performed to detect any relation between D-dimer and clinical data. The PSM method gave matched pairs of 88 MCD patients with AKI and non-AKI patients, resulting in no differences in baseline information. The D-dimer, fibrinogen, and thromboelastography parameters maximum amplitude (MA) and G values of the MCD-AKI patients were significantly higher than the levels of the MCD patients without AKI (D-dimer: 1.8 [1.0, 3.3] vs 1.1 [0.6, 1.7] mg/L, P < 0.001; fibrinogen: 7.0 ± 2.0 vs 6.5 ± 1.4 g/L, P = 0.036; MA: 74.6 ± 5.0 vs 70.5 ± 5.3 mm, P = 0.020; G: 15.7 ± 5.3 vs 12.5 ± 3.3, P = 0.034). For the MCD patients, the serum creatinine, white blood cell count, and interleukin-6 levels in the patients with D-dimers >1 mg/L were significantly higher than those of patients with D-dimers ≤1 mg/L. The correlation analysis showed that the D-dimer level was correlated with serum creatinine, white blood cell count, and interleukin-6 (r = 0.410, P < 0.001; r = 0.248, P < 0.001; r = 0.306, P < 0.001, respectively). Five deep vein thrombosis events occurred in the AKI group and 1 pulmonary embolism event occurred in the non-AKI group after adjusting the propensity score value. AKI appeared to have an association with a higher incidence of VTE, but the difference was not statistically significant (RR: 4.9, 95% CI: 0.6-42.7, P = 0.154). The MCD-NS patients complicated with AKI had a more severe hypercoagulable state, which might be associated with the active inflammation of AKI that mediated activation of the coagulation system.
Fluid and electrolytes in the aged.
OBJECTIVE To review the physiological changes in fluid and electrolytes that occur in aging. DATA SOURCES Data collected for this review were identified from a MEDLINE database search of the English-language literature. The indexing terms were fluids, intravenous fluids, fluid resuscitation, fluid management, perioperative, electrolytes, aged, elderly, hemodynamics, hyponatremia, hypernatremia, hypocalcemia, hypercalcemia, hypomagnesemia, hypermagnesemia, hypophosphatemia, hypokalemia, and hyperkalemia. Relevant references from articles obtained by means of the above search terms were also used. STUDY SELECTION All pertinent studies were included. Only articles that were case presentations or did not specifically address the topic were excluded. DATA SYNTHESIS The fastest-growing segment of the population in the United States is individuals 65 years or older. It is imperative that health care professionals review the physiological changes that manifest during the aging process. Fluids and electrolytes are important perioperative factors that undergo age-related changes. These changes include impaired thirst perception; decreased glomerular filtration rate; alterations in hormone levels, including antidiuretic hormone, atrial natriuretic peptide, and aldosterone; decreased urinary concentrating ability; and limitations in excretion of water, sodium, potassium, and acid. CONCLUSIONS There are age-related alterations in the homeostatic mechanisms used to maintain electrolyte and water balance. Health care providers must familiarize themselves with these alterations to guide treatment of this growing population.
Incidence of injury in professional mixed martial arts competitions.
Mixed Martial Arts (MMA) competitions were introduced in the United States with the first Ultimate Fighting Championship (UFC) in 1993. In 2001, Nevada and New Jersey sanctioned MMA events after requiring a series of rule changes. The purpose of this study was to determine the incidence of injury in professional MMA fighters. Data from all professional MMA events that took place between September 2001 and December 2004 in the state of Nevada were obtained from the Nevada Athletic Commission. Medical and outcome data from events were analyzed based on a pair-matched case-control design. Both conditional and unconditional logistic regression models were used to assess risk factors for injury. A total of 171 MMA matches involving 220 different fighters occurred during the study period. There were a total of 96 injuries to 78 fighters. Of the 171 matches fought, 69 (40.3%) ended with at least one injured fighter. The overall injury rate was 28.6 injuries per 100 fight participations or 12.5 injuries per 100 competitor rounds. Facial laceration was the most common injury accounting for 47.9% of all injuries, followed by hand injury (13.5%), nose injury (10.4%), and eye injury (8.3%). With adjustment for weight and match outcome, older age was associated with significantly increased risk of injury. The most common conclusion to a MMA fight was a technical knockout (TKO) followed by a tap out. The injury rate in MMA competitions is compatible with other combat sports involving striking. The lower knockout rates in MMA compared to boxing may help prevent brain injury in MMA events. Key points: Mixed martial arts (MMA) has changed since the first MMA matches in the United States and now has increased safety regulations and sanctioning. MMA competitions have an overall high rate of injury. There have been no MMA deaths in the United States. The knockout (KO) rate in MMA appears to be lower than the KO rate of boxing matches. MMA must continue to be supervised by properly trained medical professionals and referees to ensure fighter safety in the future.
Capability Model for Open Data: An Empirical Analysis
Creating superior competitiveness is central to an open data organization's survivability in the fast-changing and competitive open data market. In their quest to develop and increase competitiveness and survivability, many of these organizations are moving towards developing open data capabilities. Research-based knowledge on open data capabilities and how they relate to each other remains sparse, however, with most of the open data literature focusing on the social and economic value of open data, not the capabilities required. By exploring the related literature on business and organizational capabilities and linking the findings to the empirical evidence collected through a survey of 49 open data organizations around the world, this study develops an open data capability model. The model that emerged from our deductive research process improves both the theoretical and practical understanding of open data capabilities and the relationships among them required to help increase the competitiveness and survivability of these types of organizations.
Vault: Fast Bootstrapping for Cryptocurrencies
Decentralized cryptocurrencies rely on participants to keep track of the state of the system in order to verify new transactions. As the number of users and transactions grows, this requirement places a significant burden on the users, as they need to download, verify, and store a large amount of data in order to participate. Vault is a new cryptocurrency designed to minimize these storage and bootstrapping costs for participants. Vault builds on Algorand’s proof-of-stake consensus protocol and uses several techniques to achieve its goals. First, Vault decouples the storage of recent transactions from the storage of account balances, which enables Vault to delete old account state. Second, Vault allows sharding state across participants in a way that preserves strong security guarantees. Finally, Vault introduces the notion of stamping certificates that allow a new client to catch up securely and efficiently in a proof-of-stake system without having to verify every single block. Experiments with a prototype implementation of Vault’s data structures shows that Vault reduces the bandwidth cost of joining the network as a full client by 99.7% compared to Bitcoin and 90.5% compared to Ethereum when downloading a ledger containing 500 million transactions.
Soil acidification monitoring in the Netherlands
The last decades of the 20th century are characterized by a vast increase in activities developed and measures taken by the authorities to accommodate the fear of environmental problems that boomed since the late 1960s. The growing environmental awareness and the subsequent incorporation of environmental values and arguments in the social discourse and practice has been multifaceted and complex. Within this diverse context, the present study took shape; a shape that reflects the socio-political aspects tied in with environmental attention in general and environmental research in specific. Essential to this study is the increased need for the authorities to monitor the development of the quality of the environment. Policy makers need monitoring systems to identify problems, to list priorities, and to check whether measures taken have the desired effects on environmental quality. For these purposes not only monitoring systems are needed, also reference values for environmental quality are required to judge the values measured within the monitoring setting. The aim of this thesis is to contribute to the scientific foundation of environmental quality management in general and that of soil quality and soil acidification monitoring specifically.
Functional restoration of elbow extension after spinal-cord injury using a neural network-based synergistic FES controller
Individuals with a C5/C6 spinal-cord injury (SCI) have paralyzed elbow extensors, yet retain weak to strong voluntary control of elbow flexion and some shoulder movements. They lack elbow extension, which is critical during activities of daily living. This research focuses on the functional evaluation of a developed synergistic controller employing remaining voluntary elbow flexor and shoulder electromyography (EMG) to control elbow extension with functional electrical stimulation (FES). Remaining voluntarily controlled upper extremity muscles were used to train an artificial neural network (ANN) to control stimulation of the paralyzed triceps. Surface EMG was collected from SCI subjects while they produced isometric endpoint force vectors of varying magnitude and direction using triceps stimulation levels predicted by a biomechanical model. ANNs were trained with the collected EMG and stimulation levels. We hypothesized that once trained and implemented in real-time, the synergistic controller would provide several functional benefits. We anticipated the synergistic controller would provide a larger range of endpoint force vectors, the ability to grade and maintain forces, the ability to complete a functional overhead reach task, and use less overall stimulation than a constant stimulation scheme.
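The learning step of such a controller can be sketched with a small feed-forward regressor mapping EMG features to a stimulation level; the feature layout, targets and network size below are placeholders, not the EXO-UL7 training setup.

```python
# Sketch: train a small ANN to map voluntary-muscle EMG features to a
# normalised triceps stimulation command.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
emg = rng.random((500, 6))              # e.g., RMS EMG of 6 voluntary muscles
stim = emg @ rng.random(6)              # stand-in for model-derived stim levels
stim = stim / stim.max()                # normalised stimulation command in [0, 1]

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(emg, stim)

new_sample = rng.random((1, 6))
print("predicted stimulation level:", float(ann.predict(new_sample)[0]))
```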
Automated cancer diagnosis based on histopathological images : a systematic survey
In traditional cancer diagnosis, pathologists examine biopsies to make diagnostic assessments largely based on cell morphology and tissue distribution. However, this is subjective and often leads to considerable variability. On the other hand, computational diagnostic tools enable objective judgments by making use of quantitative measures. This paper presents a systematic survey of the computational steps in automated cancer diagnosis based on histopathology. These computational steps are: (1) image preprocessing to determine the focal areas, (2) feature extraction to quantify the properties of these focal areas, and (3) classifying the focal areas as malignant or not, or identifying their malignancy levels. In Step 1, the focal area determination is usually preceded by noise reduction to improve its success. In the case of cellular-level diagnosis, this step also comprises nucleus/cell segmentation. Step 2 defines appropriate representations of the focal areas that provide distinctive objective measures. In Step 3, automated diagnostic systems that operate on quantitative measures are designed. After the design, this step also estimates the accuracy of the system. In this paper, we detail these computational steps, address their challenges, and discuss remedies to overcome the challenges, emphasizing the importance of constituting benchmark data sets. Such benchmark data sets allow comparing different features and system designs and prevent misleading accuracy estimation of the systems, therefore allowing researchers to determine subsets of distinguishing features, devise new features, and improve the success of automated cancer diagnosis.
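A bare-bones rendition of the three computational steps is sketched below with generic intensity features and an SVM; real systems rely on much richer morphology and texture features, and the tile paths and labels are hypothetical.

```python
# Minimal three-step pipeline: preprocessing, feature extraction, classification.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.GaussianBlur(img, (5, 5), 0)              # Step 1: noise reduction

def extract_features(img):
    hist = cv2.calcHist([img], [0], None, [32], [0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    return np.concatenate([hist, [img.mean(), img.std()]])   # Step 2: features

paths = [f"tiles/benign_{i}.png" for i in range(50)] + \
        [f"tiles/malignant_{i}.png" for i in range(50)]       # assumed layout
y = np.array([0] * 50 + [1] * 50)
X = np.array([extract_features(preprocess(p)) for p in paths])

clf = SVC(kernel="rbf")                                       # Step 3: classification
print(cross_val_score(clf, X, y, cv=5).mean())                # accuracy estimate
```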
PERMIT: Network Slicing for Personalized 5G Mobile Telecommunications
5G mobile systems are expected to meet strict requirements beyond the traditional operator use cases. Effectively, to accommodate the needs of new industry segments such as healthcare and manufacturing, 5G systems need to provide elasticity, flexibility, dynamicity, scalability, manageability, agility, and customization, along with different levels of service delivery parameters according to the service requirements. This is currently possible only by running multiple networks on top of the same infrastructure using network function virtualization, thereby sharing development and infrastructure costs between the different networks. In this article, we discuss the need for the deep customization of mobile networks at different granularity levels: per network, per application, per group of users, per individual user, and even per item of user data. The article also assesses the potential of network slicing to provide the appropriate customization and highlights the technology challenges. Finally, a high-level architectural solution is proposed, addressing a massive multi-slice environment.
Automatic Annotation of Daily Activity from Smartphone-Based Multisensory Streams
We present a system for automatic annotation of daily experience from multisensory streams on smartphones. Using smartphones as a platform facilitates the collection of naturalistic daily activity, which is difficult to collect with multiple on-body sensors or arrays of sensors affixed to indoor locations. However, recognizing daily activities in unconstrained settings is more challenging than in controlled environments: 1) the multiple heterogeneous sensors in smartphones are noisier, asynchronous, vary in sampling rates and can have missing data; 2) unconstrained daily activities are continuous, can occur concurrently, and have fuzzy onset and offset boundaries; 3) ground-truth labels obtained from the user's self-report can be erroneous and accurate only at a coarse time scale. To handle these problems, we present in this paper a flexible framework for incorporating heterogeneous sensory modalities combined with state-of-the-art classifiers for sequence labeling. We evaluate the system with real-life data containing 11721 minutes of multisensory recordings, and demonstrate the accuracy and efficiency of the proposed system for practical lifelogging applications.
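One practical piece of such a system, aligning asynchronous sensor streams onto a common timeline before sequence labeling, can be sketched as follows; the column names, 1-minute window and classifier are assumptions rather than the paper's configuration.

```python
# Align asynchronous smartphone sensor streams onto 1-minute windows, then classify.
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
t_acc = pd.date_range("2024-01-01", periods=6000, freq="s")    # 1 Hz accelerometer
acc = pd.DataFrame({"acc_mag": rng.random(6000)}, index=t_acc)
t_gps = pd.date_range("2024-01-01", periods=100, freq="min")   # sparse GPS speed
gps = pd.DataFrame({"speed": rng.random(100)}, index=t_gps)

# Resample both streams to 1-minute windows and forward-fill missing samples.
acc_f = acc.resample("1min").agg(["mean", "std"])
acc_f.columns = ["acc_mean", "acc_std"]
features = acc_f.join(gps.resample("1min").mean()).ffill().dropna()

labels = rng.integers(0, 3, len(features))   # stand-in for coarse self-report labels
clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
print(clf.score(features, labels))
```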
Dynamic Fungal Cell Wall Architecture in Stress Adaptation and Immune Evasion.
Deadly infections from opportunistic fungi have risen in frequency, largely because of the at-risk immunocompromised population created by advances in modern medicine and the HIV/AIDS pandemic. This review focuses on dynamics of the fungal polysaccharide cell wall, which plays an outsized role in fungal pathogenesis and therapy because it acts as both an environmental barrier and as the major interface with the host immune system. Human fungal pathogens use architectural strategies to mask epitopes from the host and prevent immune surveillance, and recent work elucidates how biotic and abiotic stresses present during infection can either block or enhance masking. The signaling components implicated in regulating fungal immune recognition can teach us how cell wall dynamics are controlled, and represent potential targets for interventions designed to boost or dampen immunity.
Uncertainty in Deep Learning
Deep learning has attracted tremendous attention from researchers in various fields of information engineering such as AI, computer vision, and language processing [Kalchbrenner and Blunsom, 2013; Krizhevsky et al., 2012; Mnih et al., 2013], but also from more traditional sciences such as physics, biology, and manufacturing [Anjos et al., 2015; Baldi et al., 2014; Bergmann et al., 2014]. Neural networks, image processing tools such as convolutional neural networks, sequence processing models such as recurrent neural networks, and regularisation tools such as dropout, are used extensively. However, fields such as physics, biology, and manufacturing are ones in which representing model uncertainty is of crucial importance [Ghahramani, 2015; Krzywinski and Altman, 2013]. With the recent shift in many of these fields towards the use of Bayesian uncertainty [Herzog and Ostwald, 2013; Nuzzo, 2014; Trafimow and Marks, 2015], new needs arise from deep learning. In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation. In the first part of this thesis we develop the theory for such tools, providing applications and illustrative examples. We tie approximate inference in Bayesian models to dropout and other stochastic regularisation techniques, and assess the approximations empirically. We give example applications arising from this connection between modern deep learning and Bayesian modelling such as active learning of image data and data efficient deep reinforcement learning. We further demonstrate the method’s practicality through a survey of recent applications making use of the suggested tools in language applications, medical diagnostics, bioinformatics, image processing, and autonomous driving. In the second part of the thesis we explore its theoretical implications, and the insights stemming from the link between Bayesian modelling and deep learning. We discuss what determines model uncertainty properties, analyse the approximate inference analytically in the linear case, and theoretically examine various priors such as spike and slab priors.
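The practical recipe associated with this line of work, Monte Carlo dropout, can be sketched in a few lines of PyTorch: keep dropout active at prediction time and read the spread of repeated stochastic passes as an approximate predictive uncertainty. The architecture and data below are placeholders, and in practice the network would be trained first.

```python
# Monte Carlo dropout: repeated stochastic forward passes give mean and spread.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

x = torch.linspace(-3, 3, 100).unsqueeze(1)

def mc_predict(model, x, samples=100):
    model.train()                  # keeps dropout stochastic at prediction time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(samples)])
    return preds.mean(0), preds.std(0)    # predictive mean and uncertainty proxy

mean, std = mc_predict(net, x)
print(mean.shape, std.shape)       # torch.Size([100, 1]) twice
```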
Neural Random Forests
Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have less parameters to tune than standard networks, and less restrictions on the geometry of the decision boundaries. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems. Index Terms — Random forests, neural networks, ensemble methods, randomization, sparse networks. 2010 Mathematics Subject Classification: 62G08, 62G20, 68T05.
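The core construction can be illustrated by turning the internal splits of a fitted tree into a first hidden layer, one unit per split with the bias set to minus the threshold, so the sign pattern of these units encodes the leaf an input reaches. The sketch below shows only this first-layer step, not the full two-hidden-layer reformulation or the hybrid training procedures.

```python
# First-layer construction: one hidden unit per internal split of a fitted tree.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

internal = np.where(t.children_left != -1)[0]     # indices of split nodes
W = np.zeros((len(internal), X.shape[1]))
b = np.zeros(len(internal))
for row, node in enumerate(internal):
    W[row, t.feature[node]] = 1.0                 # pick out the split feature
    b[row] = -t.threshold[node]                   # unit fires iff x_f > threshold

H = np.tanh(10.0 * (X @ W.T + b))                 # smooth surrogate for sign(.)
print("hidden-unit sign pattern of the first sample:", np.sign(H[0]).astype(int))
print("leaf reached by the tree for that sample:   ", tree.apply(X[:1])[0])
```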
Trial by Dutch laboratories for evaluation of non‐invasive prenatal testing. Part II—women's perspectives†
OBJECTIVE To evaluate preferences and decision-making among high-risk pregnant women offered a choice between Non-Invasive Prenatal Testing (NIPT), invasive testing or no further testing. METHODS Nationwide implementation study (TRIDENT) offering NIPT as contingent screening test for women at increased risk for fetal aneuploidy based on first-trimester combined testing (>1:200) or medical history. A questionnaire was completed after counseling assessing knowledge, attitudes and participation following the Multidimensional Measure of Informed Choice. RESULTS A total of 1091/1253 (87%) women completed the questionnaire. Of these, 1053 (96.5%) underwent NIPT, 37 (3.4%) invasive testing and 1 (0.1%) declined testing. 91.7% preferred NIPT because of test safety. Overall, 77.9% made an informed choice, 89.8% had sufficient knowledge and 90.5% had positive attitudes towards NIPT. Women with intermediate (odds ratio (OR) = 3.51[1.70-7.22], p < 0.001) or high educational level (OR = 4.36[2.22-8.54], p < 0.001) and women with adequate health literacy (OR = 2.60[1.36-4.95], p = 0.004) were more likely to make an informed choice. Informed choice was associated with less decisional conflict and less anxiety (p < 0.001). Intention to terminate the pregnancy for Down syndrome was higher among women undergoing invasive testing (86.5%) compared to those undergoing NIPT (58.4%) (p < 0.001). CONCLUSIONS The majority of women had sufficient knowledge and made an informed choice. Continuous attention for counseling is required, especially for low-educated and less health-literate women. © 2016 The Authors. Prenatal Diagnosis published by John Wiley & Sons, Ltd.
Ettore Majorana and his heritage seventy years later
Physicists working in several areas of research know the name of Ettore Majorana quite well, since it is currently associated with fundamental concepts such as Majorana neutrinos in particle physics and cosmology or Majorana fermions in condensed matter physics. Probably very little is known, however, about other substantial contributions of that ingenious scholar, and even less about his personal background. For non-specialists, instead, the name of Ettore Majorana is usually intimately related to the fact that he disappeared rather mysteriously on March 26, 1938, just seventy years ago, and was never seen again. The life and work of this Italian scientist are the subject of the present review, which also offers a summary of the main results achieved in recent times by historical and scientific research on his work.
Annotation of the Corymbia terpene synthase gene family shows broad conservation but dynamic evolution of physical clusters relative to Eucalyptus
Terpenes are economically and ecologically important phytochemicals. Their synthesis is controlled by the terpene synthase (TPS) gene family, which is highly diversified throughout the plant kingdom. The plant family Myrtaceae are characterised by especially high terpene concentrations, and considerable variation in terpene profiles. Many Myrtaceae are grown commercially for terpene products including the eucalypts Corymbia and Eucalyptus. Eucalyptus grandis has the largest TPS gene family of plants currently sequenced, which is largely conserved in the closely related E. globulus. However, the TPS gene family has been well studied only in these two eucalypt species. The recent assembly of two Corymbia citriodora subsp. variegata genomes presents an opportunity to examine the conservation of this important gene family across more divergent eucalypt lineages. Manual annotation of the TPS gene family in C. citriodora subsp. variegata revealed a similar overall number, and relative subfamily representation, to that previously reported in E. grandis and E. globulus. Many of the TPS genes were in physical clusters that varied considerably between Eucalyptus and Corymbia, with several instances of translocation, expansion/contraction and loss. Notably, there was greater conservation in the subfamilies involved in primary metabolism than those involved in secondary metabolism, likely reflecting different selective constraints. The variation in cluster size within subfamilies and the broad conservation between the eucalypts in the face of this variation are discussed, highlighting the potential contribution of selection, concerted evolution and stochastic processes. These findings provide the foundation to better understand terpene evolution within the ecologically and economically important Myrtaceae.
Longitudinal split tears of the ulnotriquetral ligament.
Unlike tears of the peripheral triangular fibrocartilage or avulsions of the distal radioulnar ligaments, longitudinal split tears of the ulnotriquetral (UT) ligament do not cause any instability to the distal radioulnar joint or the ulnocarpal articulation. It is mainly a pain syndrome that can be incapacitating. However, because the UT ligament arises from the palmar radioulnar ligament of the triangular fibrocartilage complex (TFCC), it is by definition, an injury of the TFCC. The purpose of this article is to describe the cause of chronic ulnar wrist pain arising from a longitudinal split tear of the UT ligament.
Stochastic Modeling of Hybrid Cache Systems
In recent years, there is an increasing demand for big-memory systems to perform large-scale data analytics. Since DRAM memories are expensive, some researchers are suggesting the use of other memory technologies such as non-volatile memory (NVM) to build large-memory computing systems. However, whether NVM technology can be a viable alternative (either economically or technically) to DRAM remains an open question. To answer this question, it is important to consider how to design a memory system from a "system perspective", that is, incorporating the different performance characteristics and price ratios of hybrid memory devices. This paper presents an analytical model of a "hybrid page cache system" to understand the diverse design space and performance impact of a hybrid cache system. We consider (1) various architectural choices, (2) design strategies, and (3) configurations of different memory devices. Using this model, we provide guidelines on how to design a hybrid page cache to reach a good trade-off between high system throughput (in I/O per second, or IOPS) and fast cache reactivity, which is defined by the time to fill the cache. We also show how one can configure the DRAM capacity and NVM capacity under a fixed budget. We pick PCM as an example of NVM and conduct numerical analysis. Our analysis indicates that incorporating PCM in a page cache system significantly improves the system performance, and it also shows a larger benefit to allocating more PCM in the page cache in some cases. Besides, for the common setting of the performance-price ratio of PCM, the "flat architecture" is the better choice, but the "layered architecture" outperforms it if PCM write performance can be significantly improved in the future.
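A back-of-the-envelope version of the trade-off the model explores is sketched below: split a fixed budget between DRAM and NVM, estimate tiered hit ratios with a simple concave curve, and convert average latency into IOPS. All prices, latencies and the hit-ratio curve are invented illustrative numbers, not the paper's analytical model.

```python
# Toy budget split between DRAM and NVM tiers and the resulting average IOPS.
import numpy as np

BUDGET = 1000.0                       # $ (hypothetical)
PRICE = {"dram": 8.0, "nvm": 2.0}     # $/GB (hypothetical)
LAT = {"dram": 0.1e-6, "nvm": 1.0e-6, "disk": 100e-6}   # seconds per access

def hit_ratio(capacity_gb, working_set_gb=500.0):
    return 1.0 - np.exp(-capacity_gb / working_set_gb)  # toy concave curve

def avg_iops(dram_fraction):
    dram_gb = BUDGET * dram_fraction / PRICE["dram"]
    nvm_gb = BUDGET * (1 - dram_fraction) / PRICE["nvm"]
    h1 = hit_ratio(dram_gb)                              # hit in DRAM tier
    h2 = (1 - h1) * hit_ratio(nvm_gb)                    # miss DRAM, hit NVM
    latency = h1 * LAT["dram"] + h2 * LAT["nvm"] + (1 - h1 - h2) * LAT["disk"]
    return 1.0 / latency

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"DRAM budget share {f:.2f}: ~{avg_iops(f):,.0f} IOPS")
```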
AFEW-VA database for valence and arousal estimation in-the-wild
Continuous dimensional models of human affect, such as those based on valence and arousal, have been shown to be more accurate in describing a broad range of spontaneous, everyday emotions than the more traditional models of discrete stereotypical emotion categories (e.g. happiness, surprise). However, most prior work on estimating valence and arousal considered only laboratory settings and acted data. It is unclear whether the findings of these studies also hold when the methodologies proposed in these works are tested on data collected in-the-wild. In this paper we investigate this. We propose a new dataset of highly accurate per-frame annotations of valence and arousal for 600 challenging video clips extracted from feature films (also used in part for the AFEW dataset). For each video clip, we further provide per-frame annotations of 68 facial landmarks. We subsequently evaluate a number of common baseline and state-of-the-art methods on both a commonly used laboratory recording dataset (Semaine database) and the newly proposed recording set (AFEW-VA). Our results show that geometric features perform well independently of the settings. However, as expected, methods that perform well on constrained data do not necessarily generalise to uncontrolled data and vice-versa. © 2017 Elsevier B.V. All rights reserved.
The Faults in Our Pi Stars: Security Issues and Open Challenges in Deep Reinforcement Learning
Since the inception of Deep Reinforcement Learning (DRL) algorithms, there has been a growing interest in both research and industrial communities in the promising potentials of this paradigm. The list of current and envisioned applications of deep RL ranges from autonomous navigation and robotics to control applications in the critical infrastructure, air traffic control, defense technologies, and cybersecurity. While the landscape of opportunities and the advantages of deep RL algorithms are justifiably vast, the security risks and issues in such algorithms remain largely unexplored. To facilitate and motivate further research on these critical challenges, this paper presents a foundational treatment of the security problem in DRL. We formulate the security requirements of DRL, and provide a high-level threat model through the classification and identification of vulnerabilities, attack vectors, and adversarial capabilities. Furthermore, we present a review of current literature on security of deep RL from both offensive and defensive perspectives. Lastly, we enumerate critical research venues and open problems in mitigation and prevention of intentional attacks against deep RL as a roadmap for further research in this area.
Face swapping: automatically replacing faces in photographs
In this paper, we present a complete system for automatic face replacement in images. Our system uses a large library of face images created automatically by downloading images from the internet, extracting faces using face detection software, and aligning each extracted face to a common coordinate system. This library is constructed off-line, once, and can be efficiently accessed during face replacement. Our replacement algorithm has three main stages. First, given an input image, we detect all faces that are present, align them to the coordinate system used by our face library, and select candidate face images from our face library that are similar to the input face in appearance and pose. Second, we adjust the pose, lighting, and color of the candidate face images to match the appearance of those in the input image, and seamlessly blend in the results. Third, we rank the blended candidate replacements by computing a match distance over the overlap region. Our approach requires no 3D model, is fully automatic, and generates highly plausible results across a wide range of skin tones, lighting conditions, and viewpoints. We show how our approach can be used for a variety of applications including face de-identification and the creation of appealing group photographs from a set of images. We conclude with a user study that validates the high quality of our replacement results, and a discussion on the current limitations of our system.
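A tiny sketch of the final ranking step described above, scoring each blended candidate by a match distance over the overlap region. The mean-squared-difference metric and the toy data are assumptions; the face library, alignment, and blending stages are outside the scope of this snippet.

    # Rank blended candidates by a match distance computed only over the overlap region.
    import numpy as np

    def match_distance(blended, original, overlap_mask):
        """Mean squared colour difference between a blended result and the input image inside the mask."""
        diff = (blended.astype(float) - original.astype(float)) ** 2
        return float(diff[overlap_mask].mean())

    def rank_candidates(blended_candidates, original, overlap_mask):
        scored = [(match_distance(b, original, overlap_mask), i) for i, b in enumerate(blended_candidates)]
        return [i for _, i in sorted(scored)]           # best (lowest distance) first

    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, (100, 100, 3), dtype=np.uint8)
    candidates = [np.clip(original + rng.integers(-n, n + 1, original.shape), 0, 255).astype(np.uint8)
                  for n in (5, 30, 15)]                  # toy "blends" with increasing mismatch
    mask = np.zeros((100, 100), dtype=bool); mask[30:70, 30:70] = True
    print(rank_candidates(candidates, original, mask))   # expected order: [0, 2, 1]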
Salmeterol xinafoate in asthmatic patients under consideration for maintenance oral corticosteroid therapy. UK Study Group.
In severe chronic asthma, long-term oral steroids may be necessary to control symptoms. In patients in whom such treatment was under consideration, the efficacy and safety of salmeterol xinafoate 100 micrograms b.i.d. was investigated in a randomized, double-blind, placebo-controlled parallel-group, multicentre study. One hundred and nineteen chronic symptomatic asthmatics were randomized to receive either salmeterol, 100 micrograms b.i.d. (n = 55; baseline % predicted morning peak expiratory flow (PEF) 59%; forced expiratory volume in one second (FEV1) 66%) or placebo (n = 64; baseline % predicted morning PEF 63%; FEV1 66%) both via the Diskhaler. Morning and evening PEF and asthma symptoms were recorded in daily record booklets by the patient over a 12 week period. A significant improvement in morning PEF was achieved after 1 month in the salmeterol treated group; this persisted throughout the treatment period (estimated treatment difference 22 L.min-1). There was a significant increase in the proportion of symptom-free nights experienced by the salmeterol treated group (33 (SD 32) %) compared with placebo (13 (26) %), and a significant decrease in daily use of relief medication (mean decrease 5.1 (4.7) doses per day with salmeterol, 2.5 (4.0) doses with placebo). Both treatments were well-tolerated, with no evidence of any difference in the side-effects associated with beta 2-agonists. In conclusion, the addition of salmeterol (100 micrograms daily) to the existing treatment of chronic asthmatics under consideration for maintenance oral corticosteroid therapy is well-tolerated, improves lung function and provides additional symptom control.
SCRUM Development Process
The stated, accepted philosophy for systems development is that the development process is a well understood approach that can be planned, estimated, and successfully completed. This has proven incorrect in practice. SCRUM assumes that the systems development process is an unpredictable, complicated process that can only be roughly described as an overall progression. SCRUM defines the systems development process as a loose set of activities that combines known, workable tools and techniques with the best that a development team can devise to build systems. Since these activities are loose, controls to manage the process and inherent risk are used. SCRUM is an enhancement of the commonly used iterative/incremental object-oriented development cycle.
Addressing Issues Impacting Advanced Nursing Practice Worldwide.
Advanced practice nursing roles are developing globally, and opportunities for advanced practice nursing are expanding worldwide due to the need for expert nursing care at an advanced level of practice. Yet it is well recognized that barriers prevent advanced practice registered nurses (APRNs) from practicing to the full extent of their education and training. Addressing these barriers worldwide, and ensuring that APRNs can practice to the full extent of their education and training, can help to promote optimal role fulfillment as well as assessment of the impact of the APRN role.
Social media meets hotel revenue management: Opportunities, issues and unanswered questions
Hotel companies are struggling to keep up with the rapid consumer adoption of social media. Although many companies have begun to develop social media programs, the industry has yet to fully explore the potential of this emerging data and communication resource. The revenue management department, as it evolves from tactical inventory management to a more expansive role across the organization, is poised to be an early adopter of the opportunities afforded by social media. We propose a framework for evaluating social media-related revenue management opportunities, discuss the issues associated with leveraging these opportunities and propose a roadmap for future research in this area. Journal of Revenue and Pricing Management (2011) 10, 293–305. doi:10.1057/rpm.2011.12; published online 6 May 2011
Table Detection in Noisy Off-line Handwritten Documents
Table detection can be a valuable step in the analysis of unstructured documents. Although much work has been conducted in the domain of machine-print including books, scientific papers, etc., little has been done to address the case of handwritten inputs. In this paper, we study table detection in scanned handwritten documents subject to challenging artifacts and noise. First, we separate text components (machine-print, handwriting) from the rest of the page using an SVM classifier. We then employ a correlation-based approach to measure the coherence between adjacent text lines which may be part of the same table, solving the resulting page decomposition problem using dynamic programming. A report of preliminary results from ongoing experiments concludes the paper.
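A simplified sketch of the coherence-plus-dynamic-programming idea: adjacent text lines are scored by the correlation of their horizontal projection profiles, and a two-state (table/text) DP labels the line sequence. The two-state formulation, scores, and penalty are this sketch's assumptions, not the paper's exact formulation.

    # Simplified coherence scoring plus a two-state Viterbi-style DP over the line sequence.
    import numpy as np

    def coherence(profile_a, profile_b):
        """Normalized correlation between horizontal projection profiles of two neighbouring lines."""
        a = (profile_a - profile_a.mean()) / (profile_a.std() + 1e-9)
        b = (profile_b - profile_b.mean()) / (profile_b.std() + 1e-9)
        return float(np.mean(a * b))

    def label_lines(profiles, switch_penalty=0.5):
        """Return a 'table'/'text' label per line using a two-state DP over coherence scores."""
        n = len(profiles)
        c = [coherence(profiles[i], profiles[i + 1]) for i in range(n - 1)] + [0.0]
        score = np.full((2, n), -np.inf)        # score[state][i], 0 = text, 1 = table
        back = np.zeros((2, n), dtype=int)
        score[0, 0], score[1, 0] = 0.0, c[0]
        for i in range(1, n):
            emit = (-c[i], c[i])                # table state rewarded by coherence, text state penalised
            for s in (0, 1):
                stay = score[s, i - 1]
                switch = score[1 - s, i - 1] - switch_penalty
                back[s, i] = s if stay >= switch else 1 - s
                score[s, i] = max(stay, switch) + emit[s]
        labels, s = [], int(np.argmax(score[:, -1]))
        for i in range(n - 1, -1, -1):          # backtrack the best state sequence
            labels.append("table" if s == 1 else "text")
            s = back[s, i]
        return labels[::-1]

    rng = np.random.default_rng(1)
    page = [rng.random(200) for _ in range(12)]   # placeholder projection profiles
    print(label_lines(page))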
Practical Issues in Automatic Documentation Generation
PLANDoc, a system under joint development by Columbia and Bellcore, documents the activity of planning engineers as they study telephone routes. It takes as input a trace of the engineer's interaction with a network planning tool and produces a 1-2 page summary. In this paper, we describe the user needs analysis we performed and how it influenced the development of PLANDoc. In particular, we show how it pinpointed the need for a sublanguage specification, allowing us to identify input messages and to characterize the different sentence paraphrases for realizing them. We focus on the systematic use of conjunction in combination with paraphrase that we developed for PLANDoc, which allows for the generation of summaries that are both concise (avoiding repetition of similar information) and fluent (avoiding repetition of similar phrasing).
The Bayesian brain: Phantom percepts resolve sensory uncertainty
Phantom perceptions arise almost universally in people who sustain sensory deafferentation, and in multiple sensory domains. The question arises why the brain creates these false percepts in the absence of an external stimulus. The model proposed answers this question by stating that our brain works in a Bayesian way, and that its main function is to reduce environmental uncertainty, based on the free-energy principle, which has been proposed as a universal principle governing adaptive brain function and structure. The Bayesian brain can be conceptualized as a probability machine that constantly makes predictions about the world and then updates them based on what it receives from the senses. The free-energy principle states that the brain must minimize its Shannonian free-energy, i.e. must reduce, by the process of perception, its uncertainty (its prediction errors) about its environment. As completely predictable stimuli do not reduce uncertainty, they do not warrant conscious processing. Unpredictable things, on the other hand, are not to be ignored, because it is crucial to experience them to update our understanding of the environment. Deafferentation leads to topographically restricted prediction errors based on temporal or spatial incongruity. This leads to an increase in topographically restricted uncertainty, which should be adaptively addressed by plastic repair mechanisms in the respective sensory cortex or via (para)hippocampal involvement. Neuroanatomically, filling in as a compensation for missing information also activates the anterior cingulate and insula, areas also involved in salience and stress and essential for stimulus detection. Associated with sensory cortex hyperactivity and decreased inhibition or map plasticity, this results in the perception of the false information created by the deafferented sensory areas, as a way to reduce the increased topographically restricted uncertainty associated with the deafferentation. In conclusion, the Bayesian updating of knowledge via active sensory exploration of the environment, driven by the Shannonian free-energy principle, provides an explanation for the generation of phantom percepts as a way to reduce uncertainty and make sense of the world.
Study and Mitigation of Origin Stripping Vulnerabilities in Hybrid-postMessage Enabled Mobile Applications
postMessage is popular in HTML5 based web apps to allow the communication between different origins. With the increasing popularity of the embedded browser (i.e., WebView) in mobile apps (i.e., hybrid apps), postMessage has found utility in these apps. However, different from web apps, hybrid apps have a unique requirement that their native code (e.g., Java for Android) also needs to exchange messages with web code loaded in WebView. To bridge the gap, developers typically extend postMessage by treating the native context as a new frame, and allowing the communication between the new frame and the web frames. We term such extended postMessage "hybrid postMessage" in this paper. We find that hybrid postMessage introduces new critical security flaws: all origin information of a message is not respected or even lost during the message delivery in hybrid postMessage. If adversaries inject malicious code into WebView, the malicious code may leverage the flaws to passively monitor messages that may contain sensitive information, or actively send messages to arbitrary message receivers and access their internal functionalities and data. We term the novel security issue caused by hybrid postMessage "Origin Stripping Vulnerability" (OSV). In this paper, our contributions are fourfold. First, we conduct the first systematic study on OSV. Second, we propose a lightweight detection tool against OSV, called OSV-Hunter. Third, we evaluate OSV-Hunter using a set of popular apps. We found that 74 apps implemented hybrid postMessage, and all these apps suffered from OSV, which might be exploited by adversaries to perform remote real-time microphone monitoring, data race, internal data manipulation, denial of service (DoS) attacks and so on. Several popular development frameworks, libraries (such as the Facebook React Native framework, and the Google cloud print library) and apps (such as Adobe Reader and WPS office) are impacted. Lastly, to mitigate OSV from the root, we design and implement three new postMessage APIs, called OSV-Free. Our evaluation shows that OSV-Free is secure and fast, and it is generic and resilient to the notorious Android fragmentation problem. We also demonstrate that OSV-Free is easy to use, by applying OSV-Free to harden the complex "Facebook React Native" framework. OSV-Free is open source, and its source code and more implementation and evaluation details are available online.
A Social Curiosity Inspired Recommendation Model to Improve Precision, Coverage and Diversity
With the prevalence of social networks, social recommendation is rapidly gaining popularity. Currently, social information has mainly been utilized for enhancing rating prediction accuracy, which may not be enough to satisfy user needs. Items with high prediction accuracy tend to be the ones that users are familiar with and may not interest them to explore. In this paper, we take a psychologically inspired view to recommend items that will interest users based on the theory of social curiosity and study its impact on important dimensions of recommender systems. We propose a social curiosity inspired recommendation model which combines both user preferences and user curiosity. The proposed recommendation model is evaluated using large scale real world datasets and the experimental results demonstrate that the inclusion of social curiosity significantly improves recommendation precision, coverage and diversity.
Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales
We address the rating-inference problem, wherein rather than simply deciding whether a review is "thumbs up" or "thumbs down", as in previous sentiment analysis work, one must determine an author's evaluation with respect to a multi-point scale (e.g., one to five "stars"). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, "three stars" is intuitively closer to "four stars" than to "one star". We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier's output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem. Publication info: Proceedings of the ACL, 2005.
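A simplified sketch of the metric-labeling intuition behind the meta-algorithm: the base classifier's per-item scores are combined with a penalty for assigning distant labels (on the rating scale) to similar items, and labels are updated by iterative relaxation. This approximation and the placeholder data are this sketch's assumptions, not the paper's exact optimization procedure.

    # Iterative relaxation of a metric-labeling-style objective:
    # cost(item i, label l) = -base_score[i, l] + alpha * sum_j similarity[i, j] * |l - label[j]|
    import numpy as np

    def metric_label(base_scores, similarity, alpha=0.5, n_iters=10):
        """base_scores: (n_items, n_labels); similarity: (n_items, n_items) in [0, 1]."""
        n_items, n_labels = base_scores.shape
        labels = base_scores.argmax(axis=1)
        ratings = np.arange(n_labels)
        for _ in range(n_iters):
            for i in range(n_items):
                neighbour_pen = (similarity[i][:, None] *
                                 np.abs(ratings[None, :] - labels[:, None])).sum(axis=0)
                cost = -base_scores[i] + alpha * neighbour_pen
                labels[i] = int(cost.argmin())
        return labels

    rng = np.random.default_rng(0)
    scores = rng.random((50, 5))                    # placeholder classifier scores, 5 "stars"
    sim = rng.random((50, 50)); sim = (sim + sim.T) / 2
    np.fill_diagonal(sim, 0.0)                      # no self-influence
    print(metric_label(scores, sim)[:10])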
The Unintentional Procrastination Scale
Procrastination refers to the delay or postponement of a task or decision and is often conceptualised as a failure of self-regulation. Recent research has suggested that procrastination could be delineated into two domains: intentional and unintentional. In this two-study paper, we aimed to develop a measure of unintentional procrastination (named the Unintentional Procrastination Scale or the 'UPS') and test whether this would be a stronger marker of psychopathology than intentional and general procrastination. In Study 1, a community sample of 139 participants completed a questionnaire that consisted of several items pertaining to unintentional procrastination that had been derived from theory, previous research, and clinical experience. Responses were subjected to a principal components analysis and assessment of internal consistency. In Study 2, a community sample of 155 participants completed the newly developed scale, along with measures of general and intentional procrastination, metacognitions about procrastination, and negative affect. Data from the UPS were subjected to confirmatory factor analysis and revised accordingly. The UPS was then validated using correlation and regression analyses. The six-item UPS possesses construct and divergent validity and good internal consistency. The UPS appears to be a stronger marker of psychopathology than the pre-existing measures of procrastination used in this study. Results from the regression models suggest that both negative affect and metacognitions about procrastination differentiate between general, intentional, and unintentional procrastination. The UPS is brief, has good psychometric properties, and has strong associations with negative affect, suggesting it has value as a research and clinical tool.
Patient safety in cancer care: a time for action.
Retrospective chart audit studies of acute care in several countries have shown that between 3% and 16% of patients experience one or more harmful adverse events while hospitalized and that about half of these events are preventable (1). These studies indicate that medication treatment is an area of high risk. We still know relatively little about the incidence of adverse events in nonacute settings or for specific patient populations. In this issue of the Journal, Riechelmann et al. (2) begin to address this issue for cancer patients attending an outpatient oncology clinic. They surveyed patients concerning the medication that they had taken in the previous 4 weeks. From the information they received, they determined that 27% of the patients had the potential for one or more possibly serious drug interactions. The majority of these possible drug interactions involved not antineoplastic agents but rather drugs that were being administered for noncancer comorbidities. Although an incidence of 27% seems high, the results are consistent with what is known about the potential for adverse drug events in patients who have contact with several doctors. Indeed, Tamblyn et al. (3) concluded that "a single primary care physician and a single dispensing pharmacy may be 'protective' factors in preventing potentially inappropriate drug combinations." This observation has been reinforced by Blendon et al. (4), who in 2002 carried out a random survey of more than 3800 adults in the United States, Australia, Canada, New Zealand, and the United Kingdom whose health had been defined as less than optimal based on their responses to a questionnaire. More than one-third of the 66% who regularly took one or more medications had not had their medications reviewed by the physician they relied on the most for the previous 2 years. Further, 15.8% of those seeing one or two doctors reported either a medication or medical error in the last 2 years, compared with 28.8% of those seeing three or more physicians.
Late recanalisation beyond 24 hours is associated with worse outcome: an observational study
We evaluated the rate of late recanalisation beyond 24 h after intravenous thrombolysis (IVT) and its relationship with haemorrhagic transformation and outcome. We reviewed prospectively collected clinical and imaging data from acute ischaemic stroke patients with distal internal carotid artery or proximal middle cerebral artery occlusion who underwent angiography on admission, 24 h and 1 week after IVT. Patients were trichotomised according to vascular status: timely recanalisation (<24 h), late recanalisation (24 h-7 days), and no recanalisation. Non-invasive angiography revealed timely recanalisation in 52 (50.0 %) patients, late recanalisation in 25 (24.0 %) patients, and no recanalisation in 27 (26.0 %) patients. Pre-existing atrial fibrillation was associated with the occurrence of late recanalisation (odds ratio 6.674; 95 % CI: 1.197 to 37.209; p = 0.030). In patients without timely recanalisation, shift analysis indicated that late recanalisation led to a worse modified Rankin Scale score (odds ratio 6.787; 95 % CI: 2.094 to 21.978; p = 0.001). About half of all patients without recanalisation by 24 h after IVT may develop late recanalisation within 1 week, along with higher mRS scores by 3 months. Pre-existing atrial fibrillation is an independent predictor for late recanalisation. • About half of patients may develop late recanalisation within 1 week. • Pre-existing atrial fibrillation was associated with the occurrence of late recanalisation. • Late recanalisation led to a higher mRS score than no recanalisation.
Active tamoxifen metabolite plasma concentrations after coadministration of tamoxifen and the selective serotonin reuptake inhibitor paroxetine.
BACKGROUND Tamoxifen, a selective estrogen receptor modulator (SERM), is converted to 4-hydroxy-tamoxifen and other active metabolites by cytochrome P450 (CYP) enzymes. Selective serotonin reuptake inhibitors (SSRIs), which are often prescribed to alleviate tamoxifen-associated hot flashes, can inhibit CYPs. In a prospective clinical trial, we tested the effects of coadministration of tamoxifen and the SSRI paroxetine, an inhibitor of CYP2D6, on tamoxifen metabolism. METHODS Tamoxifen and its metabolites were measured in the plasma of 12 women of known CYP2D6 genotype with breast cancer who were taking adjuvant tamoxifen before and after 4 weeks of coadministered paroxetine. We assessed the inhibitory activity of pure tamoxifen metabolites in an estradiol-stimulated MCF7 cell proliferation assay. To determine which CYP isoforms were involved in the metabolism of tamoxifen to specific metabolites, we used CYP isoform-specific inhibitors. All statistical tests were two-sided. RESULTS We separated, purified, and identified the metabolite 4-hydroxy-N-desmethyl-tamoxifen, which we named endoxifen. Plasma concentrations of endoxifen statistically significantly decreased from a mean of 12.4 ng/mL before paroxetine coadministration to 5.5 ng/mL afterward (difference = 6.9 ng/mL, 95% confidence interval [CI] = 2.7 to 11.2 ng/mL) (P =.004). Endoxifen concentrations decreased by 64% (95% CI = 39% to 89%) in women with a wild-type CYP2D6 genotype but by only 24% (95% CI = 23% to 71%) in women with a variant CYP2D6 genotype (P =.03). Endoxifen and 4-hydroxy-tamoxifen inhibited estradiol-stimulated MCF7 cell proliferation with equal potency. In vitro, troleandomycin, an inhibitor of CYP3A4, inhibited the demethylation of tamoxifen to N-desmethyl-tamoxifen by 78% (95% CI = 65% to 91%), and quinidine, an inhibitor of CYP2D6, reduced the subsequent hydroxylation of N-desmethyl-tamoxifen to endoxifen by 79% (95% CI = 50% to 108%). CONCLUSIONS Endoxifen is an active tamoxifen metabolite that is generated via CYP3A4-mediated N-demethylation and CYP2D6-mediated hydroxylation. Coadministration of paroxetine decreased the plasma concentration of endoxifen. Our data suggest that CYP2D6 genotype and drug interactions should be considered in women treated with tamoxifen.
Amplitude calibration of quartz tuning fork (QTF) force sensor with an atomic force microscope
Amplitude calibration of the quartz tuning fork (QTF) sensor includes the measurement of the sensitivity factor (αTF). We propose AFM-based methods (cantilever tracking and z-servo tracking of the QTF's vibration amplitude) to determine the sensitivity factor of the QTF. The QTF is mounted on the xyz-scanner of the AFM, and a soft AFM probe is approached onto the apex of a tine of the QTF by driving the z-servo, using the normal deflection voltage (Vtb) of the position sensitive detector (PSD) as the feedback signal. Once the tip contacts the tine, the servo is switched off. The QTF is electrically excited with a sinusoidal signal from the OC4 (Nanonis), and the amplitude of the QTF's output at the transimpedance amplifier (Vtf) and the amplitude of Vtb (Vp) are measured by individual lock-in amplifiers that are internally synchronized to the phase of the QTF's excitation signal. Before the measurements, the optical lever is calibrated. By relating the two voltages (Vp and Vtf), the sensitivity factor of the QTF (αTF) is determined. In the second approach, after the tip contacts the tine, the z-servo is switched off first; then the feedback signal is switched to Vp, and the frequency sweep of the QTF, Vtb, and the z-servo are started instantaneously. To keep Vp at the set-point, the feedback control moves the z-servo to track the vibration amplitude of the QTF, so the distance traveled by the z-servo (Δz) during the sweep equals the fork's vibration amplitude (ΔxTF). αTF is then determined by relating Δz and Vtf. Both approaches can be applied non-destructively for QTF sensor calibration. AFM imaging of the AFM calibration grating TGZ1 (from NT-MDT, Russia) has been performed with a calibrated QTF sensor.
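A tiny numeric illustration of the second (z-servo tracking) approach: since the z-servo travel during the sweep equals the fork's vibration amplitude, the sensitivity factor follows from its ratio to the QTF output voltage. The readings below are hypothetical, and the simple ratio is this note's interpretation of the abstract, not a formula taken from the paper.

    # Hypothetical readings; alpha_TF taken as the ratio of z-servo travel to QTF output amplitude.
    delta_z_nm = 12.0    # z-servo travel during the frequency sweep (nm), hypothetical
    v_tf_mv = 3.0        # QTF output amplitude at the transimpedance amplifier (mV), hypothetical
    alpha_tf = delta_z_nm / v_tf_mv
    print(f"sensitivity factor alpha_TF ≈ {alpha_tf:.2f} nm/mV")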
Study of Parallel Image Processing with the Implementation of vHGW Algorithm using CUDA on NVIDIA's GPU Framework
This paper provides an effective study of the implementation of parallel image processing techniques using CUDA on the NVIDIA GPU framework. It also discusses the major requirements for parallelism in medical image processing techniques. Another important aspect of this paper is the development of the vHGW (van Herk/Gil-Werman morphology) algorithm for erosion and dilation with structuring elements of arbitrary length and angle, implemented in parallel on NVIDIA's GeForce GTX 860M GPU. The main motivation for implementing morphological operations is their importance in extracting the components of an image, which is beneficial in describing the shape of a region. The experiments were implemented on the CUDA 5.0 architecture with NVIDIA's GeForce GTX 860M GPU and achieved significant improvements in execution time.
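For reference, a CPU sketch of the van Herk/Gil-Werman running-min idea: erosion (and, by duality, dilation) with a 1-D flat structuring element in time independent of the element length. The CUDA parallelization and arbitrary-angle structuring elements that are the paper's focus are not reproduced here.

    # CPU reference sketch of vHGW: O(n) grayscale erosion/dilation with a flat 1-D element.
    import numpy as np

    def erode_1d(f, k):
        """Grayscale erosion of 1-D signal f with a flat structuring element of length k (left-anchored)."""
        f = np.asarray(f, dtype=float)
        n = len(f)
        pad = (-n) % k
        fp = np.concatenate([f, np.full(pad, np.inf)])        # pad so blocks of size k tile the signal
        blocks = fp.reshape(-1, k)
        g = np.minimum.accumulate(blocks, axis=1).ravel()      # forward running min within each block
        h = np.minimum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # backward running min
        return np.minimum(h[:n - k + 1], g[k - 1:n])           # combine the two half-windows

    def dilate_1d(f, k):
        """Grayscale dilation via the min/max duality."""
        return -erode_1d(-np.asarray(f, dtype=float), k)

    sig = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5], dtype=float)
    print(erode_1d(sig, 3))   # minima of each length-3 window: [1. 1. 1. 1. 2. 2. 2. 3. 3.]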
A probabilistic model for food image recognition in restaurants
A large number of food photos are taken in restaurants for diverse reasons. This dish recognition problem is very challenging, due to different cuisines, cooking styles and the intrinsic difficulty of modeling food from its visual appearance. Contextual knowledge is crucial to improve recognition in such a scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and the geolocation of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then we reformulate the problem using a probabilistic model connecting dishes, restaurants and geolocations. We apply that model to three different tasks: dish recognition, restaurant recognition and geolocation refinement. Experiments on a dataset including 187 restaurants and 701 dishes show that combining multiple sources of evidence (visual, geolocation, and external knowledge) can boost the performance in all tasks.
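A sketch of one plausible reading of such a probabilistic model: the posterior over dishes combines a visual likelihood with a prior obtained by summing restaurant menus weighted by a geolocation-based prior over restaurants. The factorization, the Gaussian distance decay, and all numbers are illustrative assumptions, not the paper's model.

    # p(dish | image, location) ∝ p(image | dish) * Σ_r p(dish | menu_r) * p(r | location)
    import numpy as np

    dishes = ["paella", "ramen", "pizza"]
    restaurants = {
        "CasaMar":  {"menu": {"paella": 0.7, "pizza": 0.3}, "loc": (41.38, 2.17)},
        "Noodleya": {"menu": {"ramen": 1.0},                "loc": (41.39, 2.18)},
    }

    def p_restaurant_given_location(photo_loc, sigma=0.01):
        """Gaussian-decay prior over restaurants based on distance to the photo's geolocation."""
        w = {name: np.exp(-np.sum((np.array(photo_loc) - np.array(r["loc"])) ** 2) / (2 * sigma ** 2))
             for name, r in restaurants.items()}
        z = sum(w.values())
        return {name: v / z for name, v in w.items()}

    def posterior(visual_scores, photo_loc):
        """Combine visual likelihoods with the menu/geolocation prior over dishes."""
        p_r = p_restaurant_given_location(photo_loc)
        prior = {d: sum(r["menu"].get(d, 0.0) * p_r[name] for name, r in restaurants.items())
                 for d in dishes}
        unnorm = {d: visual_scores[d] * prior[d] for d in dishes}
        z = sum(unnorm.values()) or 1.0
        return {d: v / z for d, v in unnorm.items()}

    visual = {"paella": 0.4, "ramen": 0.35, "pizza": 0.25}   # placeholder CNN softmax scores
    print(posterior(visual, photo_loc=(41.381, 2.171)))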
Self-organizing neural integration of pose-motion features for human action recognition
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions.
Detection of GAN-Generated Fake Images over Social Networks
The diffusion of fake images and videos on social networks is a fast-growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears to be one of the most dangerous, as it allows one to modify the context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions and in the presence of compression, routinely performed upon uploading to social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies of up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing high accuracy, up to 89%, on compressed data.
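A sketch of the kind of compression-robustness check described above: each test image is re-encoded as JPEG, as a social-network upload would do, and a detector's accuracy is compared before and after. The detector here is a stand-in callable returning a fake-probability; the quality factor and toy data are assumptions.

    # Compare detector accuracy on original vs JPEG-recompressed images.
    import io
    import numpy as np
    from PIL import Image

    def jpeg_recompress(img: Image.Image, quality=75) -> Image.Image:
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).copy()

    def accuracy(detector, images, labels, threshold=0.5):
        preds = [detector(im) >= threshold for im in images]
        return float(np.mean([p == bool(y) for p, y in zip(preds, labels)]))

    def evaluate(detector, images, labels, quality=75):
        acc_raw = accuracy(detector, images, labels)
        acc_jpeg = accuracy(detector, [jpeg_recompress(im, quality) for im in images], labels)
        return acc_raw, acc_jpeg

    # Toy usage: random images and a dummy detector, just to show the evaluation loop.
    rng = np.random.default_rng(0)
    imgs = [Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)) for _ in range(8)]
    labels = [0, 1] * 4
    dummy_detector = lambda im: float(np.asarray(im, dtype=float).std() / 128.0)
    print(evaluate(dummy_detector, imgs, labels))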
A Case for A Collaborative Query Management System
Over the past 40 years, database management systems (DBMSs) have evolved to provide a sophisticated variety of data management capabilities. At the same time, tools for managing queries over the data have remained relatively primitive. One reason for this is that queries are typically issued through applications. They are thus debugged once and re-used repeatedly. This mode of interaction, however, is changing. As scientists (and others) store and share increasingly large volumes of data in data centers, they need the ability to analyze the data by issuing exploratory queries. In this paper, we argue that, in these new settings, data management systems must provide powerful query management capabilities, from query browsing to automatic query recommendations. We first discuss the requirements for a collaborative query management system. We outline an early system architecture and discuss the many research challenges associated with building such an engine.
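As an illustration of one capability argued for here, the sketch below recommends related queries from a shared log using Jaccard similarity over the identifiers each query references. The naive tokenization and the example log are assumptions, not the authors' system design; a real system would parse SQL properly.

    # Recommend related queries from a shared log via Jaccard similarity of referenced identifiers.
    import re

    query_log = [
        "SELECT obj_id, ra, dec FROM photoobj WHERE ra BETWEEN 10 AND 20",
        "SELECT ra, dec, z FROM specobj WHERE z > 0.1",
        "SELECT p.obj_id, s.z FROM photoobj p JOIN specobj s ON p.obj_id = s.obj_id",
    ]

    def features(sql: str) -> set:
        """Crude token-based feature set: identifiers appearing in the query, minus SQL keywords."""
        return set(re.findall(r"[a-z_]+", sql.lower())) - {"select", "from", "where", "join", "on", "and", "between"}

    def recommend(new_query: str, log, k=2):
        f_new = features(new_query)
        scored = []
        for q in log:
            f = features(q)
            jaccard = len(f_new & f) / len(f_new | f) if (f_new | f) else 0.0
            scored.append((jaccard, q))
        return sorted(scored, reverse=True)[:k]

    for score, q in recommend("SELECT obj_id FROM photoobj WHERE dec < 0", query_log):
        print(f"{score:.2f}  {q}")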
Databases of Expressive Speech
This paper discusses the construction of speech databases for research into speech information processing and describes a problem illustrated by the case of emotional speech synthesis. It introduces a project for the processing of expressive speech, and describes the data collection techniques and the subsequent analysis of the supra-linguistic and emotional features signalled in the speech. It presents annotation guidelines for distinguishing speaking-style differences, and argues that the focus of analysis for expressive speech processing applications should be on speaker relationships (defined herein), rather than on emotions.