title | abstract |
---|---|
A Miniaturized Hilbert Inverted-F Antenna for Wireless Sensor Network Applications | A miniaturized inverted-F antenna (IFA) is proposed for wireless sensor network applications in the 2.45 GHz band. By employing Hilbert geometry, an overall size reduction of 77% was achieved compared with a conventional rectangular patch antenna. The proposed antenna can be easily integrated into a miniaturized wireless sensor network (WSN), and the design rules presented in this paper allow such antennas to be simulated rapidly. An experimental prototype of the miniature antenna was fabricated on a 1.6-mm-thick FR4 substrate. The measured impedance bandwidth for a return loss better than 10 dB is 220 MHz (2.32-2.54 GHz), corresponding to a fractional bandwidth of 9.1%, and the peak gain is 1.4 dBi. The measurement results indicate that the antenna performs well. |
Accuracy and precision of a novel non-invasive core thermometer. | BACKGROUND
Accurate measurement of core temperature is a standard component of perioperative and intensive care patient management. However, core temperature measurements are difficult to obtain in awake patients. A new non-invasive thermometer has been developed, combining two sensors separated by a known thermal resistance ('double-sensor' thermometer). We thus evaluated the accuracy of the double-sensor thermometer compared with a distal oesophageal thermometer to determine if the double-sensor thermometer is a suitable substitute.
METHODS
In perioperative and intensive care patient populations (n=68 total), double-sensor measurements were compared with measurements from a distal oesophageal thermometer using Bland-Altman analysis and Lin's concordance correlation coefficient (CCC).
RESULTS
Overall, 1287 measurement pairs were obtained at 5 min intervals. Ninety-eight per cent of all double-sensor values were within +/-0.5 degrees C of oesophageal temperature. The mean bias between the methods was -0.08 degrees C; the limits of agreement were -0.66 degrees C to 0.50 degrees C. Sensitivity and specificity for detection of fever were 0.86 and 0.97, respectively. Sensitivity and specificity for detection of hypothermia were 0.77 and 0.93, respectively. Lin's CCC was 0.93.
CONCLUSIONS
The new double-sensor thermometer is sufficiently accurate to be considered an alternative to distal oesophageal core temperature measurement, and may be particularly useful in patients undergoing regional anaesthesia. |
Validation of the Spanish Version of the COPD-Q Questionnaire on COPD Knowledge. | RATIONALE
Although recognition of the importance of educating chronic obstructive pulmonary disease (COPD) patients has grown in recent years, their understanding of this disease is not being measured due to a lack of specific instruments. The aim of this study was to validate the COPD-Q questionnaire, a 13-item instrument for determining COPD knowledge.
METHODS
The COPD-Q was translated and backtranslated, and subsequently submitted to logic and content validation by a group of COPD experts and 8 COPD patients. Reliability was studied in an independent group of 59 patients with severe COPD seen in the pulmonology ward or clinics of 6 hospitals in Spain (Andalusia, Baleares, Castilla-La Mancha, Galicia and Madrid). This sample was also used for other internal and external validations.
RESULTS
The mean age of the group was approximately 70 years and their health awareness was low-to-medium. The mean number of correct answers was 8.3 (standard deviation: 1.9), median 8, range 3-13. Floor and ceiling effects were 0% and 1.5%, respectively. Internal consistency of the questionnaire was good (Cronbach's alpha=0.85) and reliability was also high, with a kappa coefficient >0.6 for all items and an intraclass correlation coefficient of 0.84 for the total score.
CONCLUSION
The 13-item COPD-Q is a valid, applicable and reliable instrument for determining patients' knowledge of COPD. |
SoK: Cryptographically Protected Database Search | Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no best protected search system or set of techniques. Designing such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) an identification of the important primitive operations across database paradigms; we find that a small number of base operations can be used and combined to support a large number of database paradigms; 2) an evaluation of the current state of protected search systems in implementing these base operations; this evaluation describes the main approaches and tradeoffs for each base operation and puts protected search in the context of unprotected search, identifying key gaps in functionality; 3) an analysis of attacks against protected search for different base queries; and 4) a roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search. |
Aerosol pentamidine-induced bronchoconstriction. Predictive factors and preventive therapy. | OBJECTIVE
To describe the frequency of aerosol pentamidine-induced bronchoconstriction, its relationship to non-specific airway responsiveness, and its response to preventive therapy using salbutamol, ipratropium bromide, or sodium cromoglycate.
METHODS
Consecutive HIV-infected individuals starting prophylactic aerosol pentamidine (AP) were eligible if they had not been previously treated with this agent. Simple spirometry was performed before and 10 min after a single 60-mg dose given through an ultrasonic nebulizer. Methacholine challenge was performed in all subjects 24 h to four days after the initial AP dose. Subjects with a decrease in FEV1 (delta FEV1) of 10 percent or more after the initial AP dose were restudied on three separate occasions (greater than 24 hours apart) after premedication with two puffs of salbutamol (200 micrograms), ipratropium bromide (40 micrograms), or sodium cromoglycate (2 mg), in random order.
RESULTS
Fifty-three subjects were studied. The median delta FEV1 after a single dose of AP was -7.0 percent (range: -47 percent to 1.8 percent). The delta FEV1 following AP was only partially predicted by the degree of nonspecific bronchial responsiveness as measured by a standard methacholine challenge. Age, current smoking, history of asthma, baseline FEV1, and a prior episode of PCP all failed to predict the delta FEV1 following AP. Eighteen subjects (34 percent) had a decrease in FEV1 of 10 percent or more (median: -17.0 percent). In these subjects, after premedication with salbutamol, ipratropium bromide, and sodium cromoglycate, the median delta FEV1 was 1.0, 0.8, and -9.6 percent, respectively.
CONCLUSION
Aerosol pentamidine produced a decrease in FEV1 of 10 percent or more in 34 percent of subjects. This was not accurately predicted by the methacholine response. The bronchoconstriction induced by AP was effectively prevented by either salbutamol or ipratropium, whereas cromoglycate was only partially effective. |
Relevance Theory and the Saying/Implicating Distinction | A distinction between saying and implicating has held a central place in pragmatics since Grice, with ‘what is said’ usually equated with the (context-relative) semantic content of an utterance. In relevance theory, a distinction is made between two kinds of communicated assumptions, explicatures and implicatures, with explicatures defined as pragmatic developments of encoded linguistic meaning. It is argued here that, given a context-free semantics for linguistic expression types, together with the explicature/implicature distinction, there is no role for any minimally propositional notion of ‘what is said’. |
Proteus: Exploiting Numerical Precision Variability in Deep Neural Networks | This work exploits the tolerance of Deep Neural Networks (DNNs) to reduced-precision numerical representations and, specifically, their recently demonstrated ability to tolerate representations of different precision per layer while maintaining accuracy. This flexibility enables improvements over conventional DNN implementations that use a single, uniform representation. This work proposes Proteus, which reduces the data traffic and storage footprint needed by DNNs, resulting in reduced energy and improved area efficiency for DNN implementations. Proteus uses a different representation per layer for both the data (neurons) and the weights (synapses) processed by DNNs. Proteus is a layered extension over existing DNN implementations that converts between the numerical representation used by the DNN execution engines and the shorter, layer-specific fixed-point representation used when reading and writing data values to memory, be it on-chip buffers or off-chip memory. Proteus uses a novel memory layout for DNN data, enabling a simple, low-cost and low-energy conversion unit.
We evaluate Proteus as an extension to a state-of-the-art accelerator [7] which uses a uniform 16-bit fixed-point representation. On five popular DNNs, Proteus reduces data traffic among layers by 43% on average while maintaining accuracy within 1% even when compared to a single-precision floating-point implementation. As a result, Proteus improves energy by 15% with no performance loss. Proteus also reduces the data footprint by at least 38% and hence the amount of on-chip buffering needed, resulting in an implementation that requires 20% less area overall. These area savings can be used to improve cost by building smaller chips, to process larger DNNs for the same on-chip area, or to incorporate three additional execution engines, increasing peak performance bandwidth by 18%. |
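The core mechanism, converting between a wide compute representation and a shorter, layer-specific fixed-point storage format, can be illustrated with a small sketch. The helper names and the 8-bit (2 integer, 6 fractional) split below are illustrative assumptions, not the paper's actual conversion-unit design.

```python
import numpy as np

def to_fixed_point(values, int_bits, frac_bits):
    """Quantize floats to a signed fixed-point grid with the given bit split
    (a hypothetical per-layer profile)."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (int_bits + frac_bits - 1))
    hi = 2 ** (int_bits + frac_bits - 1) - 1
    return np.clip(np.round(values * scale), lo, hi).astype(np.int32)

def from_fixed_point(q, frac_bits):
    """Convert stored fixed-point integers back to floats for the compute engine."""
    return q.astype(np.float32) / (2.0 ** frac_bits)

# Example: a layer whose activations tolerate an 8-bit (2.6) representation.
acts = np.random.randn(4, 4).astype(np.float32)
packed = to_fixed_point(acts, int_bits=2, frac_bits=6)
restored = from_fixed_point(packed, frac_bits=6)
print(np.max(np.abs(acts - restored)))  # rounding error, plus clipping for out-of-range values
```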
"Four Civilizations" and the Evolution of Post-Mao Chinese Socialist Ideology | (ProQuest: ... denotes non-USASCII text omitted.) In the contemporary Chinese streetscape, "civilization" is everywhere. The Chinese rendering of civilization, wenming (...),1 has become ubiquitous on poster and billboard advertising, street-side banners and signage, building-mounted plaques and large-character slogans in rural and urban China.2 The volume of literature in China devoted to socialist civilization theory is immense. Chinese Communist Party (CCP) and academic journals, educational guides, ethics and citizenship handbooks, newspaper articles and websites-particularly since the mid-1990shave carried stories and commentaries covering myriad aspects of the CCP's formulation of what it means to be "civilized". A formidable Party-state apparatus in the form of Spiritual Civilization Offices and Standing Committees implements the promotion of the civilization narrative at provincial, district and work-unit level. While "civilizing" discourses as social and governing phenomena in China are receiving increasing scholarly attention, the discursive function of civilization within CCP ideology has not been given the attention that it warrants. Morality campaigns in the post-Mao period have been studied largely as theatres of factional struggle within the CCP leadership and as manifestations of a politicized control balance between social order and economic liberalization. While these campaigns have satisfied various political agendas, they have also been seen as attempts to fill the perceived inadequacies of China's moral culture in dealing with the unintended realities of contemporary market reform. For much of the post-Mao era, the promotion of a positive relationship between China's economic development and moral health has been effected through the binary framework of the "two civilizations"-material civilization and spiritual civilization.3 From the early 1980s, claiming a genealogy in classical Marxism, the idea of the two civilizations provided an ideologically palatable framework through which the CCP articulated the values necessary to achieve "balanced development" (pinghengde fazhan ...).4 While continued economic growth highlighted gains in "material civilization", regular morality drives promoting "socialist spiritual civilization" ostensibly attempted to instill within the Chinese citizenry a modern socialist morality robust enough to handle the new challenges of the socialist market economy. Such slogans as "to grasp with two hands"5 reinforced the idea that complimentary progress in both the economic and moral spheres was necessary if China was to achieve its developmental ends without losing its soul in the process. The two civilizations became necessary halves of a discursive coin, a unifying narrative representing the management of a multi-layered struggle between economic and moral progress, materialism and ideology, reform and conservatism, globalization and nationalism, cultural dissolution and the positive repackaging of China's cultural traditions. Importantly, the two civilizations proved to be a dynamic framework, allowing the promotion of the material and the spiritual to alternate, depending on which was considered in vogue at the time. Over most of the post-Mao period this dynamic presented itself as an evolution from an initial emphasis by Deng Xiaoping on material civilization to a subsequent emphasis on the spiritual under Jiang Zemin. 
Furthermore, the meaning and use of spiritual civilization itself was changeable, making it a durable, yet at times contested, concept. Although it represented an active departure from politically configured notions of progress such as class struggle, Deng's spiritual civilization was defined in socialist terminology, while under Jiang Zemin it found expression in the language of cultural nationalism. At the CCP's Sixteenth National Congress in November 2002, however, the twenty-four-year evolution of the two-civilization duality took an altogether unexpected turn with Jiang Zemin's introduction of a third civilization: "political civilization". … |
A review of tricaine methanesulfonate for anesthesia of fish | Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery and experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS, including its legal uses and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration, describe changes in fish behavior during progressively deeper anesthesia, and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved, and it is an important tool for the intracoelomic implantation of electronic tags in fish. |
Filtering of Azimuth Ambiguity in Stripmap Synthetic Aperture Radar Images | Due to the specific characteristics of the SAR system, peculiar artifacts can appear on SAR images. In particular, the finite pulse repetition frequency (PRF) and the nonideal antenna pattern give rise to azimuth ambiguity, with the possible presence of “ghosts” on the image. These are due to the replicas of strong targets located outside of the antenna main beam, superposed onto low-intensity areas of the imaged scene. In this paper, we propose a method for the filtering of azimuth ambiguities on stripmap SAR images, which we name the “asymmetric mapping and selective filtering” (AM&SF) method. Our framework is based on the theory of selective filtering and on a two-step procedure. In the first step, two asymmetric filters are used to suppress ambiguities due to each sidelobe of the antenna pattern, and the ratios between the original and filtered images are used to produce two maps of the ambiguity-affected areas (one for each sidelobe). In the second step, these maps are used to produce a final image in which only the areas affected by the ambiguities are replaced by their filtered versions (using the appropriate of the two filters). The proposed method can be employed in situations in which similar approaches fail, and it has a smaller computational burden. The framework is positively tested on TerraSAR-X and COSMO/SkyMed SAR images of different marine scenes. |
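A highly simplified sketch of the two-step AM&SF idea (asymmetric filtering, ratio-based ambiguity maps, selective replacement) is given below. The actual method operates on the azimuth spectrum with filters derived from the antenna pattern; the spatial-domain kernels and the ratio threshold here are purely illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def amsf_like_filter(img, kernel_left, kernel_right, ratio_threshold=2.0):
    # Step 1: apply the two asymmetric filters and build one ambiguity map each,
    # flagging pixels whose intensity drops sharply after filtering.
    filt_l = convolve(img, kernel_left, mode="nearest")
    filt_r = convolve(img, kernel_right, mode="nearest")
    eps = 1e-12
    map_l = img / (filt_l + eps) > ratio_threshold
    map_r = img / (filt_r + eps) > ratio_threshold
    # Step 2: keep the original image everywhere except where a map flags an
    # ambiguity; there, substitute the corresponding filtered value.
    out = img.copy()
    out[map_l] = filt_l[map_l]
    out[map_r] = filt_r[map_r]
    return out
```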
Face Alignment Across Large Poses: A 3D Solution | Face alignment, which fits a face model to an image and extracts the semantic meanings of facial pixels, has been an important topic in the computer vision community. However, most algorithms are designed for faces in small to medium poses (below 45°), lacking the ability to align faces in large poses up to 90°. The challenges are three-fold: Firstly, the commonly used landmark-based face model assumes that all the landmarks are visible and is therefore not suitable for profile views. Secondly, the face appearance varies more dramatically across large poses, ranging from frontal view to profile view. Thirdly, labelling landmarks in large poses is extremely challenging since the invisible landmarks have to be guessed. In this paper, we propose a solution to these three problems in a new alignment framework, called 3D Dense Face Alignment (3DDFA), in which a dense 3D face model is fitted to the image via a convolutional neural network (CNN). We also propose a method to synthesize large-scale training samples in profile views to solve the third problem of data labelling. Experiments on the challenging AFLW database show that our approach achieves significant improvements over state-of-the-art methods. |
Event-centric Context Modeling : The Case of Story Comprehension and Story Generation | In this opinion piece, we argue that there is a need for alternative design directions to complement existing AI efforts in narrative and character generation and algorithm development. To make our argument, we a) outline the predominant roles and goals of AI research in storytelling; b) present existing discourse on the benefits and harms of narratives; and c) highlight the pain points in character creation revealed by semi-structured interviews we conducted with 14 individuals deeply involved in some form of character creation. We conclude by proffering several specific design avenues that we believe can seed fruitful research collaborations. In our vision, AI collaborates with humans during creative processes and narrative generation, helps amplify voices and perspectives that are currently marginalized or misrepresented, and engenders experiences of narrative that support spectatorship and listening roles. |
Biomedical Magnesium Alloys: A Review of Material Properties, Surface Modifications and Potential as a Biodegradable Orthopaedic Implant | Magnesium and magnesium-based alloys are lightweight metallic materials that are extremely biocompatible and have similar mechanical properties to natural bone. These materials have the potential to function as an osteoconductive and biodegradable substitute in load-bearing applications in the field of hard tissue engineering. However, the effects of corrosion and degradation in the physiological environment of the body have prevented their widespread application to date. The aim of this review is to examine the properties, chemical stability, degradation in situ and methods of improving the corrosion resistance of magnesium and its alloys for potential application in the orthopaedic field. To be an effective implant, the surface and sub-surface properties of the material need to be carefully selected so that the degradation kinetics of the implant can be efficiently controlled. Several surface modification techniques are presented and their effectiveness in reducing the corrosion rate and methods of controlling the degradation period are discussed. Ideally, balancing the gradual loss of material and mechanical strength during degradation with the increasing strength and stability of the newly forming bone tissue is the ultimate goal. If this goal can be achieved, then orthopaedic implants manufactured from magnesium-based alloys have the potential to deliver successful clinical outcomes without the need for revision surgery. |
Constraint-based approach for automatic hinting of digital typefaces | The rasterization process of characters from digital outline fonts to bitmaps on displays must include additional information in the form of hints, beside the shape of characters, in order to produce high-quality bitmaps. Hints describe constraints on sizes and shapes inside characters and across the font that should be preserved during rasterization. We describe a novel, fast and fully automatic method for adding those hints to characters. The method is based on identifying hinting situations inside characters. It includes gathering global font information and linking it to characters, defining a set of constraints, sorting them, and converting them to hints in any known hinting technology (PostScript, TrueType or other). Our scheme is general enough to be applied to any language and to complex scripts such as Chinese, Japanese and Korean. Although still inferior to expert manual hinting, our method produces high-quality bitmaps that approach this goal. The method can also be used as a solid base for further hinting refinements done manually. |
SMEs Co-opetition and Knowledge Sharing: The IS Role | Co-opetition, simultaneous co-operation and competition, is a recent phenomenon. Co-opetition entails sharing knowledge that may be a key source of competitive advantage. Yet, the knowledge gained by cooperation may also be used for competition. There is little investigation of how this problem may be modeled and, hence, managed. A game-theoretic framework for analysing inter-organisational knowledge sharing under co-opetition and guidelines for the management of explicit knowledge, predicated on co-ordination and control theory has been proposed, but remains untested. This research empirically investigates these issues in the context of small and medium-sized enterprises (SMEs). SMEs provide an interesting setting as they are knowledge generators, but are poor at knowledge exploitation. The paper uses data from UK SMEs to investigate co-opetition, management of knowledge sharing and the role of IS. |
Bias and causal associations in observational research | Readers of medical literature need to consider two types of validity, internal and external. Internal validity means that the study measured what it set out to; external validity is the ability to generalise from the study to the reader's patients. With respect to internal validity, selection bias, information bias, and confounding are present to some degree in all observational research. Selection bias stems from an absence of comparability between groups being studied. Information bias results from incorrect determination of exposure, outcome, or both. The effect of information bias depends on its type. If information is gathered differently for one group than for another, bias results. By contrast, non-differential misclassification tends to obscure real differences. Confounding is a mixing or blurring of effects: a researcher attempts to relate an exposure to an outcome but actually measures the effect of a third factor (the confounding variable). Confounding can be controlled in several ways: restriction, matching, stratification, and more sophisticated multivariate techniques. If a reader cannot explain away study results on the basis of selection, information, or confounding bias, then chance might be another explanation. Chance should be examined last, however, since these biases can account for highly significant, though bogus results. Differentiation between spurious, indirect, and causal associations can be difficult. Criteria such as temporal sequence, strength and consistency of an association, and evidence of a dose-response effect lend support to a causal link. |
Flexible Navigation: Finite state machine-based integrated navigation and control for ROS enabled robots | This paper describes the Flexible Navigation system that extends the ROS Navigation stack and compatible libraries to separate computation from decision making, and integrates the system with FlexBE — the Flexible Behavior Engine, which provides intuitive supervision with adjustable autonomy. Although the ROS Navigation plugin model offers some customization, many decisions are internal to move_base. In contrast, the Flexible Navigation system separates global planning from local planning and control, and uses a hierarchical finite state machine to coordinate behaviors. The Flexible Navigation system includes Python-based state implementations and ROS nodes derived from the move_base plugin model to provide compatibility with existing libraries as well as future extensibility. The paper concludes with complete system demonstrations in both simulation and hardware using the iRobot Create and Kobuki-based Turtlebot running under ROS Kinetic. The system supports multiple independent robots. |
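As a rough illustration of the coordination idea (not the actual Flexible Navigation or FlexBE code), a hierarchical finite state machine that keeps global planning, local path following and recovery as separate components might look like the sketch below; all state and helper names are assumptions.

```python
from enum import Enum, auto

class NavState(Enum):
    PLAN_GLOBAL = auto()
    FOLLOW_PATH = auto()
    RECOVER = auto()
    DONE = auto()

def navigate(goal, planner, controller, recovery, max_recoveries=3):
    """Toy coordinator: global planning, local control and recovery are
    independent components sequenced by an explicit state machine."""
    state, path, recoveries = NavState.PLAN_GLOBAL, None, 0
    while state is not NavState.DONE:
        if state is NavState.PLAN_GLOBAL:
            path = planner.plan(goal)                  # global planner
            state = NavState.FOLLOW_PATH if path else NavState.RECOVER
        elif state is NavState.FOLLOW_PATH:
            ok = controller.follow(path)               # local planner/controller
            state = NavState.DONE if ok else NavState.RECOVER
        elif state is NavState.RECOVER:
            if recoveries >= max_recoveries:
                return False
            recovery.run()                             # e.g. clear costmaps, rotate in place
            recoveries += 1
            state = NavState.PLAN_GLOBAL
    return True
```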
Building Better Theory by Bridging the Quantitative-Qualitative Divide | Qualitative methods for data collection and analysis are not mystical, but they are powerful, particularly when used to build new or refine existing theories. This article provides an introduction to qualitative methods and an overview of tactics for ensuring rigor in qualitative research, useful for the novice researcher as well as more experienced researchers interested in expanding their methodological repertoire or seeking guidance on how to evaluate qualitative research. We focus our discussion on the qualitative analytical technique of grounded theory building, and suggest that organizational research has much to gain by coupling the use of qualitative and quantitative research methods. |
Never-Ending Multiword Expressions Learning | This paper introduces NEMWEL, a system that performs Never-Ending MultiWord Expressions Learning. Instead of using a static corpus and classifier, NEMWEL applies supervised learning on automatically crawled news texts. Moreover, it uses its own results to periodically retrain the classifier, bootstrapping on its own results. In addition to a detailed description of the system’s architecture and its modules, we report the results of a manual evaluation. It shows that NEMWEL is capable of learning new expressions over time with improved precision. |
Future Trends in Marine Robotics [TC Spotlight] | The IEEE Robotics and Automation Society (RAS) Marine Robotics Technical Committee (MRTC) was first established in 2008 following the dismissal of the Underwater Robotics Technical Committee in spring 2008. The goal of the MRTC is to foster research on robots and intelligent systems that extend the human capabilities in marine environments and to promote maritime robotic applications important to science, industry, and defense. The TC organizes conferences, workshops, and special issues that bring marine robotics research to the forefront of the broader robotics community. The TC also introduces its members to the latest development of marine robotics through Web sites and online social media. |
[HPV prophylactic vaccine coverage and factors influencing its uptake among university and high school students in the Marseilles area]. | OBJECTIVES
To assess the coverage of HPV vaccine among young women from Marseilles' area and factors influencing the probability of this vaccination.
MATERIALS AND METHODS
An anonymous survey was conducted among 2124 high school and university students from Marseilles' area, France from December 2011 to May 2012.
RESULTS
Mean age of participants was 20.4 years (±SD: 3.3). Only 41.6% of participants reported being vaccinated against HPV, of whom 768 (93.3%) had completed the 3-injection scheme. Among non-vaccinated respondents, 33.6% acknowledged they would accept a catch-up vaccination. Factors influencing the probability of being vaccinated were young age (AOR: 0.728; 95% CI: 0.681-0.779; P<0.001), socioeconomic and/or education level of parents (AOR: 1.324; 95% CI: 1.006-1.742; P=0.045), information about vaccination (AOR: 24.279; 95% CI: 5.417-108.82; P<0.001) and having a general practitioner (GP) favourable to vaccination (AOR: 68.776; 95% CI: 34.511-137.061; P<0.001). Factors influencing the probability of accepting a catch-up vaccination were age (AOR: 1.059; 95% CI: 1.001-1.120; P=0.046), socioeconomic and/or education level of parents (AOR: 1.637; 95% CI: 1.198-2.237; P=0.002) and having a GP favourable to vaccination (AOR: 4.381; 95% CI: 2.978-6.445; P<0.001). Only 35.5% of respondents were aware that screening remains necessary following HPV vaccination.
CONCLUSION
The coverage of HPV vaccine among young women from Marseilles' area is insufficient. Factors influencing the probability of being vaccinated against HPV are age, socioeconomic and/or education level of parents and information regarding vaccination. GP plays a major role in the acceptance of HPV vaccine. |
User interest and social influence based emotion prediction for individuals | Emotions play a significant role in daily life, making emotion prediction important. To date, most state-of-the-art methods predict emotion for the masses, which is invalid for individuals. In this paper, we propose a novel emotion prediction method for individuals based on user interest and social influence. To balance user interest and social influence, we further propose a simple yet efficient weight learning method in which the weights are obtained from users' behaviors. We perform experiments on a real social media network, with 4,257 users and 2,152,037 microblogs. The experimental results demonstrate that our method outperforms traditional methods with significant performance gains. |
Myocardial damage in dogs affected by heartworm disease (Dirofilaria immitis): immunohistochemical study of cardiac myoglobin and troponin I in naturally infected dogs. | It has recently been reported that dogs affected by canine heartworm disease (Dirofilaria immitis) can show an increase in plasma levels of myoglobin and cardiac troponin I, two markers of muscle/myocardial injury. In order to determine if this increase is due to myocardial damage, the right ventricle of 24 naturally infected dogs was examined by routine histology and immunohistochemistry with anti-myoglobin and anti-cardiac troponin I antibodies. Microscopic lesions included necrosis and myocyte vacuolization, and were associated with loss of staining for one or both proteins. Results confirm that increased levels of myoglobin and cardiac troponin I are indicative of myocardial damage in dogs affected by heartworm disease. |
Statistical Inverse Ray Tracing for Image-Based 3D Modeling | This paper proposes a new formulation and solution to image-based 3D modeling (aka “multi-view stereo”) based on generative statistical modeling and inference. The proposed new approach, named statistical inverse ray tracing, models and estimates the occlusion relationship accurately by optimizing a physically sound image generation model based on volumetric ray tracing. Combined with geometric priors, this is cast as a Bayesian formulation known as a Markov random field (MRF) model. This MRF model differs from typical MRFs used in image analysis in that the ray clique, which models the ray-tracing process, consists of thousands of random variables instead of two to dozens. To handle the computational challenges associated with such large clique sizes, an algorithm with linear computational complexity is developed by exploiting, via dynamic programming, the recursive chain structure of the ray clique. We further demonstrate the benefit of exact modeling and accurate estimation of the occlusion relationship by evaluating the proposed algorithm on several challenging data sets. |
A method to assess hand motor blocks in Parkinson's disease with digitizing tablet. | The non-volitional sudden discontinuation of motor activity, called motor block (MB) or freezing, is most commonly associated with Parkinson's disease (PD). MB extends beyond the classical manifestations of PD: akinesia, bradykinesia, rigidity, tremor, and postural instability. MB has been observed and quantified as a distinct feature of PD in internally cued repetitive movements such as gait, speech, handwriting, and manual tapping tasks. We present a simple measurement system for the objective evaluation of MB during point-to-point hand movements in patients with PD. Hand trajectories were evaluated in eight PD patients based on a score obtained from a digitizing tablet (DT). Fifty trials per day were recorded on seven consecutive working days. Subjects were instructed to consciously prepare and self-initiate movements between arbitrarily fixed starting and target points without lifting a wireless magnetic mouse. An MB was identified as a time interval during movement with no change in coordinates. We analyzed three kinematic parameters: the duration, start, and number of MBs. If MBs were documented, the DT score was 1; if not, 0. Results were then compared with the ratings of the question in the motor section of the Unified Parkinson's Disease Rating Scale (UPDRS) related to freezing of the hands. For all patients, the DT score was in agreement with the UPDRS. The present results indicate that the DT is useful for assessing MBs during volitional planar hand movements. This low-cost instrument may be included in a clinical test battery because of its short testing time and trouble-free patient preparation. |
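The stated criterion, a motor block being any interval during the movement with no change in tablet coordinates, is straightforward to operationalize. The sketch below assumes a fixed sampling rate and a hypothetical minimum-duration parameter, neither of which is specified in the abstract.

```python
import numpy as np

def detect_motor_blocks(x, y, fs, min_duration=0.2):
    """Return (start_time_s, duration_s) pairs where the pen position does not change.
    x, y: coordinate samples from the tablet; fs: sampling rate in Hz."""
    still = (np.diff(x) == 0) & (np.diff(y) == 0)
    blocks, start = [], None
    for i, s in enumerate(still):
        if s and start is None:
            start = i                       # stillness begins
        elif not s and start is not None:
            dur = (i - start) / fs
            if dur >= min_duration:
                blocks.append((start / fs, dur))
            start = None
    if start is not None:                   # trajectory ends during a block
        dur = (len(still) - start) / fs
        if dur >= min_duration:
            blocks.append((start / fs, dur))
    return blocks

# DT score as described in the study: 1 if any motor block was documented, else 0.
dt_score = lambda blocks: int(len(blocks) > 0)
```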
Fast Approximate Energy Minimization via Graph Cuts | In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second algorithm, which requires the smoothness term to be a metric, generates a labeling such that there is no expansion move that decreases the energy. Moreover, this solution is within a known factor of the global minimum. We experimentally demonstrate the effectiveness of our approach on image restoration, stereo and motion. 1. Energy minimization in early vision: Many early vision problems require estimating some spatially varying quantity (such as intensity or disparity) from noisy measurements. Such quantities tend to be piecewise smooth; they vary smoothly at most points, but change dramatically at object boundaries. Every pixel p ∈ P must be assigned a label in some set L; for motion or stereo, the labels are disparities, while for image restoration they represent intensities. The goal is to find a labeling f that assigns each pixel p ∈ P a label fp ∈ L, where f is both piecewise smooth and consistent with the observed data. These vision problems can be naturally formulated in terms of energy minimization. In this framework, one seeks the labeling f that minimizes the energy E(f) = Esmooth(f) + Edata(f). Here Esmooth measures the extent to which f is not piecewise smooth, while Edata measures the disagreement between f and the observed data. Many different energy functions have been proposed in the literature. The form of Edata is typically
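To make the formulation concrete, a minimal evaluation of E(f) = Edata(f) + Esmooth(f) with a pairwise smoothness term is sketched below; the Potts penalty and 4-connected neighbourhood are illustrative choices, not the only ones the paper admits, and the contribution of the paper is minimizing this energy with α-β swap and α-expansion graph-cut moves rather than merely evaluating it.

```python
import numpy as np

def energy(labels, data_cost, smooth_penalty=1.0):
    """E(f) = Edata(f) + Esmooth(f) on a 4-connected pixel grid.
    labels:    (H, W) integer labeling f
    data_cost: (H, W, L) cost of assigning each of L labels at each pixel"""
    h, w = labels.shape
    e_data = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Potts smoothness: pay a constant penalty wherever neighbouring labels disagree.
    e_smooth = smooth_penalty * (
        (labels[:, 1:] != labels[:, :-1]).sum() +
        (labels[1:, :] != labels[:-1, :]).sum()
    )
    return e_data + e_smooth
```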
Data warehouse design to support customer relationship management analyses | CRM is a strategy that integrates the concepts of Knowledge Management, Data Mining, and Data Warehousing in order to support the organization's decision-making process to retain long-term and profitable relationships with its customers. In this paper, we first present the design implications that CRM poses to data warehousing, and then propose a robust multidimensional starter model that supports CRM analyses. We then present sample CRM queries, test our starter model using those queries and define two measures (% success ratio and CRM suitability ratio) by which CRM models can be evaluated. We finally introduce a preliminary heuristic for designing data warehouses to support CRM analyses. Our study shows that our starter model can be used to analyze various profitability analyses such as customer profitability analysis, market profitability analysis, product profitability analysis, and channel profitability analysis. |
Therapeutic effect of continuous exercise training program on serum creatinine concentration in men with hypertension: a randomized controlled trial. | BACKGROUND
Creatinine (Cr) has been implicated as an independent predictor of hypertension, and exercise has been reported as an adjunct therapy for hypertension. The purpose of the present study was to investigate the effect of a continuous training programme on blood pressure and serum creatinine concentration in black African subjects with hypertension.
METHODS
Three hundred and fifty-seven male patients with mild to moderate (systolic blood pressure [SBP] between 140 and 180 mmHg and diastolic blood pressure [DBP] between 90 and 109 mmHg) essential hypertension were age-matched and randomly grouped into continuous and control groups. The continuous group took part in an 8-week continuous training programme (60-79% HR reserve) of between 45 and 60 minutes, 3 times per week, while the control group remained sedentary. SBP, DBP, VO2max, serum Cr, body mass index (BMI), waist-hip ratio (WHR) and percent (%) body fat were assessed. Analysis of covariance (ANCOVA) and Pearson correlation tests were used in data analysis.
RESULTS
Findings of the study revealed significant decreases in SBP, DBP, Cr, BMI, WHR and % body fat, and a significant increase in VO2max, following the continuous training programme (p<0.05). Serum Cr was significantly and negatively correlated with SBP (-.335), DBP (.194), BMI (.268), WHR (-.258) and % body fat (-.190) at p<0.05.
CONCLUSION
The present study demonstrated a rational basis for the adjunct therapeutic role of moderate-intensity continuous exercise training as a multi-therapy in the down-regulation of blood pressure, serum Cr, body size and body fat in hypertension. |
Key elements to enable millimeter wave communications for 5G wireless systems | Current cellular spectrum in the bands below 3 GHz is experiencing a severe shortage and cannot keep up with the dramatic proliferation of mobile traffic in the near future, requiring the search for innovative solutions to enable the 5G era. mmWave communications, with possible gigabit-per-second data rates, have attracted great attention as a candidate for 5G broadband cellular communication networks. However, a complete characterization of mmWave links for 5G wireless networks still remains elusive, and there are many challenges and research areas that need to be addressed. In this work we discuss several key elements to enable mmWave communications in 5G: · Channel characteristics regarding mmWave signal attenuation due to free-space propagation, atmospheric gases and rain are explained. · The hybrid (digital plus analog) beamforming architecture in mmWave systems is discussed. · The blockage effect in mmWave communications due to penetration loss, and possible approaches to mitigate it, are presented. · The application of mmWave transmission with narrow beams in non-orthogonal device-to-device communication is proposed. · mmWave transmission in the booster cell of heterogeneous anchor-booster networks is considered. · mmWave transmission for small cell backhaul is further discussed. |
Familial Severe Gigantomastia and Reduction with the Free Nipple Graft Vertical Mammoplasty Technique: Report of Two Cases | Gigantomastia, characterized by massive breast enlargement during adolescence or pregnancy, is thought to be caused by an abnormal and excessive end-organ response to a normal hormonal milieu. The amputation technique with the free nipple–areola graft is the mainstay for severe macromastia, but it has been criticized because it results in a flattened, nonaesthetic breast with poor projection. This report presents two sisters with unusual, excessive breast enlargement. The measured distance from the sternal notch to the nipple was 50 cm for the first case and 55 cm for the second case. The free nipple graft transplantation based on the vertical mammoplasty technique was used, and an average of 4,200 g of breast tissue per breast was removed. To increase breast projection, superior dermoglandular flaps were used. The follow-up period was 24 months. The patients had long-lasting, pronounced breast mound projection, and the level of satisfaction for both cases was very high. The ideal geometric structure of the breast is rather conical, and the authors believe that reshaping the breast tissue in a vertical plane using the vertical mammoplasty technique may be more effective in the long term and may provide better projection. |
Glass Ceiling Effect: A Focus on Pakistani Women | The glass ceiling and gender discrimination are the biggest barriers holding back Pakistani women from occupying high positions of prestige in the corporate world. Working women in Pakistan face obstacles moving up the corporate ladder and are often excluded from decision-making. The present research paper looks into the prevailing situation of the glass ceiling effect from Pakistani working women's perspective, along with the barriers involved, and concludes by making recommendations to the relevant stakeholders. |
Analytics-as-a-Service (AaaS) Tool for Unstructured Data Mining | Analytics-as-a-Service (AaaS) has become indispensable because it enables stakeholders to discover knowledge in Big Data. Previously, data stored in data warehouses followed a schema and standardization, which led to efficient data mining. However, the "Big Data" epoch has witnessed the rise of structured, semi-structured, and unstructured data, a trend that motivated enterprises to employ NoSQL data storages to accommodate high-dimensional data. In this paper, we introduce an AaaS tool that aims at accomplishing term and topic extraction and organization from unstructured data sources such as NoSQL databases and textual contents (e.g., websites). The primary contribution of this paper is the detailed justification of the architectural design of our proposed framework. This includes the proposed algorithms (e.g., concurrency search, linear search, etc.) and the performance of macro tasks such as filtering, tagging, and so on. |
Emotional state talk and emotion understanding: a training study with preschool children. | The present study investigates whether training preschool children in the active use of emotional state talk plays a significant role in bringing about greater understanding of emotion terms and improved emotion comprehension. Participants were 100 preschool children (M=52 months; SD=9·9; range: 35-70 months), randomly assigned to experimental or control conditions. They were pre- and post-tested to assess their language comprehension, metacognitive language comprehension and emotion understanding. Analyses of pre-test data did not show any significant differences between experimental and control groups. During the intervention phase, the children were read stories enriched with emotional lexicon. After listening to the stories, children in the experimental group took part in conversational language games designed to stimulate use of the selected emotional terms. In contrast, the control group children did not take part in any special linguistic activities after the story readings. Analyses revealed that the experimental group outperformed the control group in the understanding of inner state language and in the comprehension of emotion. |
GPU Versus FPGA for High Productivity Computing | Heterogeneous or co-processor architectures are becoming an important component of high productivity computing systems (HPCS). In this work the performance of a GPU based HPCS is compared with the performance of a commercially available FPGA based HPC. Contrary to previous approaches that focussed on specific examples, a broader analysis is performed by considering processes at an architectural level. A set of benchmarks is employed that use different process architectures in order to exploit the benefits of each technology. These include the asynchronous pipelines common to "map" tasks, a partially synchronous tree common to "reduce" tasks and a fully synchronous, fully connected mesh. We show that the GPU is more productive than the FPGA architecture for most of the benchmarks and conclude that FPGA-based HPCS is being marginalised by GPUs. |
Intrinsic motivations and open-ended development in animals, humans, and robots: an overview | |
Curvature and torsion estimators based on parametric curve fitting | Many applications of geometry processing and computer vision rely on geometric properties of curves, particularly their curvature. Several methods have already been proposed to estimate the curvature of a planar curve, most of them for curves in digital spaces. This work proposes a new scheme for estimating the curvature and torsion of planar and spatial curves, based on weighted least-squares fitting and local arc-length approximation. The method is simple enough to admit a convergence analysis that takes into account the effect of noise in the samples. The implementation of the method is compared to other curvature estimation methods, showing good performance. Applications to prediction in geometry compression are presented both as a practical application and as a validation of this new scheme. |
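A minimal version of the idea, fit a local parametric polynomial by least squares and read the curvature off its derivatives, is sketched below for the planar case. The window size, the unweighted fit and the chord-length parameterization are simplifying assumptions relative to the paper's weighted, arc-length-based scheme.

```python
import numpy as np

def curvature_estimate(points, window=7, degree=2):
    """Estimate curvature at each sample of a planar curve given as an (N, 2) array."""
    n, half = len(points), window // 2
    kappa = np.full(n, np.nan)
    for i in range(half, n - half):
        nb = points[i - half:i + half + 1]
        # Chord-length parameterization as a cheap stand-in for arc length.
        t = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(nb, axis=0), axis=1))))
        px = np.polyfit(t, nb[:, 0], degree)   # local least-squares fit of x(t)
        py = np.polyfit(t, nb[:, 1], degree)   # local least-squares fit of y(t)
        t0 = t[half]
        dx, dy = np.polyval(np.polyder(px), t0), np.polyval(np.polyder(py), t0)
        ddx, ddy = np.polyval(np.polyder(px, 2), t0), np.polyval(np.polyder(py, 2), t0)
        # Signed curvature of a parametric curve: (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
        kappa[i] = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
    return kappa
```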
Stacked Denoising Autoencoder-Based Deep Collaborative Filtering Using the Change of Similarity | Recommender systems based on deep learning technology pay huge attention recently. In this paper, we propose a collaborative filtering based recommendation algorithm that utilizes the difference of similarities among users derived from different layers in stacked denoising autoencoders. Since different layers in a stacked autoencoder represent the relationships among items with rating at different levels of abstraction, we can expect to make recommendations more novel, various and serendipitous, compared with a normal collaborative filtering using single similarity. The results of experiments using MovieLens dataset show that the proposed recommendation algorithm can improve the diversity of recommendation lists without great loss of accuracy. |
QUOTA: The Quantile Option Architecture for Reinforcement Learning | In this paper, we propose the Quantile Option Architecture (QUOTA) for exploration based on recent advances in distributional reinforcement learning (RL). In QUOTA, decision making is based on quantiles of a value distribution, not only the mean. QUOTA provides a new dimension for exploration via making use of both optimism and pessimism of a value distribution. We demonstrate the performance advantage of QUOTA in both challenging video games and physical robot simulators. |
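A toy illustration of quantile-based action selection (not the full QUOTA option architecture): given estimated return quantiles per action, an "optimistic" or "pessimistic" behaviour simply picks the action maximizing a high or low quantile instead of the mean. The array shapes and the specific quantile indices are assumptions.

```python
import numpy as np

def select_action(quantile_estimates, mode="mean", quantile_index=None):
    """quantile_estimates: (num_actions, num_quantiles) array of return quantiles."""
    if mode == "mean":                       # standard distributional-RL choice
        scores = quantile_estimates.mean(axis=1)
    else:                                    # optimistic/pessimistic exploration
        scores = quantile_estimates[:, quantile_index]
    return int(np.argmax(scores))

q = np.array([[0.0, 1.0, 2.0],   # action 0: same mean as action 1, but large spread
              [0.9, 1.0, 1.1]])  # action 1: same mean, small spread
print(select_action(q, "mean"))             # means tie, argmax returns 0
print(select_action(q, "quantile", 2))      # 0: optimism favours the large upside
print(select_action(q, "quantile", 0))      # 1: pessimism avoids the large downside
```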
SixthSense: a wearable gestural interface | In this note, we present SixthSense, a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. By using a tiny projector and a camera coupled in a pendant like mobile wearable device, SixthSense sees what the user sees and visually augments surfaces, walls or physical objects the user is interacting with; turning them into just-in-time information interfaces. SixthSense attempts to free information from its confines by seamlessly integrating it with the physical world. |
Managing interaction between users and agents in a multi-agent storytelling environment | This paper describes an approach for managing the interaction of human users with computer-controlled agents in an interactive narrative-oriented virtual environment. In these kinds of systems, the freedom of the user to perform whatever action she desires must be balanced with the preservation of the storyline used to control the system's characters. We describe a technique, narrative mediation, that exploits a plan-based model of narrative structure to manage and respond to users' actions inside a virtual world. We define two general classes of response to situations where users execute actions that interfere with story structure: accommodation and intervention. Finally, we specify an architecture that uses these definitions to monitor and automatically characterize user actions, and to compute and implement responses to unanticipated activity. The approach effectively integrates user action and system response into the unfolding narrative, providing for the balance between a user's sense of control within the story world and the user's sense of coherence of the overall narrative. |
Mesh-Based Broadband Home Network Solution: Setup and Experiments | In this paper, we investigate architectural and practical issues related to the setup of a broadband home network solution. Our experience led us to the consideration of a hybrid, wireless and wired, Mesh-Network to enable high data rate service delivery everywhere in the home. We demonstrate the effectiveness of our proposal using a real experimental testbed. This latter consists of a multi-hop mesh network composed of a home gateway and "extenders" supporting several types of physical connectivity including PLC, WiFi, and Ethernet. The solution also includes a layer 2 implementation of the OLSR protocol for path selection. We developed an extension of this protocol for QoS assurance and to enable the proper execution of existing services. We have also implemented a fast WiFi handover algorithm to ensure service continuity in case of user mobility among the extenders inside the home. |
Interval Arithmetic and Interval-Aware Operators for Genetic Programming | Symbolic regression via genetic programming is a flexible approach to machine learning that does not require up-front specification of model structure. However, traditional approaches to symbolic regression require the use of protected operators, which can lead to perverse model characteristics and poor generalisation. In this paper, we revisit interval arithmetic as one possible solution to allow genetic programming to perform regression using unprotected operators. Using standard benchmarks, we show that using interval arithmetic within model evaluation does not prevent invalid solutions from entering the population, meaning that search performance remains compromised. We extend the basic interval arithmetic concept with `safe' search operators that integrate interval information into their process, thereby greatly reducing the number of invalid solutions produced during search. The resulting algorithms are able to more effectively identify good models that generalise well to unseen data. We conclude with an analysis of the sensitivity of interval arithmetic-based operators with respect to the accuracy of the supplied input feature intervals. |
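The basic device, propagating input feature intervals through a candidate expression to detect whether it can ever hit an undefined operation, can be sketched with a tiny interval class. The operator set and naming below are illustrative, not the paper's implementation.

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __mul__(self, o):
        prods = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(prods), max(prods))

    def __truediv__(self, o):
        # An interval-aware operator rejects the candidate expression (rather than
        # silently "protecting" it) if the divisor interval contains zero.
        if o.lo <= 0.0 <= o.hi:
            raise ValueError("divisor interval contains zero: invalid candidate")
        recips = [1.0 / o.lo, 1.0 / o.hi]
        return self * Interval(min(recips), max(recips))

# Example: with x in [1, 2] and y in [-1, 3], the expression x / y is flagged as unsafe.
x, y = Interval(1.0, 2.0), Interval(-1.0, 3.0)
try:
    _ = x / y
except ValueError as e:
    print(e)
```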
The contribution of South African curricula to prepare health professionals for working in rural or under-served areas in South Africa: a peer review evaluation. | SETTING
The Collaboration for Health Equity through Education and Research (CHEER) was formed in 2003 to examine strategies that would increase the production of health professionals who choose to practise in rural and under-served areas in South Africa.
OBJECTIVES
We aimed to identify how each faculty is preparing its students for service in rural or under-served areas.
METHODS
Peer reviews were conducted at all nine participating universities. A case study approach was used, with each peer review constituting its own study but following a common protocol and tools. Each research team comprised at least three reviewers from different universities, and each review was conducted over at least 3 days on site. The participating faculties were assessed on 11 themes, including faculty mission statements, resource allocation, student selection, first exposure of students to rural and under-served areas, length of exposure, practical experience, theoretical input, involvement with the community, relationship with the health service, assessment of students and research and programme evaluation.
RESULTS
With a few exceptions, most themes were assessed as inadequate or adequate with respect to the preparation of students for practice in rural or under-served areas after qualification, despite implicit intentions to the contrary at certain faculties.
CONCLUSIONS
Common challenges, best practices and potential solutions have been identified through this project. Greater priority must be given to supporting rural teaching sites in terms of resources and teaching capacity, in partnership with government agencies. |
Enterprise Architecture Development and Modelling Combining TOGAF and ArchiMate | In current business practice, an integrated approach to business and IT is indispensable. Take for example a company that needs to assess the impact of introducing a new product in its portfolio. This may require defining additional business processes, hiring extra personnel, changing the supporting applications, and augmenting the technological infrastructure to support the additional load of these applications. Perhaps this may even require a change of the organizational structure. |
Accurate Multiple View 3D Reconstruction Using Patch-Based Stereo for Large-Scale Scenes | In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate depth-map at each image with acceptable errors, followed by a depth-map refinement process to enforce consistency over neighboring views. Compared to state-of-the-art methods, the proposed method can reconstruct quite accurate and dense point clouds with high computational efficiency. Besides, the proposed method could be easily parallelized at image level, i.e., each depth-map is computed individually, which makes it suitable for large-scale scene reconstruction with high resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets. |
Control scheme and networked control architecture for the Berkeley lower extremity exoskeleton (BLEEX) | The Berkeley lower extremity exoskeleton (BLEEX) is a load-carrying and energetically autonomous human exoskeleton that, in this first generation prototype, carries up to a 34 kg (75 lb) payload for the pilot and allows the pilot to walk at up to 1.3 m/s (2.9 mph). This article focuses on the human-in-the-loop control scheme and the novel ring-based networked control architecture (ExoNET) that together enable BLEEX to support the payload while safely moving in concert with the human pilot. The BLEEX sensitivity amplification control algorithm proposed here increases the closed-loop system sensitivity to its wearer's forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The tradeoffs between not having sensors to measure human variables, the need for dynamic model accuracy, and robustness to parameter uncertainty are described. ExoNET provides the physical network on which the BLEEX control algorithm runs. The ExoNET control network guarantees strict determinism, optimized data transfer for small data sizes, and flexibility in configuration. Its features and application on BLEEX are described. |
Randomized Trial Comparing Off-Pump to On-Pump Coronary Artery Bypass Grafting in High-Risk Patients (#2003-13903) | Objective: The subset of patients most likely to benefit from off-pump coronary artery bypass grafting (CABG) remains a controversial issue, but the technique has been proposed to decrease postoperative mortality and morbidity. The objective of this study was to compare off-pump to on-pump CABG in patients with known risk factors for mortality and morbidity. Sixty-five high-risk patients were prospectively randomized to undergo off-pump or on-pump CABG. Recruited patients had at least 3 of the following criteria: age greater than 65 years, high blood pressure, diabetes, serum creatinine greater than 133 µmol/L, left ventricular ejection fraction lower than 45%, chronic pulmonary disease, unstable angina, congestive heart failure, repeat CABG, anemia, and carotid atherosclerosis. Hospital mortality and morbidity were the primary end-points of the study. Results: Six patients (9%) crossed over from the original randomized group. Twenty-eight patients averaging 70 ± 8 years of age underwent 3 ± 1 grafts off pump, and 37 patients averaging 70 ± 6 years of age underwent 3.4 ± 1 grafts on pump. Revascularization was considered complete in 21 (75%) of off-pump patients compared to 33 (89%) of on-pump patients (P = .1). There were no hospital deaths in off-pump patients, and 2 patients (5%) undergoing on-pump CABG died early following surgery (P = .2). Two off-pump patients (7%) compared to 11 on-pump patients (30%) presented composite end-points including death, neurological injury, renal failure, respiratory failure, and operative myocardial infarction after CABG (P = .02). Conclusion: The present study suggests that off-pump CABG, when technically feasible, significantly reduces morbidity following surgery in a group of high-risk patients. Coronary artery bypass grafting (CABG) is currently performed with or without the use of the cardiopulmonary bypass system (CPB). Although both techniques are being used with success [Van Dijk 2001], a debate is raging between advocates and opponents of off-pump CABG about patient outcomes and surgical indications for one or the other technique [Bonchek 2002]. Several authors have suggested that off-pump CABG could be especially useful and effective in improving clinical outcomes in high-risk patients who require surgical revascularization [Hoff 2002, Al-Ruzzeh 2003]. Elderly patients and those with significant comorbidities are more susceptible to the deleterious effects of CPB and are most likely to benefit from the use of off-pump CABG compared to the standard on-pump approach [Demaria 2002]. The objective of the present study was to compare outcomes between off-pump CABG and a standard technique of CABG with … |
Robust end-of-utterance detection for real-time speech recognition applications | In this paper we propose a sub-band energy based end-of-utterance algorithm that is capable of detecting the time instant when the user has stopped speaking. The proposed algorithm finds the time instant at which a sufficient number of sub-band spectral energy trajectories fall below adaptive thresholds and stay there for a pre-defined fixed time, i.e. a non-speech period is detected after the end of the utterance. With the proposed algorithm a practical speech recognition system can give timely feedback to the user, thereby making the behaviour of the speech recognition system more predictable and similar across different usage environments and noise conditions. The proposed algorithm is shown to be more accurate and noise robust than previously proposed approaches. Experiments with both isolated command word recognition and continuous digit recognition in various noise conditions verify the viability of the proposed approach, with an average proper end-of-utterance detection rate of around 94% in both cases, representing a 43% error rate reduction over the most competitive previously published method. |
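To make the sub-band idea above concrete, here is a minimal Python sketch, not the paper's exact algorithm: sub-band energies are compared against adaptive thresholds (here a running noise-floor estimate plus a margin), and an end-of-utterance is declared once enough bands stay below their thresholds for a fixed hang-over time. The band count, margin, and frame counts are illustrative assumptions.

```python
# Hedged sketch of sub-band-energy end-of-utterance detection.
import numpy as np

def end_of_utterance(frames, n_bands=8, margin_db=6.0, min_bands=6, hangover_frames=30):
    """frames: 2-D array (n_frames, frame_len) of windowed audio samples."""
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2            # power spectrum per frame
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    band_energy = np.stack([spec[:, edges[b]:edges[b + 1]].sum(axis=1)
                            for b in range(n_bands)], axis=1)   # (n_frames, n_bands)
    energy_db = 10.0 * np.log10(band_energy + 1e-12)

    # adaptive threshold per band: running noise-floor estimate plus a margin
    noise_floor = np.minimum.accumulate(energy_db, axis=0)
    quiet_frames = (energy_db < noise_floor + margin_db).sum(axis=1) >= min_bands

    speech_seen, run = False, 0
    for t, quiet in enumerate(quiet_frames):
        speech_seen = speech_seen or not quiet                  # wait for speech first
        if not speech_seen:
            continue
        run = run + 1 if quiet else 0
        if run >= hangover_frames:                              # quiet long enough
            return t - hangover_frames + 1                      # frame where silence began
    return None                                                 # no end-of-utterance found
```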
CNN- and LSTM-based Claim Classification in Online User Comments | When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks. |
A novel type of compliant and underactuated robotic hand for dexterous grasping | We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanical interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture. |
A Comprehensive Study of Progressive Cytogenetic Alterations in Clear Cell Renal Cell Carcinoma and a New Model for ccRCC Tumorigenesis and Progression | We present a comprehensive study of cytogenetic alterations that occur during the progression of clear cell renal cell carcinoma (ccRCC). We used high-density high-throughput Affymetrix 100 K SNP arrays to obtain the whole genome SNP copy number information from 71 pretreatment tissue samples with RCC tumors; of those, 42 samples were of human ccRCC subtype. We analyzed patterns of cytogenetic loss and gain from different RCC subtypes and in particular, different stages and grades of ccRCC tumors, using a novel algorithm that we have designed. Based on patterns of cytogenetic alterations in chromosomal regions with frequent losses and gains, we inferred the involvement of candidate genes from these regions in ccRCC tumorigenesis and development. We then proposed a new model of ccRCC tumorigenesis and progression. Our study serves as a comprehensive overview of cytogenetic alterations in a collection of 572 ccRCC tumors from diversified studies and should facilitate the search for specific genes associated with the disease. |
Inception Recurrent Convolutional Neural Network for Object Recognition | Deep convolutional neural networks (DCNNs) are an influential tool for solving various problems in the machine learning and computer vision fields. In this paper, we introduce a new deep learning model called an Inception-Recurrent Convolutional Neural Network (IRCNN), which utilizes the power of an inception network combined with recurrent layers in a DCNN architecture. We have empirically evaluated the recognition performance of the proposed IRCNN model using different benchmark datasets such as MNIST, CIFAR-10, CIFAR-100, and SVHN. Experimental results show similar or higher recognition accuracy when compared to most of the popular DCNNs including the RCNN. Furthermore, we have investigated IRCNN performance against equivalent Inception Networks and Inception-Residual Networks using the CIFAR-100 dataset. We report about 3.5%, 3.47% and 2.54% improvement in classification accuracy when compared to the RCNN, equivalent Inception Networks, and Inception-Residual Networks on the augmented CIFAR-100 dataset respectively. |
Detect Rumor and Stance Jointly by Neural Multi-task Learning | In recent years, an unhealthy phenomenon characterized as the massive spread of fake news or unverified information (i.e., rumors) has increasingly become a daunting issue in human society. The rumors commonly originate from social media outlets, primarily microblogging platforms, going viral afterwards through the wild, willful propagation via a large number of participants. It is observed that rumorous posts often trigger versatile, mostly controversial stances among participating users. Thus, determining the stances on the posts in question can be pertinent to the successful detection of rumors, and vice versa. Existing studies, however, mainly regard rumor detection and stance classification as separate tasks. In this paper, we argue that they should be treated as a joint, collaborative effort, considering the strong connections between the veracity of a claim and the stances expressed in responsive posts. Enlightened by the multi-task learning scheme, we propose a joint framework that unifies the two highly pertinent tasks, i.e., rumor detection and stance classification. Based on deep neural networks, we train both tasks jointly using weight sharing to extract the common and task-invariant features while each task can still learn its task-specific features. Extensive experiments on real-world datasets gathered from Twitter and news portals demonstrate that our proposed framework improves both rumor detection and stance classification tasks consistently with the help of the strong inter-task connections, achieving much better performance than state-of-the-art methods. |
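The following PyTorch sketch is illustrative only, not the authors' architecture: a shared text encoder (hard parameter sharing) feeds two task-specific heads, and the two cross-entropy losses are combined so that rumor detection and stance classification are trained jointly. Layer sizes, the GRU encoder, and the loss weighting are assumptions.

```python
# Minimal multi-task model with a shared encoder and two task heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointRumorStance(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64,
                 n_rumor_classes=2, n_stance_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)   # shared layers
        self.rumor_head = nn.Linear(hidden, n_rumor_classes)       # task-specific head
        self.stance_head = nn.Linear(hidden, n_stance_classes)     # task-specific head

    def forward(self, tokens):                    # tokens: (batch, seq_len) int ids
        _, h = self.encoder(self.embed(tokens))   # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.rumor_head(h), self.stance_head(h)

def joint_loss(model, tokens, rumor_y, stance_y, alpha=0.5):
    rumor_logits, stance_logits = model(tokens)
    return (alpha * F.cross_entropy(rumor_logits, rumor_y)
            + (1 - alpha) * F.cross_entropy(stance_logits, stance_y))
```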
Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning. | Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced mammography quality standards act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction. |
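A minimal sketch of the percentage-density computation described above: given a probability map of dense tissue (PMD) and a breast mask, PD is the ratio of dense area to breast area. The 0.5 probability threshold is an illustrative assumption, not a value from the study.

```python
# Percentage density from a dense-tissue probability map, assuming a 0.5 threshold.
import numpy as np

def percent_density(pmd, breast_mask, threshold=0.5):
    """pmd: 2-D array of per-pixel dense-tissue probabilities; breast_mask: boolean array."""
    dense = (pmd >= threshold) & breast_mask
    return 100.0 * dense.sum() / max(breast_mask.sum(), 1)
```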
3'-minor groove binder-DNA probes increase sequence specificity at PCR extension temperatures. | DNA probes with conjugated minor groove binder (MGB) groups form extremely stable duplexes with single-stranded DNA targets, allowing shorter probes to be used for hybridization based assays. In this paper, sequence specificity of 3'-MGB probes was explored. In comparison with unmodified DNA, MGB probes had higher melting temperature (T(m)) and increased specificity, especially when a mismatch was in the MGB region of the duplex. To exploit these properties, fluorogenic MGB probes were prepared and investigated in the 5'-nuclease PCR assay (real-time PCR assay, TaqMan assay). A 12mer MGB probe had the same T(m) (65 degrees C) as a no-MGB 27mer probe. The fluorogenic MGB probes were more specific for single base mismatches and fluorescence quenching was more efficient, giving increased sensitivity. A/T rich duplexes were stabilized more than G/C rich duplexes, thereby leveling probe T(m) and simplifying design. In summary, MGB probes were more sequence specific than standard DNA probes, especially for single base mismatches at elevated hybridization temperatures. |
Lifetime cost-effectiveness of skin cancer prevention through promotion of daily sunscreen use. | OBJECTIVES
Health-care costs for the treatment of skin cancers are disproportionately high in many white populations, yet they can be reduced through the promotion of sun-protective behaviors. We investigated the lifetime health costs and benefits of sunscreen promotion in the primary prevention of skin cancers, including melanoma.
METHODS
A decision-analytic model with Markov chains was used to integrate data from a central community-based randomized controlled trial conducted in Australia and other epidemiological and published sources. Incremental cost per quality-adjusted life-year was the primary outcome. Extensive one-way and probabilistic sensitivity analyses were performed to test the uncertainty in the base findings with plausible variation to the model parameters. A schematic sketch of such a cohort model is given after this abstract.
RESULTS
Using a combined household and government perspective, the discounted incremental cost per quality-adjusted life-year gained from the sunscreen intervention was AU$40,890. Over the projected lifetime of the intervention cohort, this would prevent 33 melanomas, 168 cutaneous squamous-cell carcinomas, and 4 melanoma-deaths at a cost of approximately AU$808,000. The likelihood that the sunscreen intervention was cost-effective was 64% at a willingness-to-pay threshold of AU$50,000 per quality-adjusted life-year gained.
CONCLUSIONS
Subject to the best-available evidence depicted in our model, the active promotion of routine sunscreen use to white populations residing in sunny settings is likely to be a cost-effective investment for governments and consumers over the long term. |
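The decision-analytic Markov model and incremental cost-per-QALY outcome described in the Methods above lend themselves to a compact cohort simulation. The following Python sketch is purely schematic: the health states, transition probabilities, costs, and utilities are invented placeholders rather than values from the study; only the mechanics of discounting, accumulating costs and QALYs per arm, and forming the ICER are illustrated.

```python
# Schematic Markov cohort model and ICER calculation (all numbers are placeholders).
import numpy as np

def run_cohort(trans, state_cost, state_utility, cycles=50, discount=0.03):
    """trans: (n_states, n_states) per-cycle transition matrix; returns (cost, QALYs)."""
    dist = np.zeros(trans.shape[0]); dist[0] = 1.0           # cohort starts in state 0
    cost = qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t                       # discount factor
        cost += d * dist @ state_cost
        qaly += d * dist @ state_utility
        dist = dist @ trans                                   # advance one cycle
    return cost, qaly

# hypothetical states: well, skin cancer, post-cancer, dead
trans_control = np.array([[0.96, 0.03, 0.00, 0.01],
                          [0.00, 0.70, 0.28, 0.02],
                          [0.00, 0.00, 0.985, 0.015],
                          [0.00, 0.00, 0.00, 1.00]])
trans_sunscreen = np.array([[0.975, 0.015, 0.00, 0.01],
                            [0.00, 0.70, 0.28, 0.02],
                            [0.00, 0.00, 0.985, 0.015],
                            [0.00, 0.00, 0.00, 1.00]])
cost = np.array([0.0, 8000.0, 500.0, 0.0])                    # per-cycle costs (placeholder)
cost_sun = cost + np.array([40.0, 40.0, 40.0, 0.0])           # add programme cost per cycle
util = np.array([1.0, 0.8, 0.95, 0.0])                        # per-cycle utilities (placeholder)

c0, q0 = run_cohort(trans_control, cost, util)
c1, q1 = run_cohort(trans_sunscreen, cost_sun, util)
print("ICER (cost per QALY gained):", (c1 - c0) / (q1 - q0))
```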
Intravitreal triamcinolone acetonide in sympathetic ophthalmia | To report the result of intravitreal triamcinolone acetonide in the treatment of sympathetic ophthalmia. A 29-year-old woman who suffered from sympathetic ophthalmia and who was being treated with systemic corticosteroid therapy received an intravitreal injection of 4 mg of triamcinolone acetonide. By the 15th day after injection visual acuity had improved from 20/200 to 20/40 and serous retinal detachment had almost completely resorbed. Systemic corticosteroid therapy was reduced sequentially. By the third month after injection, the patient was in clinical remission. Her visual acuity was 20/20 and no serous detachment was observed. In this study, short-term improvement in the clinical picture of a patient with sympathetic ophthalmia after intravitreal triamcinolone acetonide injection was described. The results suggest that intravitreal triamcinolone acetonide injection may be an additional tool in the treatment of sympathetic ophthalmia. |
VSL#3 probiotics exerts the anti-inflammatory activity via PI3k/Akt and NF-κB pathway in rat model of DSS-induced colitis | VSL#3 probiotics can be effective in inducing and maintaining remission of clinical ulcerative colitis. However, the mechanisms are not fully understood. The aim of this study was to examine the effects of VSL#3 probiotics on dextran sulfate sodium (DSS)-induced colitis in rats. Acute colitis was induced by administration of 3.5% DSS for 7 days in rats. Rats in two groups were treated with either 15 mg VSL#3 or placebo via gastric tube once daily after induction of colitis; rats in the other two groups were treated with either wortmannin (1 mg/kg) via intraperitoneal injection or wortmannin + VSL#3 after induction of colitis. Anti-inflammatory activity was assessed by myeloperoxidase (MPO) activity. Expression of inflammation-related mediators (iNOS, COX-2, NF-κB, Akt, and p-Akt) and cytokines (TNF-α, IL-6, and IL-10) in colonic tissue was assessed. TNF-α, IL-6, and IL-10 serum levels were also measured. Our results demonstrated that VSL#3 and wortmannin have anti-inflammatory properties, as shown by the reduced disease activity index and MPO activity. In addition, administration of VSL#3 and wortmannin for 7 days resulted in a decrease of iNOS, COX-2, NF-κB, TNF-α, IL-6, and p-Akt and an increase of IL-10 expression in colonic tissue. At the same time, administration of VSL#3 and wortmannin resulted in a decrease of TNF-α and IL-6 and an increase of IL-10 serum levels. VSL#3 probiotics therapy thus exerts anti-inflammatory activity in this rat model of DSS-induced colitis by inhibiting the PI3K/Akt and NF-κB pathways. |
A Broad-Coverage Normalization System for Social Media Language | Social media language contains a huge amount and a wide variety of nonstandard tokens, created both intentionally and unintentionally by the users. It is of crucial importance to normalize the noisy nonstandard tokens before applying other NLP techniques. A major challenge facing this task is the system coverage, i.e., for any user-created nonstandard term, the system should be able to restore the correct word within its top n output candidates. In this paper, we propose a cognitively-driven normalization system that integrates different human perspectives in normalizing the nonstandard tokens, including the enhanced letter transformation, visual priming, and string/phonetic similarity. The system was evaluated at both the word and message level using four SMS and Twitter data sets. Results show that our system achieves over 90% word-coverage across all data sets (a 10% absolute increase compared to state-of-the-art); the broad word-coverage can also successfully translate into message-level performance gain, yielding a 6% absolute increase compared to the best prior approach. |
Multi-sensor Self-Quantification of Presentations | Presentations have been an effective means of delivering information to groups for ages. Over the past few decades, technological advancements have revolutionized the way humans deliver presentations. Despite that, the quality of presentations varies and is affected by a variety of factors. Conventional presentation evaluation usually requires painstaking manual analysis by experts. Although expert feedback can certainly assist users in improving their presentation skills, manual evaluation suffers from high cost and is often not accessible to most people. In this work, we propose a novel multi-sensor self-quantification framework for presentations. Utilizing conventional ambient sensors (i.e., static cameras, Kinect sensor) and the emerging wearable egocentric sensors (i.e., Google Glass), we first analyze the efficacy of each type of sensor with various nonverbal assessment rubrics, which is followed by our proposed multi-sensor presentation analytics framework. The proposed framework is evaluated on a new presentation dataset, the NUS Multi-Sensor Presentation (NUSMSP) dataset, which consists of 51 presentations covering a diverse set of topics. The dataset was recorded with ambient static cameras, a Kinect sensor, and Google Glass. In addition to multi-sensor analytics, we have conducted a user study with the speakers to verify the effectiveness of our system-generated analytics, which received positive and promising feedback. |
Carbon Dioxide Monitoring in Air Pollution by Adabas and Natural Software Based on CO_2 Deposition Spatial Analysis in Forest Cover | A system for spatial analysis of carbon deposition on forest cover using ADABAS and Natural software is proposed. The system enables automatic updating of forest biomass plot data and National Forest Inventory System (NFIS) data, synchronized with an interactive map of the territorial distribution of forest-cover carbon. The amount of carbon emitted to or absorbed from the atmosphere is determined as the difference between the change in deposited carbon and the change in its atmospheric concentration over a given time interval. This makes it possible to monitor air pollution by carbon dioxide and other greenhouse gases. |
A GPU-based WFST Decoder with Exact Lattice Generation | We describe initial work on an extension of the Kaldi toolkit that supports weighted finite-state transducer (WFST) decoding on Graphics Processing Units (GPUs). We implement token recombination as an atomic GPU operation in order to fully parallelize the Viterbi beam search, and propose a dynamic load balancing strategy for more efficient token-passing scheduling among GPU threads. We also redesign the exact lattice generation and lattice pruning algorithms for better utilization of the GPUs. Experiments on the Switchboard corpus show that the proposed method achieves identical 1-best results and lattice quality in recognition and confidence measure tasks, while running 3 to 15 times faster than the single-process Kaldi decoder. The above results are reported on different GPU architectures. Additionally, we obtain a 46-fold speedup with sequence parallelism and multi-process service (MPS) on the GPU. |
Melody Extraction on Vocal Segments Using Multi-Column Deep Neural Networks | Singing melody extraction is a task that tracks pitch contour of singing voice in polyphonic music. While the majority of melody extraction algorithms are based on computing a saliency function of pitch candidates or separating the melody source from the mixture, data-driven approaches based on classification have been rarely explored. In this paper, we present a classification-based approach for melody extraction on vocal segments using multi-column deep neural networks. In the proposed model, each of neural networks is trained to predict a pitch label of singing voice from spectrogram, but their outputs have different pitch resolutions. The final melody contour is inferred by combining the outputs of the networks and post-processing it with a hidden Markov model. In order to take advantage of the data-driven approach, we also augment training data by pitch-shifting the audio content and modifying the pitch label accordingly. We use the RWC dataset and vocal tracks of the MedleyDB dataset for training the model and evaluate it on the ADC 2004, MIREX 2005 and MIR-1k datasets. Through several settings of experiments, we show incremental improvements of the melody prediction. Lastly, we compare our best result to those of previous state-of-the-arts. |
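The post-processing stage described above (combining the networks' outputs and smoothing with a hidden Markov model) can be sketched as a Viterbi pass over framewise pitch posteriors. This is a hedged illustration, not the paper's exact model: the Gaussian transition prior penalizing large pitch jumps and its width are assumptions.

```python
# Viterbi smoothing of framewise pitch posteriors with a pitch-jump penalty.
import numpy as np

def viterbi_pitch(posteriors, jump_std=2.0):
    """posteriors: (n_frames, n_pitch_bins) framewise pitch probabilities."""
    n_frames, n_bins = posteriors.shape
    bins = np.arange(n_bins)
    trans = np.exp(-0.5 * ((bins[:, None] - bins[None, :]) / jump_std) ** 2)
    log_trans = np.log(trans / trans.sum(axis=1, keepdims=True))
    log_obs = np.log(posteriors + 1e-12)

    score = log_obs[0].copy()
    back = np.zeros((n_frames, n_bins), dtype=int)
    for t in range(1, n_frames):
        cand = score[:, None] + log_trans            # (prev_bin, cur_bin)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_obs[t]

    path = np.zeros(n_frames, dtype=int)
    path[-1] = score.argmax()
    for t in range(n_frames - 1, 0, -1):             # backtrace the best pitch sequence
        path[t - 1] = back[t, path[t]]
    return path                                      # smoothed pitch-bin index per frame
```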
Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems | The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors have evolved of IoT, cloud computing, and so-called fog computing, a concept referring to capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as a smart scenario, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, as well as supported by the unstoppable technology evolution. As edge devices become richer in functionality and smarter, embedding capacities such as storage or processing, as well as new functionalities, such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, as well as the arising open and research challenges, making the case for the real need for their coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, as well as conventional clouds. |
A mobile application of augmented reality for aerospace maintenance training | Aircraft maintenance technicians (AMTs) must obtain new levels of job task skill and knowledge to effectively work with modern computer-based avionics and advanced composite materials. Traditional methods of training, such as on-the-job training (OJT), may not have the potential to fulfill the training requirements to meet future trends in aviation maintenance. A new instruction delivery system could assist AMTs with job task training and job tasks. The purpose of this research is to analyze the use of an augmented reality (AR) system as a training medium for novice AMTs. An AR system has the potential to enable job task training and job task guidance for the novice technician in a real-world environment. An AR system could reduce the cost for training and retraining of AMTs by complementing human information processing and assisting with performance of job tasks. An AR system could eliminate the need to leave the aircraft for the retrieval of information from maintenance manuals for inspection and repair procedures. AR has the potential to supply rapid and accurate feedback to an AMT with any information that he/she needs to successfully complete a job task. New technologies that promote smaller computer-based systems make the application of a mobile AR system possible in the near future. |
Generalized Zero-Shot Recognition based on Visually Semantic Embedding | We propose a novel Generalized Zero-Shot learning (GZSL) method that is agnostic to both unseen images and unseen semantic vectors during training. Prior works in this context propose to map high-dimensional visual features to the semantic domain, which we believe contributes to the semantic gap. To bridge the gap, we propose a novel low-dimensional embedding of visual instances that is “visually semantic.” Analogous to semantic data that quantifies the existence of an attribute in the presented instance, components of our visual embedding quantify the existence of a prototypical part-type in the presented instance. In parallel, as a thought experiment, we quantify the impact of noisy semantic data by utilizing a novel visual oracle to visually supervise a learner. These factors, namely semantic noise, the visual-semantic gap, and label noise, lead us to propose a new graphical model for inference with pairwise interactions between label, semantic data, and inputs. We tabulate results on a number of benchmark datasets demonstrating significant improvement in accuracy over the state of the art under both semantic and visual supervision. |
Extracting relevant knowledge for the detection of sarcasm and nastiness in the social web | Automatic detection of emotions like sarcasm or nastiness in online written conversation is a difficult task. It requires a system that can manage some kind of knowledge to interpret that emotional language is being used. In this work, we try to provide this knowledge to the system by considering alternative sets of features obtained according to different criteria. We test a range of different feature sets using two different classifiers. Our results show that the sarcasm detection task benefits from the inclusion of linguistic and semantic information sources, while nasty language is more easily detected using only a set of surface patterns or indicators. |
When Personalization Meets Conformity: Collective Similarity based Multi-Domain Recommendation | Existing recommender systems place emphasis on personalization to achieve promising accuracy. However, in the context of multiple domains, users are likely to follow the behaviors of domain authorities. This conformity effect provides a wealth of prior knowledge when it comes to multi-domain recommendation, but has not been fully exploited. In particular, users whose behaviors are significantly similar to public tastes can be viewed as domain authorities. To detect these users while embedding conformity into recommendation, a domain-specific similarity matrix is employed. A collective similarity is thereby obtained that combines conformity with personalization. In this paper, we establish a Collective Structure Sparse Representation (CSSR) method for multi-domain recommendation. Based on an adaptive k-Nearest-Neighbor framework, we impose lasso and group-lasso penalties together with a least-squares loss to jointly optimize the collective similarity. Experimental results on real-world data confirm the effectiveness of the proposed method. |
The End of Elsewhere: Writing Modernity Now | LET ME CONFESS MY BIASES at the start: In my view, modernity is not a trope, theory, project, or destination, or if it sometimes seems to be all these things, it is never these things alone. It is instead a condition, historically produced over three centuries around the globe in processes of change that have not ended yet.1 Modernity is not optional in history, in that societies could not simply “choose” another regime of historicity for themselves, for such is the tyranny of modern times.2 Nor is modernity dispensable in history-writing, especially for those who work on the recent past in what some still call “the rest of the world,” which many now would emend to “the world,” period. While not unitary or universal, the modern possesses commonalities across time and space, however differently it is experienced in different places. These commonalities are substantial enough to render impossible any truly “alternative” modernities, as attractive as such an idea may be to critics of Eurocentric models masquerading as universal norms, of whom I am one. The notion of “multiple modernities,” too, only helps to shift our attention away from singularity to the plural inflections of the modern experience, which is importantly diverse but not endlessly multiple: it is sad but true that not every country gets the modernity it wants or deserves.3 The common “grammar of modernity” encompasses such elements as the nationstate, whose numbers have proliferated from fifty at the beginning of the twentieth century to nearly two hundred today, and which appears not likely soon to dissolve, globalization notwithstanding.4 Other institutional commonalities include social shifts in massified urban and disrupted communal life and the insistence, if some- |
Conceptual domain of the matrix in fragmented landscapes. | In extensively modified landscapes, how the matrix is managed determines many conservation outcomes. Recent publications revise popular conceptions of a homogeneous and static matrix, yet we still lack an adequate conceptual model of the matrix. Here, we identify three core effects that influence patch-dependent species, through impacts associated with movement and dispersal, resource availability, and the abiotic environment. These core effects are modified by five 'dimensions': spatial and temporal variation in matrix quality; spatial scale; temporal scale of matrix variation; and adaptation. The conceptual domain of the matrix, defined as three core effects and their interaction with these five dimensions, provides a much-needed framework to underpin management of fragmented landscapes and highlights new research priorities. |
Configuring role-based access control to enforce mandatory and discretionary access control policies | Access control models have traditionally included mandatory access control (or lattice-based access control) and discretionary access control. Subsequently, role-based access control has been introduced, along with claims that its mechanisms are general enough to simulate the traditional methods. In this paper we provide systematic constructions for various common forms of both of the traditional access control paradigms using the role-based access control (RBAC) models of Sandhu et al., commonly called RBAC96. We see that all of the features of the RBAC96 model are required, and that although for the mandatory access control simulation, only one administrative role needs to be assumed, for the discretionary access control simulations, a complex set of administrative roles is required. |
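As a toy illustration of the lattice-based (mandatory) simulation idea, and not the paper's full RBAC96 construction, the sketch below pairs each security level with a read role and a write role: read roles inherit downward (read-down allowed, no read-up) while write roles inherit upward (liberal *-property, no write-down). The levels and checks are invented for demonstration.

```python
# Toy read/write-role simulation of a totally ordered security lattice.
LEVELS = ["Unclassified", "Confidential", "Secret"]

def read_roles(level):
    """Roles a subject cleared at `level` holds: read at its level and below."""
    return {f"{l}R" for l in LEVELS[: LEVELS.index(level) + 1]}

def write_roles(level):
    """Roles a subject cleared at `level` holds: write at its level and above."""
    return {f"{l}W" for l in LEVELS[LEVELS.index(level):]}

def can_read(subject_level, object_level):
    return f"{object_level}R" in read_roles(subject_level)

def can_write(subject_level, object_level):
    return f"{object_level}W" in write_roles(subject_level)

assert can_read("Secret", "Unclassified")        # read-down allowed
assert not can_read("Confidential", "Secret")    # no read-up
assert can_write("Confidential", "Secret")       # write-up allowed (liberal *-property)
assert not can_write("Secret", "Confidential")   # no write-down
```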
Theoretical study of aerobic oxidation of alcohols over Au38 nanocluster by a two-step-modeling approach | Au nanoclusters, stabilized by PVP, catalyze the aerobic oxidation of p-HBA. In this study, its principal reaction route was analyzed using Au38. First, the arrangements of substrates, including ligands, were estimated. Next, these ligands were replaced with a negative point charge of −1, and each system was re-optimized. The results obtained from these models exhibited the same tendency for the two reaction pathways. By estimating the electron density transfers between the substrates and oxidants, it was suggested that the electron affinity of the active oxygen species and Au NC is a good index for determining the main reaction pathways. |
Reputation and Corporate Tax Planning : A Moral Licensing View Prepared for the 2017 Lone Star Accounting Research Conference | This study examines how a firm’s overall reputation status (reputation hereafter) affects its tax planning. Drawing on the moral licensing theory, we posit that managers’ and other stakeholders’ perception of a firm’s questionable behavior may be affected by the firm’s reputation and that a good reputation may help a firm to justify, or “license”, such behavior. This licensing effect may reduce a firm’s concerns about its tax avoidance behavior and incentivize reputable firms to engage in more tax reduction activities that have ambiguities in transgression. The empirical findings support our conjecture. Specifically, we test the association between a firm’s established reputation and its tax planning using multiple tax avoidance measures, which capture different tax reduction technologies that either fall into the gray area or violate tax and financial reporting rules. Relative to less reputable firms, more reputable firms on average avoid more taxes by using tax reduction technologies that have ambiguity in transgression, but are less likely to engage in tax-related activities that are blatant transgressions. We further investigate whether the licensing effect of reputation is more pronounced under the more principles-based or rules-based standards. Our findings suggest that the licensing effect is more pronounced under the more principles-based standards. |
Text mining in a digital library | Digital librarians strive to add value to the collections they create and maintain. One way is through selectivity: a carefully chosen set of authoritative documents in a particular topic area is far more useful to those working in the area than a huge, unfocused collection (like the Web). Another is by augmenting the collection with highquality metadata, which supports activities of searching and browsing in a uniform and useful way. A third way, and our topic here, is to enrich the documents by examining their content, extracting information, and using it to enhance the ways they can be located and presented. Text mining is a burgeoning new field that attempts to glean meaningful information from natural-language text. It may be loosely characterized as the process of analyzing text to extract information that is useful for particular purposes. It most commonly targets text whose function is the communication of factual information or opinions, and the motivation for trying to extract information from such text automatically is compelling – even if success is only partial. “Text mining” (sometimes called “text data mining”; [4]) defies tight definition but encompasses a wide range of activities: text summarization; document retrieval; document clustering; text categorization; language identification; authorship ascription; identifying phrases, phrase structures, and key phrases; extracting “entities” such as names, dates, and abbreviations; locating acronyms and their definitions; filling predefined templates with extracted information; and even learning rules from such templates [8]. Techniques of text mining have much to offer digital libraries and their users. Here we describe the marriage of a widely used digital library system (Greenstone) with a development environment for text mining (GATE) to enrich the library reader’s experience. The work is in progress: one level of integration has been demonstrated and another is planned. The project has been greatly facilitated by the fact that both systems are publicly available under the GNU public license – and, in addition, this means that the benefits gained by leveraging text mining techniques will accrue to all Greenstone users. |
Atom-by-atom spectroscopy at graphene edge | The properties of many nanoscale devices are sensitive to local atomic configurations, and so elemental identification and electronic state analysis at the scale of individual atoms is becoming increasingly important. For example, graphene is regarded as a promising candidate for future devices, and the electronic properties of nanodevices constructed from this material are in large part governed by the edge structures. The atomic configurations at graphene boundaries have been investigated by transmission electron microscopy and scanning tunnelling microscopy, but the electronic properties of these edge states have not yet been determined with atomic resolution. Whereas simple elemental analysis at the level of single atoms can now be achieved by means of annular dark field imaging or electron energy-loss spectroscopy, obtaining fine-structure spectroscopic information about individual light atoms such as those of carbon has been hampered by a combination of extremely weak signals and specimen damage by the electron beam. Here we overcome these difficulties to demonstrate site-specific single-atom spectroscopy at a graphene boundary, enabling direct investigation of the electronic and bonding structures of the edge atoms—in particular, discrimination of single-, double- and triple-coordinated carbon atoms is achieved with atomic resolution. By demonstrating how rich chemical information can be obtained from single atoms through energy-loss near-edge fine-structure analysis, our results should open the way to exploring the local electronic structures of various nanodevices and individual molecules. |
Markov Chains for Exploring Posterior Distributions |
Computation of configuration-space obstacles using the fast Fourier transform | This paper presents a new method for computing the configuration-space map of obstacles that is used in motion-planning algorithms. The method derives from the observation that, when the robot is a rigid object that can only translate, the configuration space is a convolution of the workspace and the robot. This convolution is computed with the use of the Fast Fourier Transform (FFT) algorithm. The method is particularly promising for workspaces with many and/or complicated obstacles, or when the shape of the robot is not simple. It is an inherently parallel method that can significantly benefit from existing experience and hardware on the FFT. |
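A minimal numpy sketch of the observation above: for a translating rigid robot, the configuration-space obstacle map is the convolution of the workspace occupancy grid with the reflected robot footprint, which can be computed with the FFT. The grid sizes, the reference-point convention implied by the crop, and the 0.5 overlap threshold are illustrative assumptions.

```python
# C-space obstacles for a translating rigid robot via FFT-based convolution.
import numpy as np

def cspace_obstacles(workspace, robot):
    """workspace, robot: 2-D 0/1 occupancy grids; returns a boolean C-obstacle grid."""
    H, W = workspace.shape
    kernel = np.flip(robot)                          # reflect the robot about its origin
    pad = (H + kernel.shape[0] - 1, W + kernel.shape[1] - 1)
    conv = np.fft.irfft2(np.fft.rfft2(workspace, pad) * np.fft.rfft2(kernel, pad), pad)
    return conv[:H, :W] > 0.5                        # any overlap marks a forbidden placement

workspace = np.zeros((64, 64)); workspace[20:30, 20:30] = 1   # one square obstacle
robot = np.ones((5, 5))                                        # 5x5 square robot footprint
print(cspace_obstacles(workspace, robot).sum(), "forbidden reference positions")
```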
Electric Load Forecasting Using An Artificial Neural Network | This paper presents an artificial neural network (ANN) approach to electric load forecasting. The ANN is used to learn the relationship among past, current and future temperatures and loads. In order to provide the forecasted load, the ANN interpolates among the load and temperature data in a training data set. The average absolute errors of the one-hour and 24-hour ahead forecasts in our test on actual utility data are shown to be 1.40% and 2.06%, respectively. This compares with an average error of 4.22% for 24-hour ahead forecasts with a currently used forecasting technique applied to the same data. |
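In the spirit of the approach above, the sketch below trains a small feed-forward network that maps recent loads and temperatures to the next hour's load. It is illustrative only: the synthetic data, the 24-hour lag window, and the network size are assumptions, not the paper's setup.

```python
# Toy one-hour-ahead load forecast with a small neural network (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 200)
temp = 10 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = 50 + 0.8 * temp + 10 * np.sin(2 * np.pi * hours / 24 - 1) + rng.normal(0, 2, hours.size)

lags = 24                                       # previous 24 hours of load as context
X = np.stack([np.concatenate([load[t - lags:t], temp[t - lags:t + 1]])
              for t in range(lags, hours.size - 1)])
y = load[lags: hours.size - 1]                  # load at hour t, given history and temp(t)

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
err = np.abs(model.predict(X[split:]) - y[split:]) / y[split:]
print("mean absolute percentage error: %.2f%%" % (100 * err.mean()))
```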
ArchWare: Architecting Evolvable Software | This paper gives an overview of the ArchWare European Project1. The broad scope of ArchWare is to respond to the ever-present demand for software systems that are capable of accommodating change over their lifetime, and therefore are evolvable. In order to achieve this goal, ArchWare develops an integrated set of architecture-centric languages and tools for the modeldriven engineering of evolvable software systems based on a persistent run-time framework. The ArchWare Integrated Development Environment comprises: (a) innovative formal architecture description, analysis, and refinement languages for describing the architecture of evolvable software systems, verifying their properties and expressing their refinements; (b) tools to support architecture description, analysis, and refinement as well as code generation; (c) enactable processes for supporting model-driven software engineering; (d) a persistent run-time framework including a virtual machine for process enactment. It has been developed using ArchWare itself and is available as Open Source Software. |
Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning | Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3. |
Job Satisfaction in Britain : Individual and Job Related Factors | Recently there has been a resurgence of interest in the analysis of job satisfaction variables. Job satisfaction is correlated with labor market behavior such as productivity, quits and absenteeism. Recent work examined job satisfaction in relation to various factors. In this paper four different measures of job satisfaction are related to a variety of personal and job characteristics. We use a unique data set of 28,240 British employees from the Workplace Employee Relations Survey (WERS97). Our data set is larger and more recent than those in previous studies. The four measures of job satisfaction considered are satisfaction with influence over the job, satisfaction with amount of pay, satisfaction with sense of achievement and satisfaction with respect from supervisors. Although the job satisfaction measures we use are somewhat different from those previously used in the literature, a number of results that are commonly obtained with international data are found to hold in our data set as well. |
The effect of Cognitive Style Analysis (CSA) test on achievement: A meta-analytic review | Abstract Riding's (1991) Cognitive Style Analysis test has been a popular UK test of the verbal–imagery and wholistic–analytic cognitive style dimensions. Researchers have claimed numerous academic advantages for these style dimensions, but the direction of the effects is inconsistent. A meta-analysis was therefore conducted that used the CSA along with an academic outcome measure. While 25 studies met the inclusion criteria, only 15 provided sufficient information for statistical analysis. The studies included five experimental and 10 observational studies and involved participants in primary, secondary and tertiary institutions. The results show no clear academic achievement advantage for either the verbal–imagery cognitive style dimension or the wholistic–analytic cognitive style dimension, regardless of the learning task or environment. While the final sample of included studies was relatively small the results of this meta-analysis suggest that the relationship between the CSA's cognitive style dimensions and academic achievement, if any, is likely to be negligible. |
Optimal Adaptive Cruise Control with Guaranteed String Stability | A two-level Adaptive Cruise Control (ACC) synthesis method is presented in this paper. At the upper level, desired vehicle acceleration is computed based on vehicle range and range rate measurement. At the lower (servo) level, an adaptive control algorithm is designed to ensure the vehicle follows the upper level acceleration command accurately. It is shown that the servo-level dynamics can be included in the overall design and string stability can be guaranteed. In other words, the proposed control design produces minimum negative impact on surrounding vehicles. The performance of the proposed ACC algorithm is examined by using a microscopic simulation program—ACCSIM created at the University of Michigan. The architecture and basic functions of ACCSIM are described in this paper. Simulation results under different ACC penetration rate and actuator/engine bandwidth are reported. |
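The two-level structure described above can be sketched generically: an upper level turns range and range-rate measurements into a desired acceleration, and a servo level (here reduced to trivial first-order lag dynamics) tracks that command. The gains, time headway, and lag below are illustrative choices, not the paper's optimal synthesis or its string-stability proof.

```python
# Generic two-level ACC sketch: spacing controller on top of first-order servo dynamics.
def upper_level(range_m, range_rate, ego_speed, headway=1.5, k1=0.3, k2=0.8):
    desired_gap = 5.0 + headway * ego_speed              # standstill gap + time headway
    return k1 * (range_m - desired_gap) + k2 * range_rate

def simulate(steps=600, dt=0.1, servo_lag=0.5):
    lead_speed, ego_speed, gap, accel = 20.0, 25.0, 60.0, 0.0
    for _ in range(steps):
        a_cmd = upper_level(gap, lead_speed - ego_speed, ego_speed)
        accel += dt / servo_lag * (a_cmd - accel)        # first-order servo response
        ego_speed = max(ego_speed + accel * dt, 0.0)
        gap += (lead_speed - ego_speed) * dt
    return gap, ego_speed

print("steady-state gap %.1f m at %.1f m/s" % simulate())
```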
Deep-Spying: Spying using Smartwatch and Deep Learning | Wearable technologies are today on the rise, becoming more common and broadly available to mainstream users. In fact, wristband and armband devices such as smartwatches and fitness trackers have already taken an important place in the consumer electronics market and are becoming ubiquitous. By their very nature of being wearable, however, these devices provide a new pervasive attack surface that threatens users' privacy, among other things. In the meantime, advances in machine learning are providing unprecedented possibilities to process complex data efficiently, allowing patterns to emerge from high-dimensional, unavoidably noisy data. The goal of this work is to raise awareness about the potential risks related to motion sensors built into wearable devices and to demonstrate abuse opportunities enabled by advanced neural network architectures. The LSTM-based implementation presented in this research can perform touchlogging and keylogging on 12-key keypads with above-average accuracy even when confronted with raw, unprocessed data, demonstrating that deep neural networks make keystroke inference attacks based on motion sensors easier to achieve by removing the need for non-trivial preprocessing pipelines and carefully engineered feature extraction strategies. Our results suggest that the complete technological ecosystem of a user can be compromised when a wearable wristband device is worn. |
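The kind of model discussed above can be sketched as an LSTM that maps a window of raw 6-axis motion-sensor samples (accelerometer plus gyroscope) to one of 12 keypad keys. This PyTorch sketch is illustrative: the window length, layer sizes, and random input are assumptions, not the paper's implementation.

```python
# Toy LSTM keystroke-inference classifier over raw motion-sensor windows.
import torch
import torch.nn as nn

class KeystrokeLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_keys=12):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden, n_keys)

    def forward(self, x):                       # x: (batch, time, channels), raw samples
        out, _ = self.lstm(x)
        return self.classifier(out[:, -1])      # classify from the last time step

model = KeystrokeLSTM()
windows = torch.randn(8, 100, 6)                # 8 windows of 100 sensor samples each
logits = model(windows)                         # (8, 12) scores over keypad keys
print(logits.argmax(dim=1))                     # predicted key per window
```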
How Cutting the Cost of Using a Bank Affects Household’s Behavior of Remittance Transfers: Evidence From a Field Experiment in Rural Malawi | Using a randomized experiment in rural Malawi, this paper finds that providing information on mobile bank buses' services leads to a higher probability of adopting savings accounts in the treatment group. Households in the treated villages are 3.06 percentage points more likely to adopt savings accounts than households in the control group. Second, the information treatment leads to an increase in the probability of households receiving remittances in the treated villages, as well as an increase in the amount of remittances received. In particular, the effect is strongest for households that lived at least three kilometers away from the trade centers, which suggests that the main cost of transferring remittances is the cost of traveling to a bank. Third, the 2SLS regression provides suggestive evidence that adopting savings accounts leads to an increase in households' remittance activities. The 63.3-percentage-point increase in the probability of households receiving remittances after adopting savings accounts suggests that there previously existed high costs associated with the informal channels of transferring remittances. |
PACRR: A Position-Aware Neural IR Model for Relevance Matching | In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query. While previous works have successfully captured unigram term matches, how to fully employ position-dependent information such as proximity and term dependencies has been insufficiently explored. In this work, we propose a novel neural IR model named PACRR aiming at better modeling position-dependent interactions between a query and a document. Extensive experiments on six years’ TREC Web Track data confirm that the proposed model yields better results under multiple benchmarks. |
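The position-aware idea above can be sketched roughly as follows: build a query-term-by-document-position similarity matrix from word embeddings and apply small convolutions over it so that local proximity patterns (e.g., bigram matches) contribute to the relevance score. This is not PACRR's exact design; the kernel size, top-k pooling, and scoring head below are assumptions.

```python
# Rough sketch of convolving over a query-document similarity matrix.
import torch
import torch.nn as nn

class TinyPACRR(nn.Module):
    def __init__(self, max_q=4, k_top=2, n_filters=8):
        super().__init__()
        self.conv2 = nn.Conv2d(1, n_filters, kernel_size=2, padding=1)  # 2-gram matches
        self.k_top = k_top
        self.score = nn.Linear(max_q * (1 + n_filters) * k_top, 1)

    def forward(self, sim):                                 # sim: (batch, max_q, doc_len)
        uni = sim.topk(self.k_top, dim=-1).values.unsqueeze(1)           # unigram signals
        conv = torch.relu(self.conv2(sim.unsqueeze(1)))[:, :, :sim.size(1), :]
        big = conv.topk(self.k_top, dim=-1).values                       # n-gram signals
        feats = torch.cat([uni, big], dim=1).flatten(1)
        return self.score(feats)                                         # relevance score

sim = torch.rand(3, 4, 30)                  # toy query-document cosine-similarity matrices
print(TinyPACRR()(sim).shape)               # torch.Size([3, 1])
```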
Ultra-Thin Phase-Change Bridge Memory Device Using GeSb | An ultra-thin phase-change bridge (PCB) memory cell, implemented with doped GeSb, is shown with < 100 µA RESET current. The device concept provides for simplified scaling to a small cross-sectional area (60 nm²) through ultra-thin (3 nm) films; the doped GeSb phase-change material offers the potential for both fast crystallization and good data retention. |
Lichen planopilaris: Epidemiology and prevalence of subtypes - a retrospective analysis in 104 patients. | BACKGROUND
Management of patients with lichen planopilaris (LPP) and frontal fibrosing alopecia (FFA) is rendered difficult as robust epidemiologic data, insights on pathogenesis, associated diseases, possible relevance of concomitant medications or environmental factors are lacking.
PATIENTS AND METHODS
Retrospective analysis of demography, skin status, concomitant medication and diagnostic procedures were performed on 104 medical records (71 classic LPP, 32 FFA, and one Graham-Little-Piccardi-Lassueur syndrome).
RESULTS
Women were more often affected (F:M ratio 4.9:1 for classic LPP; 31:1 for FFA). Compared to LPP patients, patients with FFA were significantly older (p < 0.001), more often postmenopausal, and more frequently on hormone replacement therapy. No other specific associations were identified. An association with lichen planus, other autoimmune diseases, or hepatitis virus infection was found only in individual patients. Clinically, FFA patients were significantly more often reported to have reduced eyebrows (p < 0.005) and reduced axillary and/or pubic hair (p = 0.050).
CONCLUSIONS
The findings obtained from this study, with currently largest LPP/FFA patient cohort in Germany, encouraged us to set up a national FFA patient registry. Prospective data collected from larger numbers of patients with standardized questionnaires will help to assess assumed associations and influencing factors and to develop, in the long-term, recommendations for diagnosis and treatment. |
Morphology and Laser-Induced Photochemistry of Silicon and Nickel Nanoparticles | The structural and photoinduced properties of silicon nanoparticles obtained by plasmachemical and electrolytic techniques and the nickel particles deposited on aluminum oxide film in ultra-high vacuum are investigated by Auger electron spectroscopy, transmission electron microscopy, Fourier-transform infrared spectroscopy and time-of-flight spectroscopy. It is found that substantial increase of silicon nanoparticle photoinduced luminescence can be attributed to particle specific structure, as well as to the SiO2 thin film which is formed on the nanocrystalline silicon surface. In case of Ni particles deposited on aluminum oxide film at low mean coverage of about 0.04 monolayers, when the film can be viewed as consisting of separated single adsorbed atoms or very small clusters, the photon irradiation by nanosecond pulsed laser leads to NO desorption. At monolayer Ni coverage formed at a substrate temperature of 80 K laser irradiation causes dissociation of NO molecules. Efficiency of this process at the initial stage is notably enhanced compared to that of NO on the bulk Ni (111) crystal. This enhancement can be attributed to the effect of underlying aluminum oxide support. |
The in vivo adherence intervention for at risk adolescents with asthma: report of a randomized pilot trial. | OBJECTIVE
Low-income and minority adolescents are at high risk for poor asthma outcomes, due in part to poor adherence. We tested acceptability, feasibility, and effect sizes of an adherence intervention for low socioeconomic status (SES) minority youth with moderate- and severe-persistent asthma.
DESIGN AND METHODS
Single-site randomized pilot trial: intervention (n = 12; asthma education, motivational interviewing, problem-solving skills training, 1-month cell phone with tailored text messaging) versus control (n = 14; asthma education; cell phone without tailored messaging). Effect sizes were calculated for relative change from baseline (1 and 3 months).
RESULTS
Intervention was judged acceptable and feasible by participants. Participants (12-18 years, mean = 15.1, SD = 1.67) were 76.9% African-American, 80.7% public/no insurance. At 1 and 3 months, asthma symptoms (Cohen's d's = 0.40, 0.96) and HRQOL (PedsQL™; Cohen's d's = 0.23, 1.25) had clinically meaningful medium to large effect sizes.
CONCLUSIONS
This intervention appears promising for at-risk youth with moderate- and severe-persistent asthma. |
Using modern design methods to improve metallurgical machinery students' innovation ability and the research of its application | Training high-quality innovative talent and enhancing students' innovation ability in teaching and research is an urgent requirement for colleges and universities. This paper analyzes how common modern design methods, introduced into the professional teaching of metallurgical machinery, can be used to cultivate the innovation ability of metallurgical machinery students. Taking computer-aided design, computer-aided engineering, optimization design and reliability design as examples, and through the study of the backup rolls of an existing ZR mill, students' innovation ability was greatly enhanced, which is of great significance for comprehensively improving the quality of higher education and for strengthening talent development strategies. |
A randomized controlled trial of a parent training and emotion socialization program for families of hyperactive preschool-aged children. | The present study evaluated the effectiveness of a parent training and emotion socialization program designed specifically for hyperactive preschoolers. Participants were 31 preschool-aged children whose parents were randomly assigned to a parent training (PT) or waitlist (WL) control group. PT parents took part in a 14-week parenting program that involved teaching parenting strategies for managing hyperactive and disruptive behavior as well as emotion socialization strategies for improving children's emotion regulation. Compared to WL mothers, PT mothers reported significantly less child inattention, hyperactivity, oppositional defiance, and emotional lability; were observed using significantly more positive and less negative parenting; and reported significantly less maternal verbosity and unsupportive emotion socialization practices. Results provide some support for the effectiveness of this parenting program for reducing attention-deficit hyperactivity disorder (ADHD) symptoms and associated problems in preschool-aged children. |
Adaptive Affinity Fields for Semantic Segmentation | Affinity fields of different kernel sizes encode near- and long-range structural relations between pixels. One size does not fit all classes, and simply picking the size with the minimal affinity loss results in trivial solutions; instead, the right size is selected by pushing the affinity-field matching to the hard negative cases via adversarial learning of adaptive kernel sizes. (Tsung-Wei Ke*, Jyh-Jing Hwang*, Ziwei Liu, Stella X. Yu) |
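To make the affinity-field idea above concrete, here is a minimal sketch of a pairwise affinity loss with a grouping force for same-label neighbours and a margin-based separating force for different-label neighbours; it is not the authors' released code, the adaptive kernel-size selection via adversarial learning is omitted, and the margin value and wrap-around neighbour handling are illustrative assumptions.

```python
# Minimal pairwise affinity-field loss sketch (illustrative, not the official AAF code).
import torch
import torch.nn.functional as F

def affinity_field_loss(probs, labels, margin=3.0):
    """probs: softmax predictions (N, C, H, W); labels: ground-truth classes (N, H, W)."""
    losses = []
    # 8-connected neighbourhood expressed as (dy, dx) shifts; torch.roll wraps at
    # image borders, which a real implementation would mask out.
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        p = torch.roll(probs, shifts=(dy, dx), dims=(2, 3))   # neighbour predictions
        l = torch.roll(labels, shifts=(dy, dx), dims=(1, 2))  # neighbour labels
        # symmetric KL divergence between each pixel and its neighbour
        kl = (probs * (probs.clamp_min(1e-8).log() - p.clamp_min(1e-8).log())).sum(1) \
           + (p * (p.clamp_min(1e-8).log() - probs.clamp_min(1e-8).log())).sum(1)
        same = (labels == l).float()
        # grouping force: same-label pairs should agree;
        # separating force: different-label pairs should differ by at least `margin`
        losses.append(same * kl + (1.0 - same) * F.relu(margin - kl))
    return torch.stack(losses).mean()
```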
Rehabilitation of traumatic brain injury in active duty military personnel and veterans: Defense and Veterans Brain Injury Center randomized controlled trial of two rehabilitation approaches. | OBJECTIVES
To determine the relative efficacy of 2 different acute traumatic brain injury (TBI) rehabilitation approaches: cognitive didactic versus functional-experiential, and secondarily to determine relative efficacy for different patient subpopulations.
DESIGN
Randomized, controlled, intent-to-treat trial comparing 2 alternative TBI treatment approaches.
SETTING
Four Veterans Administration acute inpatient TBI rehabilitation programs.
PARTICIPANTS
Adult veterans or active duty military service members (N=360) with moderate to severe TBI.
INTERVENTIONS
1.5 to 2.5 hours of protocol-specific cognitive-didactic versus functional-experiential rehabilitation therapy integrated into interdisciplinary acute Commission for Accreditation of Rehabilitation Facilities-accredited inpatient TBI rehabilitation programs, with another 2 to 2.5 hours daily of occupational and physical therapy. Duration of protocol treatment varied from 20 to 60 days depending on the clinical needs and progress of each participant.
MAIN OUTCOME MEASURES
The 2 primary outcome measures were functional independence in living and return to work and/or school assessed by independent evaluators at 1-year follow-up. Secondary outcome measures consisted of the FIM, Disability Rating Scale score, and items from the Present State Exam, Apathy Evaluation Scale, and Neurobehavioral Rating Scale.
RESULTS
The cognitive-didactic and functional-experiential treatments did not result in overall group differences in the broad 1-year primary outcomes. However, analysis of secondary outcomes found differentially better immediate posttreatment cognitive function (mean+/-SD cognitive FIM) in participants randomized to cognitive-didactic treatment (27.3+/-6.2) than to functional treatment (25.6+/-6.0, t332=2.56, P=.01). Exploratory subgroup analyses found that younger participants in the cognitive arm had a higher rate of returning to work or school than younger patients in the functional arm, whereas participants older than 30 years and those with more years of education in the functional arm had higher rates of independent living status at 1 year posttreatment than similar patients in the cognitive arm.
CONCLUSIONS
Results from this large multicenter randomized controlled trial comparing cognitive-didactic and functional-experiential approaches to brain injury rehabilitation indicated improved but similar long-term global functional outcome. Participants in the cognitive treatment arm achieved better short-term functional cognitive performance than patients in the functional treatment arm. The current increase in war-related brain injuries provides added urgency for rigorous study of rehabilitation treatments. (http://ClinicalTrials.gov ID# NCT00540020.). |
New York University 2016 System for KBP Event Nugget: A Deep Learning Approach | This is the first time New York University (NYU) has participated in the event nugget (EN) evaluation of the Text Analysis Conference (TAC). We developed EN systems for both subtasks of event nugget, i.e., EN Task 1: Event Nugget Detection and EN Task 2: Event Nugget Detection and Coreference. The systems are mainly based on our recent research on deep learning for event detection (Nguyen and Grishman, 2015a; Nguyen and Grishman, 2016a). Due to the limited time we could devote to system development this year, we only ran the systems on the English evaluation data. However, we expect that the adaptation of the current systems to new languages can be done quickly. The development experiments show that although our current systems do not rely on complicated feature engineering, they significantly outperform the systems reported last year for the EN subtasks on the 2015 evaluation data. |
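As an illustration of the kind of model the cited event-detection work builds on (Nguyen and Grishman, 2015), the sketch below shows a convolutional trigger classifier over word and relative-position embeddings; the vocabulary size, dimensions, and number of event subtypes are illustrative assumptions, not the NYU system's actual configuration.

```python
# Minimal CNN trigger classifier sketch in the spirit of Nguyen and Grishman (2015);
# all hyperparameters here are illustrative placeholders.
import torch
import torch.nn as nn

class CNNTriggerClassifier(nn.Module):
    def __init__(self, vocab_size=20000, word_dim=300, pos_dim=50,
                 max_dist=15, n_filters=150, n_types=34):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # embedding of each token's position relative to the candidate trigger,
        # clipped to [-max_dist, max_dist] and shifted to non-negative ids
        self.pos_emb = nn.Embedding(2 * max_dist + 1, pos_dim)
        self.conv = nn.Conv1d(word_dim + pos_dim, n_filters, kernel_size=3, padding=1)
        self.out = nn.Linear(n_filters, n_types)  # event subtypes plus a "not an event" class

    def forward(self, words, rel_pos):
        # words, rel_pos: (batch, window) token ids and shifted relative positions
        x = torch.cat([self.word_emb(words), self.pos_emb(rel_pos)], dim=-1)
        x = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, filters, window)
        x = x.max(dim=2).values                       # max-pooling over the window
        return self.out(x)                            # logits over trigger classes
```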
Low-light image enhancement using variational optimization-based Retinex model | This paper presents a low-light image enhancement method using a variational-optimization-based Retinex algorithm. The proposed method first estimates the initial illumination and uses its gamma-corrected version to constrain the illumination component. Next, the variational minimization is performed iteratively to separate the reflectance and illumination components. Color assignment of the estimated reflectance component is then performed to restore the color using the input RGB channels. Experimental results show that the proposed method provides better enhancement results without saturation, noise amplification, or color distortion. |
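To make the variational step above concrete, a typical energy of this kind (the exact terms, norms and weights used in the paper may differ) decomposes the observed image S into illumination L and reflectance R by minimizing

E(L, R) = \| L \circ R - S \|_2^2 + \alpha \| \nabla L \|_2^2 + \beta \| \nabla R \|_1 + \gamma \| L - L_0 \|_2^2,

where L_0 is the gamma-corrected initial illumination estimate used as the constraint mentioned above, \circ denotes element-wise multiplication, and the minimization alternates between updates of L and R before the color assignment step restores the RGB output.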
Early antiretroviral therapy and mortality among HIV-infected infants. | BACKGROUND
In countries with a high seroprevalence of human immunodeficiency virus type 1 (HIV-1), HIV infection contributes significantly to infant mortality. We investigated antiretroviral-treatment strategies in the Children with HIV Early Antiretroviral Therapy (CHER) trial.
METHODS
HIV-infected infants 6 to 12 weeks of age with a CD4 lymphocyte percentage (the CD4 percentage) of 25% or more were randomly assigned to receive antiretroviral therapy (lopinavir-ritonavir, zidovudine, and lamivudine) when the CD4 percentage decreased to less than 20% (or 25% if the child was younger than 1 year) or clinical criteria were met (the deferred antiretroviral-therapy group) or to immediate initiation of limited antiretroviral therapy until 1 year of age or 2 years of age (the early antiretroviral-therapy groups). We report the early outcomes for infants who received deferred antiretroviral therapy as compared with early antiretroviral therapy.
RESULTS
At a median age of 7.4 weeks (interquartile range, 6.6 to 8.9) and a CD4 percentage of 35.2% (interquartile range, 29.1 to 41.2), 125 infants were randomly assigned to receive deferred therapy, and 252 infants were randomly assigned to receive early therapy. After a median follow-up of 40 weeks (interquartile range, 24 to 58), antiretroviral therapy was initiated in 66% of infants in the deferred-therapy group. Twenty infants in the deferred-therapy group (16%) died versus 10 infants in the early-therapy groups (4%) (hazard ratio for death, 0.24; 95% confidence interval [CI], 0.11 to 0.51; P<0.001). In 32 infants in the deferred-therapy group (26%) versus 16 infants in the early-therapy groups (6%), disease progressed to Centers for Disease Control and Prevention stage C or severe stage B (hazard ratio for disease progression, 0.25; 95% CI, 0.15 to 0.41; P<0.001). Stavudine was substituted for zidovudine in four infants in the early-therapy groups because of neutropenia in three infants and anemia in one infant; no drugs were permanently discontinued. After a review by the data and safety monitoring board, the deferred-therapy group was modified, and infants in this group were all reassessed for initiation of antiretroviral therapy.
CONCLUSIONS
Early HIV diagnosis and early antiretroviral therapy reduced early infant mortality by 76% and HIV progression by 75%. (ClinicalTrials.gov number, NCT00102960.) |
A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data | Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set. |
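For reference, the core autoregressive decomposition behind DocNADE (from the prior work cited above; notation follows the usual presentation) factorizes the probability of a document's word sequence v of length D as

p(v) = \prod_{i=1}^{D} p(v_i \mid v_{<i}), \qquad h_i(v_{<i}) = g\!\left(c + \sum_{k<i} W_{:, v_k}\right), \qquad p(v_i = w \mid v_{<i}) \propto \exp\!\left(b_w + U_{w,:}\, h_i(v_{<i})\right),

where W and U are the input word-embedding and output matrices, g is a nonlinearity, and c, b are biases; the supervised extension described in the abstract additionally models the class label given the document representation.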
The theoretical cognitive process of visualization for science education | The use of visual models such as pictures, diagrams and animations in science education is increasing. This is because of the complex nature of the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties associated with various concepts, especially those that exist at a microscopic level, such as DNA, the gene and meiosis, as well as those that exist over relatively large time scales, such as evolution. However, the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization, answering the question "how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching and studying of visual literacy in science education?" Based on various theories of cognitive processes during learning in science and general education, the author argues that the theoretical process of visualization consists of three stages, namely, Internalization of Visual Models, Conceptualization of Visual Models and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and the stages of visualization in science education are discussed. |
What you look at is what you get: eye movement-based interaction techniques | In seeking hitherto-unused methods by which users and computers can communicate, we investigate the usefulness of eye movements as a fast and convenient auxiliary user-to-computer communication mode. The barrier to exploiting this medium has not been eye-tracking technology but the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a natural and unobtrusive way. This paper discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, and reports our experiences and observations on them. |