Adaptive Fragmentation for Latency Control and Energy Management in Wireless Real-time Environments
Wireless environments are typically characterized by unpredictable and unreliable channel conditions. In such environments, fragmentation of network-bound data is a commonly adopted technique to improve the probability of successful data transmissions and to reduce the energy overhead incurred by re-transmissions. However, the overall latencies introduced by fragmentation and the subsequent re-assembly of fragments are often neglected, even though they significantly affect the real-time guarantees of participating applications. This work studies the latencies introduced by fragmentation performed at the link layer (the MAC layer in IEEE 802.11) of the source device and their effects on the end-to-end delay constraints of mobile applications (e.g., media streaming). Based on the observed effects, it proposes a feedback-based adaptive approach that chooses an optimal fragment size to (a) satisfy the end-to-end delay requirements of the distributed application and (b) minimize the energy consumption of the source device by increasing the probability of successful transmissions, thereby reducing re-transmissions and their associated costs.
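A minimal sketch of the feedback loop described above, in Python; the thresholds, the halving/additive-growth policy, and all identifiers are illustrative assumptions rather than the paper's actual controller:

```python
# Hypothetical sketch of a feedback-based fragment-size controller.
# All names, bounds, and thresholds are illustrative assumptions.

class FragmentSizeController:
    def __init__(self, min_frag=256, max_frag=2304, step=128):
        self.min_frag = min_frag      # bytes, assumed lower bound
        self.max_frag = max_frag      # bytes, 802.11 MSDU-sized upper bound
        self.step = step
        self.frag_size = max_frag     # start optimistic: fewer fragments

    def update(self, loss_rate, observed_delay, delay_budget):
        """Adapt fragment size from per-window feedback.

        loss_rate      -- fraction of fragments needing retransmission
        observed_delay -- measured end-to-end delay in the last window (s)
        delay_budget   -- the application's end-to-end delay constraint (s)
        """
        if loss_rate > 0.1:
            # Poor channel: smaller fragments raise per-fragment success
            # probability and cut retransmission energy.
            self.frag_size = max(self.min_frag, self.frag_size // 2)
        elif observed_delay > delay_budget:
            # Fragmentation/reassembly latency dominates: grow fragments
            # to reduce per-fragment header and reassembly overhead.
            self.frag_size = min(self.max_frag, self.frag_size + self.step)
        return self.frag_size
```

The two branches capture the trade-off the abstract describes: smaller fragments survive poor channels (saving retransmission energy), while larger fragments reduce per-fragment overhead when the delay budget is tight.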
Impact of IEEE 802.11n/ac PHY/MAC High Throughput Enhancements over Transport/Application Layer Protocols - A Survey
Since their inception in 1997, Wireless Local Area Networks (WLANs), popularly identified with IEEE 802.11, have grown tremendously. To provide last-mile wireless broadband connectivity to users, IEEE 802.11 was enriched with IEEE 802.11a, IEEE 802.11b, and IEEE 802.11g. More recently, IEEE 802.11n, IEEE 802.11ac, and IEEE 802.11ad have introduced enhancements to the physical (PHY) layer and medium access control (MAC) sublayer that provide much higher data rates; these amendments are therefore called High Throughput WLANs (HT-WLANs). In IEEE 802.11n, the data rate increases up to 600 Mbps, whereas IEEE 802.11ac/ad is expected to support a maximum throughput of 1 to 7 Gbps over wireless media. For both standards, the PHY is enhanced with multiple-input multiple-output (MIMO) antenna technologies, channel bonding, short guard intervals (SGI), and enhanced modulation and coding schemes (MCS). At the same time, MAC layer overhead is reduced by introducing frame aggregation and block acknowledgement. However, existing studies reveal that although the PHY and MAC enhancements promise to improve the physical data rate significantly, they can negatively affect upper layer protocols, mainly reliable end-to-end transport/application layer protocols. As a consequence, a large number of research groups have focused on improving the coordination between the PHY/MAC and upper layer protocols in HT-WLANs to realize the promised performance benefit. In this survey, we discuss the impact of the PHY/MAC enhancements in HT-WLANs on transport/application layer protocols. Several measures reported in the literature boost the data rate of WLANs and use the aforesaid enhancements effectively for the performance benefit of end-to-end protocols. We also point out limitations of existing research and list open challenges that can be explored in the development of next-generation HT-WLAN technologies. Keywords: IEEE 802.11n, IEEE 802.11ac, MIMO, MU-MIMO, channel bonding, short guard interval (SGI), frame aggregation, block acknowledgement, TCP/UDP throughput.
SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance
Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date; however, arriving at useful clusterings often requires several rounds of user interaction to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement, together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and reflect on previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing the analyst to compare earlier cluster refinements and explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair-analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting an enhanced understanding of both the clustering results and the interactive process itself.
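As a reference point for the SOM building block the system rests on, here is a generic numpy sketch of the classic online SOM update; it is not SOMFlow's implementation, and the grid size, decay schedules, and Gaussian neighborhood are standard textbook choices:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Classic online Self-Organizing Map update (generic, not SOMFlow's code)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    # Grid coordinates of every unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n_steps)               # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 1e-3  # shrinking neighborhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nb = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian neighborhood
            weights += lr * nb[:, None] * (x - weights)  # pull units toward x
            t += 1
    return weights.reshape(h, w, -1)

# Usage: train_som(np.random.rand(200, 5)) returns an 8x8 map of prototypes.
```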
Efficient Exploration in Reinforcement Learning with Hidden State
Undoubtedly, efficient exploration is crucial for the success of a learning agent. Previous approaches to exploration in reinforcement learning exclusively address exploration in Markovian domains, i.e., domains in which the state of the environment is fully observable. If the environment is only partially observable, they cease to work because exploration statistics are confounded between aliased world states. This paper presents Fringe Exploration, a technique for efficient exploration in partially observable domains. The key idea, applicable to many exploration techniques, is to keep statistics in the space of possible short-term memories instead of in the agent's current state space. Experimental results in a partially observable maze and in a difficult driving task with visual routines show dramatic performance improvements.
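The key idea lends itself to a compact sketch: a count-based exploration bonus keyed on a short window of recent observation-action pairs rather than on the (possibly aliased) current observation. The window length and the 1/sqrt(n) bonus below are illustrative assumptions, not the paper's exact statistics:

```python
from collections import defaultdict, deque

class MemoryBasedExploration:
    """Count-based exploration bonus keyed on a short-term memory window.

    Keeping visit counts per (observation, action) *history* instead of per
    current observation avoids confounding statistics between aliased states.
    The window length and 1/sqrt(n) bonus are illustrative choices; observations
    must be hashable.
    """
    def __init__(self, window=3):
        self.memory = deque(maxlen=window)
        self.counts = defaultdict(int)

    def step(self, observation, action):
        self.memory.append((observation, action))
        key = tuple(self.memory)        # the short-term memory, not the raw state
        self.counts[key] += 1
        return 1.0 / (self.counts[key] ** 0.5)  # bonus decays with familiarity
```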
Anaesthesia: The influence of two different dental local anaesthetic solutions on the haemodynamic responses of children undergoing restorative dentistry: a randomised, single-blind, split-mouth study
Objectives This investigation was designed to study the haemodynamic effects of two different local anaesthetic solutions during restorative dental treatment in children. Design A randomised, single-blind, split-mouth cross-over design was employed, using children undergoing bilaterally similar restorative treatments over two visits. Setting The study was performed in a dental hospital paediatric dentistry department. Methods Ten children participated. At one visit the local anaesthetic was 2% lidocaine (lignocaine) with 1:80,000 epinephrine (adrenaline); at the other, the anaesthetic was 3% prilocaine with 0.03 IU/ml felypressin. Local anaesthetic was administered at a dose of 0.5 ml/10 kg body weight. Blood pressure and heart rate were measured before and during treatment with an automatic blood pressure recorder. Data were analysed by ANOVA and Student's paired t test. Results Significant differences between treatments in diastolic blood pressure (F = 2.37; P = 0.05) and heart rate (F = 2.98; P < 0.02) were noted. The heart rate increased ten minutes after the injection of the epinephrine-containing solution. The diastolic blood pressure fell 20 minutes after injection of lidocaine with epinephrine. Conclusion The choice of local anaesthetic solution influences the haemodynamic response during restorative treatment in children.
Low correlation MIMO antenna for LTE 700MHz band
A key feature of upcoming 4G wireless communication standards is multiple-input-multiple-output (MIMO) technology. To make the best use of MIMO, the antenna correlation between adjacent antennas must be low (< 0.5). In this context, we propose a new correlation reduction technique suitable for closely spaced antennas (distance d < λ/40). This technique reduces mutual coupling between antennas and concurrently decorrelates the antennas' radiation characteristics by inducing negative group delay in the target frequency band. The validity of the technique is demonstrated with a USB dongle MIMO antenna designed for the LTE 700 MHz band. Measurement results show that the antenna correlation is reduced by more than 37% using the proposed technique.
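For reference, the correlation criterion quoted above is commonly evaluated through the envelope correlation coefficient computed from S-parameters (assuming lossless antennas); this is the standard expression for a two-port antenna system, not anything specific to the proposed design:

```latex
\rho_e \;=\;
\frac{\left| S_{11}^{*} S_{12} + S_{21}^{*} S_{22} \right|^{2}}
{\left(1 - |S_{11}|^{2} - |S_{21}|^{2}\right)\left(1 - |S_{22}|^{2} - |S_{12}|^{2}\right)}
```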
Retarded electric and magnetic fields of a moving charge: Feynman's derivation of Liénard-Wiechert potentials revisited
Retarded electromagnetic potentials are derived from Maxwell's equations and the Lorenz condition. The difference found between these potentials and the conventional Liénard-Wiechert ones is explained by the latter's neglect of the motion-dependence of the effective charge density. The corresponding retarded fields of a point-like charge in both uniform and accelerated motion are compared with those given by the formulae of Heaviside, Feynman, Jefimenko and other authors. Contrary to claims in the pedagogical literature, the fields of an accelerated charge given by the Feynman or Jefimenko formulae, or derived from the Liénard-Wiechert potentials, are all different from each other as well as from those presented here. A mathematical error concerning partial space and time derivatives in the derivation of the Jefimenko equations is pointed out.
On the construction of global models describing isolated rotating charged bodies; uniqueness of the exterior gravitational field
A relatively recent study by Mars and Senovilla provided us with a uniqueness result for the exterior vacuum gravitational field of global models describing finite isolated rotating bodies in equilibrium in General Relativity (GR). The generalisation to exterior electrovacuum gravitational fields, to include charged rotating objects, is presented here.
Recommending collaboration with social networks: a comparative evaluation
Studies of information seeking and workplace collaboration often find that social relationships are a strong factor in determining who collaborates with whom. Social networks provide one means of visualizing existing and potential interaction in organizational settings. Groupware designers are using social networks to make systems more sensitive to social situations and guide users toward effective collaborations. Yet, the implications of embedding social networks in systems have not been systematically studied. This paper details an evaluation of two different social networks used in a system to recommend individuals for possible collaboration. The system matches people looking for expertise with individuals likely to have expertise. The effectiveness of social networks for matching individuals is evaluated and compared. One finding is that social networks embedded into systems do not match individuals' perceptions of their personal social network. This finding and others raise issues for the use of social networks in groupware. Based on the evaluation results, several design considerations are discussed.
Acting with technology: Activity theory and interaction design
Activity theory holds that the human mind is the product of our interaction with people and artifacts in the context of everyday activity. Acting with Technology makes the case for activity theory as a basis for...
Successful treatment of bacillary angiomatosis with oral doxycycline in an HIV-infected child with skin lesions mimicking Kaposi sarcoma
Abbreviations: ART, antiretroviral therapy; BA, bacillary angiomatosis; HHV-8, human herpes virus 8; KS, Kaposi sarcoma. CASE REPORT: A 12-year-old boy presented for treatment of widespread papules and fungating, ulcerating nodules (Fig 1). Seven months before referral, HIV was diagnosed, and he started antiretroviral therapy (ART), with a baseline CD4 count of 76 cells per microliter (5%). Per history, the rash began 1 month after ART initiation and progressively worsened in the 6 months after ART started. The patient was referred to the Baylor Tanzania Center of Excellence in Mbeya, Tanzania because of concern for possible Kaposi sarcoma (KS). A skin biopsy was performed. The patient underwent chest radiography, abdominal ultrasound scan, and liver/renal function tests, the results of which were normal. A complete blood count was notable for severe anemia (hemoglobin level of 4.0 g/dL) requiring blood transfusion. The clinical diagnosis of bacillary angiomatosis (BA) was favored, and the patient was prescribed doxycycline, 100 mg twice daily. Although KS was also in the clinical differential diagnosis, chemotherapy was withheld pending biopsy results. Histologically, the lesion biopsy result was consistent with BA, and bacilli were observed with Warthin-Starry staining. At the 1-week follow-up appointment, a dramatic improvement in the appearance of the lesions was noted (Fig 2). The patient continued doxycycline with continued improvement over 7 additional weeks, with near complete resolution of the sores and lesions (Fig 3).
Evaluation of sentence embeddings in downstream and linguistic probing tasks
Despite the fast pace of development of new sentence embedding methods, it is still challenging to find comprehensive evaluations of these different techniques. In recent years, we have seen significant improvements in the field of sentence embeddings, especially towards the development of universal sentence encoders that can provide inductive transfer to a wide variety of downstream tasks. In this work, we perform a comprehensive evaluation of recent methods using a wide variety of downstream and linguistic feature probing tasks. We show that a simple approach using bag-of-words with a recently introduced language model for deep context-dependent word embeddings yields better results in many tasks when compared to sentence encoders trained on entailment datasets. We also show, however, that we are still far from a universal encoder that can perform consistently across several downstream tasks.
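The winning baseline is easy to state in code: mean-pool context-dependent token vectors into a single sentence vector. In this sketch, contextual_embed is a hypothetical stand-in for the deep language model (random vectors purely for illustration):

```python
import numpy as np

def contextual_embed(tokens):
    """Placeholder for a deep contextual embedder (e.g. an ELMo-style model).
    Returns one vector per token; random here purely for illustration."""
    rng = np.random.default_rng(abs(hash(tuple(tokens))) % (2**32))
    return rng.normal(size=(len(tokens), 1024))

def sentence_embedding(sentence):
    tokens = sentence.lower().split()
    vectors = contextual_embed(tokens)   # (n_tokens, dim), context-dependent
    return vectors.mean(axis=0)          # bag-of-words: order-insensitive mean

emb = sentence_embedding("a simple but strong baseline")
print(emb.shape)   # (1024,)
```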
An Agent-based Indoor Wayfinding Based on Digital Sign System
Developing effective urban planning approaches is a major challenge for outdoor and indoor environments. Wayfinding in unfamiliar indoor environments is a primary task that everyone encounters. The development of assistive technologies to aid wayfinding is hampered by the lack of reliable and cost-efficient methods for providing location information in the environment. We have applied RFID technology as a low-cost and practical approach. Our contribution is the suitable design and placement of a digital sign system that can be readily detected by a handheld device. We have simulated the task with agent-based modeling. The hypothesis of the research is that the wayfinder carries a handheld device, such as a PDA or mobile system, which receives and responds to the signals from digital signs through passive tags. The simulation showed that an appropriate design of digital signs in an unfamiliar environment results in a more efficient wayfinding process.
Effective explanations of recommendations: user-centered design
This paper characterizes general properties of useful, or Effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.
Evaluation of information technology investment: a data envelopment analysis approach
The increasing use of information technology (IT) has resulted in a need for evaluating the productivity impacts of IT. Contemporary IT evaluation approaches have focused on return on investment and return on management. IT investment has impacts on different stages of business operations. For example, in the banking industry, IT plays a key role in effectively generating (i) funds from the customer in the form of deposits and then (ii) profits by using deposits as investment funds. Existing approaches based upon data envelopment analysis (DEA) only measure the IT efficiency or impact on one specific stage when a multi-stage business process is present. A more detailed model is needed to characterize the impact of IT on each stage of the business operation. The current paper develops a DEA non-linear programming model to evaluate the impact of IT on multiple stages, along with information on how to distribute the IT-related resources so that efficiency is maximized. It is shown that this non-linear program can be treated as a parametric linear program. It is also shown that if there is only one intermediate measure, the non-linear DEA model becomes a linear program. Our approach is illustrated with an example taken from previous studies.
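For context, the classical single-stage CCR ratio model that DEA work builds on is shown below for a decision-making unit 0 with inputs x_{i0} and outputs y_{r0}; the paper's multi-stage non-linear extension is not reproduced here:

```latex
\max_{u,v}\; \theta_0 \;=\; \frac{\sum_r u_r\, y_{r0}}{\sum_i v_i\, x_{i0}}
\quad \text{s.t.} \quad
\frac{\sum_r u_r\, y_{rj}}{\sum_i v_i\, x_{ij}} \;\le\; 1 \;\;\forall j,
\qquad u_r,\, v_i \;\ge\; 0 .
```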
A Weakly Supervised Learning Framework for Detecting Social Anxiety and Depression
Although social anxiety and depression are common, they are often underdiagnosed and undertreated, in part due to difficulties identifying and accessing individuals in need of services. Current assessments rely on client self-report and clinician judgment, which are vulnerable to social desirability and other subjective biases. Identifying objective, nonburdensome markers of these mental health problems, such as features of speech, could help advance assessment, prevention, and treatment approaches. Prior research examining speech detection methods has focused on fully supervised learning approaches employing strongly labeled data. However, strong labeling of individuals high in symptoms or state affect in speech audio data is impractical, in part because it is not possible to identify with high confidence which regions of a long speech indicate the person’s symptoms or affective state. We propose a weakly supervised learning framework for detecting social anxiety and depression from long audio clips. Specifically, we present a novel feature modeling technique named NN2Vec that identifies and exploits the inherent relationship between speakers’ vocal states and symptoms/affective states. Detecting speakers high in social anxiety or depression symptoms using NN2Vec features achieves F-1 scores 17% and 13% higher than those of the best available baselines. In addition, we present a new multiple instance learning adaptation of a BLSTM classifier, named BLSTM-MIL. Our novel framework of using NN2Vec features with the BLSTM-MIL classifier achieves F-1 scores of 90.1% and 85.44% in detecting speakers high in social anxiety and depression symptoms.
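The multiple-instance framing can be sketched independently of the specific classifier: a long clip is a bag of speech segments (instances), and the bag is scored by pooling instance scores. Max pooling below is the classic MIL assumption; the paper's BLSTM-MIL is a learned variant, not reproduced here:

```python
import numpy as np

def mil_bag_score(instance_scores, pool="max"):
    """Multiple-instance pooling: a long clip (bag) is positive if some
    region (instance) is. Max pooling is the classic MIL assumption;
    mean pooling is a softer alternative."""
    s = np.asarray(instance_scores)
    return s.max() if pool == "max" else s.mean()

# A clip split into overlapping speech segments, scored per segment:
segment_scores = [0.1, 0.2, 0.9, 0.3]   # hypothetical instance scores
print(mil_bag_score(segment_scores))    # 0.9 -> bag labeled positive
```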
Natural Language Processing in PROLOG
Natural language processing in Prolog started with Prolog itself, when Colmerauer in the early 1970s developed a Prolog-based natural language system called metamorphosis grammar (Colmerauer, 1978). The ideas developed there have since been applied in various logic grammar formalisms and systems, e.g. by Dahl (1977), who made a natural language system for Spanish. The most influential formalism, Definite Clause Grammar, was developed by Warren and Pereira and applied to build CHAT-80 (Pereira, 1980), a knowledge-based system for geography with a natural language interface. CHAT-80 can answer the following question in extenso: "Which country bordering the Mediterranean borders a country that is bordered by a country whose population exceeds the population of India?" The solution, Turkey, was found within 405 milliseconds (1980).
DeftNN: addressing bottlenecks for DNN execution on GPUs via synapse vector elimination and near-compute data fission
Deep neural networks (DNNs) are key computational building blocks for emerging classes of web services that interact in real time with users via voice, images and video inputs. Although GPUs have gained popularity as a key accelerator platform for deep learning workloads, the increasing demand for DNN computation leaves a significant gap between the compute capabilities of GPU-enabled datacenters and the compute needed to service demand. The state-of-the-art techniques to improve DNN performance have significant limitations in bridging the gap on real systems. Current network pruning techniques remove computation, but the resulting networks map poorly to GPU architectures, yielding no performance benefit or even slowdowns. Meanwhile, current bandwidth optimization techniques focus on reducing off-chip bandwidth while overlooking on-chip bandwidth, a key DNN bottleneck. To address these limitations, this work introduces DeftNN, a GPU DNN execution framework that targets the key architectural bottlenecks of DNNs on GPUs to automatically and transparently improve execution performance. DeftNN is composed of two novel optimization techniques - (1) synapse vector elimination, a technique that identifies non-contributing synapses in the DNN and carefully transforms data and removes the computation and data movement of these synapses while fully utilizing the GPU to improve performance, and (2) near-compute data fission, a mechanism for scaling down the on-chip data movement requirements within DNN computations. Our evaluation of DeftNN spans 6 state-of-the-art DNNs. By applying both optimizations in concert, DeftNN is able to achieve an average speedup of 2.1X on real GPU hardware. We also introduce a small additional hardware unit per GPU core to facilitate efficient data fission operations, increasing the speedup achieved by DeftNN to 2.6X.
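A hedged numpy sketch of the first idea: eliminate whole non-contributing synapse vectors (input dimensions) and repack the survivors densely, so the reduced computation still maps to a dense, GPU-friendly GEMM. Column-norm thresholding here is an illustrative stand-in for DeftNN's actual selection criterion:

```python
import numpy as np

def synapse_vector_elimination(W, X, threshold=1e-2):
    """Remove weight columns with negligible contribution and repack densely.

    W : (out, in) layer weights; X : (in, batch) activations.
    Unlike unstructured pruning, whole input dimensions are eliminated, so
    the reduced matrices stay dense and map well to GPU GEMM kernels.
    Column-norm thresholding is an illustrative criterion only.
    """
    keep = np.linalg.norm(W, axis=0) > threshold   # contributing input dims
    W_packed = W[:, keep]                          # dense, smaller GEMM operand
    X_packed = X[keep, :]                          # matching activation rows
    return W_packed @ X_packed                     # same-shaped output, less work

W = np.random.randn(64, 256) * (np.random.rand(256) > 0.3)  # some dead columns
X = np.random.randn(256, 32)
print(synapse_vector_elimination(W, X).shape)   # (64, 32)
```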
Similarities and differences of e-commerce and e-government: Insights from a pilot study
So far, empirically grounded studies comparing the phenomena of e-commerce and e-government have been in short supply. However, such studies, it has been argued, would most likely deepen the understanding of sector-specific similarities and differences, leading to potential cross-fertilization between the two sectors as well as to the establishment of performance measures and success criteria. This paper reports on the findings of an empirical research pilot, which is the first in a series of planned exploratory and theory-testing studies on the subject.
Errors due to measuring voltage on current-carrying electrodes in electric current computed tomography
It is shown that when a multiplicity of electrodes are attached to a body's surface, the voltage data are most sensitive to changes in resistivity in the body's interior when voltages are measured from all electrodes, including those carrying current. This assertion, which is true despite the presence of significant levels of skin impedance at the electrodes, is supported both theoretically and by experiment. Data were first taken for current and voltage at all electrodes. Then current was applied only at a pair of electrodes, with voltages measured on all other electrodes. Targets could be detected with better signal-to-noise ratio by using the reconstructed data than by using the directly measured voltages on non-current-carrying electrodes. Images made from voltage data using only non-current-carrying electrodes had higher noise levels. It is concluded that in multiple electrode systems for electric-current-computed tomography, current should be applied and voltage should be measured from all available electrodes.
ADAMTS-18: a metalloproteinase with multiple functions.
ADAMTS-18 is a member of the a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS) family of proteases, which are known to play important roles in development, angiogenesis and coagulation; dysregulation and mutation of these enzymes have been implicated in many disease processes, such as inflammation, cancer, arthritis and atherosclerosis. Mutations of ADAMTS-18 have been linked to abnormal early eye development and reduced bone mineral density. In this review, we briefly summarize the structural organization and expression of ADAMTS-18. We also focus on the emerging role of ADAMTS-18 in several pathophysiological conditions, including hematological diseases, tumorigenesis, osteogenesis, eye-related diseases, and central nervous system disorders, and conclude with a research perspective on ADAMTS-18 and its potential as a promising diagnostic and therapeutic target.
Intracerebroventricular enzyme infusion corrects central nervous system pathology and dysfunction in a mouse model of metachromatic leukodystrophy.
Arylsulfatase A (ASA) catalyzes the desulfation of sulfatide, a major lipid component of myelin. Inherited functional deficiencies of ASA cause the lysosomal storage disease (LSD) metachromatic leukodystrophy (MLD), which is characterized by intralysosomal accumulation of sulfatide, progressive neurological symptoms and early death. Enzyme replacement therapy (ERT) using intravenous injection of active enzyme is a treatment option for many LSDs, as exogenous lysosomal enzymes are delivered to the lysosomes of patients' cells via receptor-mediated endocytosis. Efficient treatment of MLD and other LSDs with central nervous system (CNS) involvement is, however, hampered by the blood-brain barrier (BBB), which limits transfer of therapeutic enzymes from the circulation to the brain parenchyma. To bypass the BBB, we infused recombinant human ASA (rhASA) via implanted miniature pumps into the cerebrospinal fluid (CSF) of a conventional and a novel, genetically aggravated ASA knockout mouse model of MLD. rhASA continuously delivered to the lateral ventricle for 4 weeks penetrated the brain parenchyma and was targeted to the lysosomes of brain cells. Histological analysis revealed complete reversal of lysosomal storage in the infused hemisphere. rhASA concentrations and sulfatide clearance declined with increasing distance from the infusion site. Correction of the ataxic gait indicated reversal of central nervous system dysfunction. The profound histopathological and functional improvements, the requirement of low enzyme doses and the absence of immunological side effects suggest intracerebroventricular ERT to be a promising treatment option for MLD and other LSDs with prevailing CNS disease.
BIRNet: Brain Image Registration Using Dual-Supervised Fully Convolutional Networks
In this paper, we propose a deep learning approach for image registration by predicting deformation from image appearance. Since obtaining ground-truth deformation fields for training can be challenging, we design a fully convolutional network that is subject to dual-guidance: (1) Coarse guidance using deformation fields obtained by an existing registration method; and (2) Fine guidance using image similarity. The latter guidance helps avoid overly relying on the supervision from the training deformation fields, which could be inaccurate. For effective training, we further improve the deep convolutional network with gap filling, hierarchical loss, and multi-source strategies. Experiments on a variety of datasets show promising registration accuracy and efficiency compared with state-of-the-art methods.
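One plausible form of the dual-guidance objective, written for a fixed image F, moving image M, and predicted deformation field φ; the weighting λ and the choice of norms are assumptions, not the paper's exact loss:

```latex
\mathcal{L}(\phi) \;=\;
\underbrace{\left\lVert \phi - \phi_{\text{coarse}} \right\rVert_2^2}_{\text{coarse guidance from an existing method}}
\;+\; \lambda \,
\underbrace{\operatorname{Dissim}\!\left(F,\; M \circ \phi\right)}_{\text{fine, image-similarity guidance}}
```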
Development and evaluation of a questionnaire for assessment of health-related quality of life in cats with cardiac disease.
OBJECTIVE To develop, validate, and evaluate a questionnaire (Cats' Assessment Tool for Cardiac Health [CATCH] questionnaire) for assessing health-related quality of life in cats with cardiac disease. DESIGN Prospective study. ANIMALS 275 cats with cardiac disease. PROCEDURES The questionnaire was developed on the basis of clinical signs of cardiac disease in cats. A CATCH score was calculated by summing responses to questionnaire items; possible scores ranged from 0 to 80. For questionnaire validation, owners of 75 cats were asked to complete the questionnaire (10 owners completed the questionnaire twice). Disease severity was assessed with the International Small Animal Cardiac Health Council (ISACHC) classification for cardiac disease. Following validation, the final questionnaire was administered to owners of the remaining 200 cats. RESULTS Internal consistency of the questionnaire was good, and the CATCH score was significantly correlated with ISACHC classification. For owners that completed the questionnaire twice, scores were significantly correlated. During the second phase of the study, the CATCH score ranged from 0 to 74 (median, 7) and was significantly correlated with ISACHC classification. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that the CATCH questionnaire is a valid and reliable method for assessing health-related quality of life in cats with cardiac disease. Further research is warranted to test the tool's sensitivity to changes in medical treatment and its potential role as a clinical and research tool.
Justifying Shadow IT Usage
Employees and functional managers increasingly adopt and use IT systems and services that the IS management of the organization neither provides nor approves. To effectively counteract such shadow IT in organizations, an understanding of employees' motivations and drivers is necessary. However, the scant literature on this topic has primarily focused on various governance approaches at the firm level. With the objective of opening the black box of shadow IT usage at the individual unit of analysis, we develop a research model and propose a laboratory experiment to examine users' justifications for violating implicit and explicit IT usage restrictions, based on neutralization theory. To be precise, in this research-in-progress, we posit positive associations between shadow IT usage and human tendencies to downplay such rule-breaking behaviors due to necessity, no injury, and injustice. We expect a lower impact of these neutralization effects in the presence of behavioral IT guidelines that explicitly prohibit users from employing exactly those shadow IT systems.
Millimeter-wave antennas for radio access and backhaul in 5G heterogeneous mobile networks
Millimeter-wave communications are expected to play a key role in future 5G mobile networks to overcome the dramatic traffic growth expected over the next decade. Such systems will severely challenge antenna technologies used at mobile terminal, access point or backhaul/fronthaul levels. This paper provides an overview of the authors' recent achievements in the design of integrated antennas, antenna arrays and high-directivity quasi-optical antennas for high data-rate 60-GHz communications.
Boosting Algorithms for Parallel and Distributed Learning
The growing amount of available information and its distributed and heterogeneous nature has a major impact on the field of data mining. In this paper, we propose a framework for parallel and distributed boosting algorithms intended for efficiently integrating specialized classifiers learned over very large, distributed and possibly heterogeneous databases that cannot fit into main computer memory. Boosting is a popular technique for constructing highly accurate classifier ensembles, where the classifiers are trained serially, with the weights on the training instances adaptively set according to the performance of previous classifiers. Our parallel boosting algorithm is designed for tightly coupled shared-memory systems with a small number of processors, with the objective of achieving maximal prediction accuracy in fewer iterations than boosting on a single processor. After all processors learn classifiers in parallel at each boosting round, they are combined according to the confidence of their predictions. Our distributed boosting algorithm is proposed primarily for learning from several disjoint data sites when the data cannot be merged together, although it can also be used for parallel learning where a massive data set is partitioned into several disjoint subsets for more efficient analysis. At each boosting round, the proposed method combines classifiers from all sites and creates a classifier ensemble on each site. The final classifier is constructed as an ensemble of all classifier ensembles built on disjoint data sets. The new methods, applied to several data sets, have shown that parallel boosting can achieve the same or even better prediction accuracy considerably faster than standard sequential boosting. Results from the experiments also indicate that distributed boosting has comparable or slightly improved classification accuracy over standard boosting, while requiring much less memory and computational time since it uses smaller data sets.
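A schematic of the distributed flavor, compressed to its simplest form: each site fits a weak learner on its local (unmergeable) data, learners are weighted by a confidence proxy, and only models are shared. The accuracy-based weighting and the single round shown here are simplifying assumptions, not the paper's full ensemble-of-ensembles scheme:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distributed_boost_round(site_data, X_val, y_val):
    """One round of confidence-weighted combination across disjoint sites.

    site_data : list of (X_i, y_i) local datasets that cannot be merged.
    A weak learner is fit per site; its weight is its validation accuracy
    (a simple confidence proxy). Assumes every site observes every class,
    so the predict_proba columns align across sites.
    """
    models, weights = [], []
    for X_i, y_i in site_data:                   # trainable in parallel
        clf = DecisionTreeClassifier(max_depth=2).fit(X_i, y_i)
        models.append(clf)
        weights.append(clf.score(X_val, y_val))  # confidence proxy
    weights = np.asarray(weights) / sum(weights)

    def predict(X):
        votes = np.stack([m.predict_proba(X) for m in models])  # (sites, n, classes)
        return np.argmax(np.tensordot(weights, votes, axes=1), axis=1)
    return predict
```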
Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices.
Attention can be focused volitionally by "top-down" signals derived from task demands and automatically by "bottom-up" signals from salient stimuli. The frontal and parietal cortices are involved, but their neural activity has not been directly compared. Therefore, we recorded from them simultaneously in monkeys. Prefrontal neurons reflected the target location first during top-down attention, whereas parietal neurons signaled it earlier during bottom-up attention. Synchrony between frontal and parietal areas was stronger in lower frequencies during top-down attention and in higher frequencies during bottom-up attention. This result indicates that top-down and bottom-up signals arise from the frontal and sensory cortex, respectively, and different modes of attention may emphasize synchrony at different frequencies.
Are species shade and drought tolerance reflected in leaf-level structural and functional differentiation in Northern Hemisphere temperate woody flora?
Leaf-level determinants of species environmental stress tolerance are still poorly understood. Here, we explored dependencies of species shade (T_shade) and drought (T_drought) tolerance scores on key leaf structural and functional traits in 339 Northern Hemisphere temperate woody species. In general, T_shade was positively associated with leaf life-span (L_L), and negatively with leaf dry mass (M_A), nitrogen content (N_A), and photosynthetic capacity (A_A) per unit area, while the opposite relationships were observed for drought tolerance. Different trait combinations responsible for T_shade and T_drought were observed among the key plant functional types: deciduous broadleaves, evergreen broadleaves, and evergreen conifers. According to principal component analysis, resource-conserving species with low N content and photosynthetic capacity, and high L_L and M_A, had higher T_drought, consistent with the general stress tolerance strategy, whereas variation in T_shade did not concur with the postulated stress tolerance strategy. As drought and shade often interact in natural communities, the opposing effects of foliar traits on these key environmental stress tolerances demonstrate that species niche differentiation is inherently constrained in temperate woody species. Different combinations of traits among key plant functional types further explain the contrasting bivariate correlations often observed in studies seeking a functional explanation of variation in species environmental tolerances.
Omega-3 fatty acids plus rosuvastatin improves endothelial function in South Asians with dyslipidemia
BACKGROUND The present study was undertaken to investigate the effect of statins plus omega-3 polyunsaturated fatty acids (PUFAs) on endothelial function and lipid profile in South Asians with dyslipidemia and endothelial dysfunction, a population at high risk for premature coronary artery disease. METHODS Thirty subjects were randomized to rosuvastatin 10 mg and omega-3-PUFAs 4 g or rosuvastatin 10 mg. After 4 weeks, omega-3-PUFAs were removed from the first group and added to subjects in the second group. All subjects underwent baseline, 4-, and 8-week assessment of endothelial function and lipid profile. RESULTS Compared to baseline, omega-3-PUFAs plus rosuvastatin improved endothelial-dependent vasodilation (EDV: -1.42% to 11.36%, p = 0.001), and endothelial-independent vasodilation (EIV: 3.4% to 17.37%, p = 0.002). These effects were lost when omega-3-PUFAs were removed (EDV: 11.36% to 0.59%, p = 0.003). In the second group, rosuvastatin alone failed to improve both EDV and EIV compared to baseline. However, adding omega-3-PUFAs to rosuvastatin, significantly improved EDV (-0.66% to 14.73%, p = 0.001) and EIV (11.02% to 24.5%, p = 0.001). Addition of omega-3-PUFAs further improved the lipid profile (triglycerides 139 to 91 mg/dl, p = 0.006, low-density lipoprotein cholesterol 116 to 88 mg/dl, p = 0.014). CONCLUSIONS Combined therapy with omega-3-PUFAs and rosuvastatin improves endothelial function in South Asian subjects with dyslipidemia and endothelial dysfunction.
Towards Predicting the Best Answers in Community-based Question-Answering Services
Community-based question-answering (CQA) services contribute to solving many difficult questions we have. For each question in such services, one best answer can be designated among all answers, often by the asker. However, many questions on typical CQA sites are left without a best answer even when good candidates are available. In this paper, we address the problem of predicting whether an answer will be selected as the best answer, based on learning from labeled data. The key tasks include designing features that measure important aspects of an answer and identifying the most important features. Experiments with a Stack Overflow dataset show that the contextual information among the answers is the most important factor to consider.
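The prediction task reduces to supervised classification over hand-designed answer features; the toy features below (length, code presence, answerer reputation, arrival rank, similarity to sibling answers) are illustrative guesses at such aspects, not the paper's feature set:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-answer features:
# [length_words, has_code, answerer_reputation, arrival_rank, similarity_to_siblings]
X = [[120, 1, 5400, 1, 0.30],
     [ 45, 0,  120, 3, 0.80],
     [300, 1,  900, 2, 0.55],
     [ 60, 0,   50, 4, 0.90]]
y = [1, 0, 1, 0]   # 1 = designated best answer (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0, stratify=y)
clf = LogisticRegression().fit(X_tr, y_tr)
print(clf.predict(X_te))   # predicted best-answer flags
print(clf.coef_[0])        # crude view of feature importance
```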
UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild
We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user-uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 43.9%. To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.
HDL-Cholesterol as a Risk Factor in Coronary Heart Disease
The aim of the Helsinki Heart Study, a 5-year primary prevention placebo-controlled study involving 4081 dyslipidaemic men (aged 40 to 55 years), was to investigate whether increasing high density lipoprotein (HDL)-cholesterol plasma levels and decreasing low density lipoprotein (LDL)-cholesterol levels would reduce the incidence of coronary heart disease. Gemfibrozil 600 mg twice daily was administered to induce these changes in lipoprotein levels. Baseline HDL-cholesterol levels in the study group were similar to those in the general population. Data from patients treated with placebo were analysed to investigate the influence of HDL-cholesterol levels on the incidence of coronary heart disease. Using the number of cardiac end-points per 1000 person-years to indicate the risk of coronary heart disease, it was clear that elevated HDL-cholesterol levels reduced the risk of coronary heart disease while the incidence increased at low HDL-cholesterol levels. This relationship was not altered when the effect of HDL-cholesterol levels was analysed jointly with other coronary risk factors (age, smoking or blood pressure). A weaker association was seen between LDL-cholesterol and risk of coronary heart disease, and triglycerides appeared to have no significant effect on the incidence of the disease. The data clearly suggest that HDL-cholesterol is a strong predictor of the incidence of coronary heart disease in the placebo group of the Helsinki Heart Study.
Therapeutic effects of the long-term use of PAN membrane dialyzer in hemodialysis patients: efficacy in old dialysis patients with mild PAD.
BACKGROUND The AN69 dialyzer, a plate-type dialyzer with a polyacrylonitrile (PAN) membrane, is reported to reduce symptoms in hemodialysis (HD) patients with complications such as poor nutritional status and peripheral arterial disease (PAD). Yet very few studies have investigated the long-term use of the PAN membrane or compared its solute-removal properties with those of the Type IV polysulfone (PS) membrane, the most widely used dialysis membrane. In the present study we compared the contaminant-removal properties of the AN69 membrane dialyzer with those of a Type IV PS membrane dialyzer and investigated the clinical effects of the long-term use of the former in elderly hemodialysis patients with mild PAD. METHODS Cross-over trials with 2-week intervals were conducted in 6 patients to compare the solute-removal performance of the membranes for small molecular weight substances, β2 microglobulin (β2MG), amino acids (AA), and serum albumin (Alb). Next, the AN69 membrane was used for dialysis over a period of 72 weeks in 8 patients. The time course changes of Alb, the geriatric nutritional risk index (GNRI), the % creatinine generation rate (%CGR), the normalized protein catabolic rate (nPCR) and the dry weight (DW) were observed to evaluate nutritional status. The time course changes of β2MG, C-reactive protein (CRP), LDL cholesterol (LDL), fibrinogen (Fib), nitrogen oxide (NOx), hemoglobin (Hb), ferritin, transferrin saturation (TSAT), dose of erythropoiesis-stimulating agents (ESA), and dose of iron were observed to evaluate the therapeutic effects of long-term use. Skin perfusion pressure (SPP) was measured at two points: once at the switchover to the AN69 membrane and once 72 weeks later. RESULTS In the cross-over trials, the AN69 membrane showed essentially the same dialysis efficiency as the PS membrane in removing small molecular weight substances, but it removed significantly lower amounts of β2MG. The AN69 membrane also showed a significantly lower AA removal rate and Alb leakage. Nutritional status was stably maintained during long-term use after the switchover to the AN69 membrane, and no significant increase of β2MG was observed. Fib and NOx were both reduced, the latter to a significant degree. The Hb values showed a good time course, with relatively high TSAT levels and low ferritin levels overall. SPP remained generally stable for 72 weeks. CONCLUSION The cross-over trials show that the AN69 membrane eliminates less AA and Alb than the PS membrane. Judging from the therapeutic effects of its long-term use, the AN69 membrane is effective for dialysis and has good biocompatibility in the treatment of elderly HD patients with mild PAD.
Chapter 2 Normal Glucose Homeostasis
Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl, such as after meal ingestion, and remaining above 55 mg/dl, such as after exercise or a moderate fast (60 h). This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies, whose fluctuations are much wider (Table 2.1). This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: a decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas of the brain (e.g., the hypothalamus, where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone). All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources, results from the breakdown of glycogen in the liver (glycogenolysis), or is formed in the liver and kidney from other carbon compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen, or (2) may undergo glycolysis, which can be non-oxidative, producing pyruvate (which can be reduced to lactate or transaminated to form alanine), or oxidative, through conversion to acetyl CoA, which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Carbons from non-oxidative glycolysis can undergo gluconeogenesis, and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 2.1).
The Structure and Function of Complex Networks
Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
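The quantities this review organizes, degree distributions, clustering, and small-world path lengths, are easy to probe numerically; a short sketch on the Erdős-Rényi random-graph baseline (one of the models the review covers):

```python
import networkx as nx

G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=42)   # Erdos-Renyi baseline model

# Degree distribution: ~Poisson here, unlike the heavy tails of many real networks.
hist = nx.degree_histogram(G)                        # hist[k] = nodes of degree k
mean_k = sum(k * c for k, c in enumerate(hist)) / G.number_of_nodes()
print("mean degree:", mean_k)                        # ~ (n - 1) * p

print("average clustering:", nx.average_clustering(G))   # ~ p for this model

giant = G.subgraph(max(nx.connected_components(G), key=len))
print("avg shortest path (giant component):",
      nx.average_shortest_path_length(giant))        # small-world: grows ~ log n
```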
Scientific co-authorship networks
The paper addresses the stability of co-authorship networks over time. The analysis is performed on the networks of Slovenian researchers in two time periods (1991-2000 and 2001-2010). Two researchers are linked if they published at least one scientific bibliographic unit together in a given time period. As proposed by Kronegger et al. (2011), the global network structures are examined by generalized blockmodeling with the assumed multi-core--semi-periphery--periphery blockmodel type. The term core denotes a group of researchers who published together in a systematic way. The obtained blockmodels are comprehensively analyzed through visualizations and several statistics regarding the global network structure. To measure the stability of the obtained blockmodels, different adjusted modified Rand and Wallace indices are applied. These make it possible to distinguish between the splitting and merging of cores when operationalizing their stability. The adjusted modified indices can also be used when new researchers appear in the second time period (newcomers) and when some researchers are no longer present in the second time period (departures). The research disciplines are described and clustered according to the values of these indices. Considering the obtained clusters, the sources of instability of the research disciplines are studied (e.g., merging or splitting of cores, newcomers or departures). Furthermore, the differences in the stability of the obtained cores at the level of scientific disciplines are studied by linear regression analysis, where some personal characteristics of the researchers (e.g., age, gender) are also considered.
Evidence for the role of holes in blinking: negative and oxidized CdSe/CdS dots.
Thin shell CdSe/CdS colloidal quantum dots with a small 3 nm core diameter exhibit typical blinking and a binary PL intensity distribution. Electrochemical charging with one electron suppresses the blinking. With a larger core of 5 nm, the blinking statistics of on and off states is identical to that of a smaller core but the dots also display a grey state with a finite duration time (~6 ms) on glass. However, the grey state disappears on the electron-accepting ZnO nanocrystals film. In addition, the grey state PL lifetime on glass is similar to the trion lifetime measured from electrochemically charged dots. Therefore, the grey state is assigned to the photocharged negative dots. It is concluded that a grey state is always present as the dots get negatively photocharged even though it might not be observed due to the brightness of the trion and/or the duration time of the negative charge. With thick shell CdSe/CdS dots under electrochemical control, multiple charging, up to four electrons per dot, is observed as sequential changes in the photoluminescence lifetime which can be described by the Nernst equation. The small potential increment confirms the weak electron confinement with the thick CdS shell. Finally, the mechanism of hole-trapping and surface oxidation by the hole is proposed to account for the grey state and off state in the blinking.
Gaussian Process Training with Input Noise
In standard Gaussian Process regression input locations are assumed to be noise free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local linear expansion about each input point. This allows the input noise to be recast as output noise proportional to the squared gradient of the GP posterior mean. The input noise variances are inferred from the data as extra hyperparameters. They are trained alongside other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme, which alternates between optimising the hyperparameters and calculating the posterior gradient. Analytic predictive moments can then be found for Gaussian distributed test points. We compare our model to others over a range of different regression problems and show that it improves over current methods.
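The core construction can be written compactly: linearizing the posterior mean about each input turns i.i.d. input noise into input-dependent output noise (notation mine, following the description above; Σ_x collects the inferred input noise variances):

```latex
y \;=\; f(x + \epsilon_x) + \epsilon_y
\;\approx\; f(x) + \epsilon_x^{\top}\,\partial_x \bar f + \epsilon_y ,
\qquad
\tilde\sigma^2(x) \;=\; \sigma_y^2 \;+\; \partial_x \bar f^{\top}\,\Sigma_x\,\partial_x \bar f .
```

The correction is largest where the posterior mean is steep, which matches the intuition that input noise matters most on sharp slopes of the underlying function.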
Study of Maternal Knowledge and Attitude toward Exclusive Breast Milk Feeding (BMF) in the First 6 Months of Infant in Yazd-Iran
Introduction: Breast milk is a complete food for growing children up to 6 months of age, and mothers, as the most important providers of child health care, play a decisive role in their growth. Promoting positive maternal attitudes toward the benefits of breastfeeding therefore helps secure child health in the future. This study aimed to assess maternal knowledge and attitude toward exclusive Breast Milk Feeding (BMF) in the first 6 months of infant life. Materials and Methods: This cross-sectional descriptive-analytic study was conducted on 190 mothers who were referred to Yazd health-care centers for monitoring of their 6-24-month-old infants. Participants, selected by cluster and simple random sampling, completed a questionnaire. Data were analyzed by descriptive-analytic tests using SPSS 11.5. Results: The mean score of maternal attitude toward exclusive BMF was 10.14±2.00 (out of 14), and the mean maternal knowledge score regarding the advantages of breast milk was 10.12±2.015 (out of 14). The incidence of exclusive BMF in the first 6 months of life was 72.9%. Child growth was as follows: excellent growth (24.5%) and good growth (55.3%). ANOVA showed a significant relationship between parents' education and maternal attitude and knowledge toward exclusive BMF: the higher the parents' education, the more positive the knowledge and attitude (P<0.05). There was a significant direct relationship between knowledge and attitude (Spearman test, P<0.001, r=0.4). Conclusion: Maternal knowledge and attitude toward exclusive BMF were moderate. Officials should plan to promote breastfeeding in the first 6 months of a baby's life and to enhance positive maternal attitudes in this regard.
Light Field Image Processing: An Overview
Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
Learning to Sense Sparse Signals: Simultaneous Sensing Matrix and Sparsifying Dictionary Optimization
Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On the one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than required by the classical Shannon-Nyquist theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. However, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.
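In the standard notation the abstract relies on, signals x ≈ Dα with sparse α and samples y = Φx, the joint design over a training set {x_i} can be sketched as below; this is a generic formulation of such a coupled objective, not necessarily the paper's exact one:

```latex
\min_{\Phi,\, D,\, \{\alpha_i\}} \;\sum_i
\left\lVert x_i - D\alpha_i \right\rVert_2^2
\;+\; \mu \left\lVert \Phi x_i - \Phi D \alpha_i \right\rVert_2^2
\quad \text{s.t.} \quad \lVert \alpha_i \rVert_0 \le T .
```

The first term is the usual dictionary-learning fidelity; the second couples the sensing matrix to the dictionary by asking that compressed measurements of a signal and of its sparse approximation agree.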
Correlation of Selenium and Zinc Levels to Antiretroviral Treatment Outcomes in Thai HIV-infected Children without Severe HIV Symptoms
Background/Objectives: Deficiencies in antioxidants contribute to immune dysregulation and viral replication. We aimed to evaluate the correlation of selenium (Se) and zinc (Zn) levels with treatment outcomes in HIV-infected children. Subjects/Methods: HIV-infected Thai children 1-12 years old, with CD4 15-24% and without severe HIV symptoms, were included. Se and Zn levels were measured by graphite furnace atomic absorption spectrometry at baseline and 48 weeks. Deficiency cutoffs were Se <0.1 μmol/l and Zn <9.9 μmol/l. Serum ferritin and C-reactive protein (CRP) were measured every 24 weeks. No micronutrient supplement was prescribed. Results: In all, 141 children (38.3% male) with a median (interquartile range (IQR)) age of 7.3 (4.2-9.0) years were enrolled. Median baseline CD4% was 20%, and HIV-RNA was 4.6 log10 copies/ml. At baseline, median (IQR) Se and Zn levels were 0.9 (0.7-1.0) μmol/l and 5.9 (4.8-6.9) μmol/l, respectively. None had Se deficiency, while all had Zn deficiency. Over 48 weeks, 97 initiated antiretroviral therapy (ART) and 81% achieved HIV-RNA <50 copies/ml, with an 11% median CD4 gain. The mean changes of Se and Zn were 0.06 μmol/l (P=0.003) and 0.42 μmol/l (P=0.003), respectively. By multivariate analysis in children who received ART, predictors for a greater increase of CD4% from baseline were lower baseline CD4% (P<0.01) and higher baseline Zn level (P=0.02). The predictors for a greater decrease of HIV-RNA from baseline were higher baseline HIV-RNA and higher ferritin (both P<0.01). No association of CRP with the changes from baseline of CD4% or HIV-RNA was found. Conclusion: In HIV-infected Thai children without severe immune deficiency who commenced ART, no correlation between Se and ART treatment outcomes was found. Higher pre-ART Zn levels were associated with significant increases in CD4% at 48 weeks.
Study and analysis of data mining for healthcare
This paper introduces data mining and big data in the context of healthcare. It then investigates the mining of accumulated healthcare data, with particular attention to the complexities of the various areas of health and medical research. Finally, machine learning algorithms are applied to study the processing of healthcare data.
Impossibility Theorems for Domain Adaptation
The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions. We study assumptions about the relationship between the two distributions that are needed for domain adaptation learning to succeed. We analyze these assumptions in an agnostic PAC-style learning model for the setting in which the learner can access a labeled training data sample and an unlabeled sample generated by the test data distribution. We focus on three assumptions: (i) similarity between the unlabeled distributions, (ii) existence of a classifier in the hypothesis class with low error on both training and testing distributions, and (iii) the covariate shift assumption, that is, the assumption that the conditional label distribution (for each data point) is the same for both the training and test distributions. We show that without either assumption (i) or (ii), the combination of the remaining assumptions is not sufficient to guarantee successful learning. Our negative results hold with respect to any domain adaptation learning algorithm, as long as it does not have access to target labeled examples. In particular, we provide formal proofs that the popular covariate shift assumption is rather weak and does not relieve the necessity of the other assumptions. We also discuss the intuitively appealing paradigm of re-weighting the labeled training sample according to the target unlabeled distribution and show that, somewhat counter-intuitively, this paradigm cannot be trusted in the following sense: there are DA tasks that are indistinguishable as far as the training data goes, yet re-weighting leads to significant improvement in one task while causing dramatic deterioration of learning success in the other.
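As a hedged sketch of the re-weighting paradigm the abstract analyzes (and ultimately cautions against), the following estimates importance weights with a source-vs-target domain classifier and trains a weighted learner; the data, the classifier choice, and the density-ratio-by-classifier trick are illustrative assumptions, not the paper's construction.

    # Importance re-weighting under covariate shift: a sketch, not an endorsement
    # (the paper proves this paradigm can fail on indistinguishable tasks).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    Xs = rng.normal(0.0, 1.0, size=(500, 2))          # labeled source sample
    ys = (Xs[:, 0] > 0).astype(int)
    Xt = rng.normal(1.0, 1.0, size=(500, 2))          # unlabeled target sample

    # Train a domain classifier: source = 0, target = 1.
    dom = LogisticRegression().fit(np.vstack([Xs, Xt]),
                                   np.r_[np.zeros(len(Xs)), np.ones(len(Xt))])
    p_t = dom.predict_proba(Xs)[:, 1]
    w = p_t / (1.0 - p_t)                 # estimated density ratio p_T(x)/p_S(x)

    # Weighted training on source labels, as the re-weighting paradigm prescribes.
    clf = LogisticRegression().fit(Xs, ys, sample_weight=w)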
Safe semi-autonomous control with enhanced driver modeling
During semi-autonomous driving, threat assessment is used to determine when controller intervention that overwrites or corrects the driver's input is required. Since today's semi-autonomous systems perform threat assessment by predicting the vehicle's future state while treating the driver's input as a disturbance, controller intervention is limited to just emergency maneuvers. In order to improve vehicle safety and reduce the aggressiveness of maneuvers, threat assessment must occur over longer prediction horizons where driver's behavior cannot be neglected. We propose a framework that divides the problem of semi-autonomous control into two components. The first component reliably predicts the vehicle's potential behavior by using empirical observations of the driver's pose. The second component determines when the semi-autonomous controller should intervene. To quantitatively measure the performance of the proposed approach, we define metrics to evaluate the informativeness of the prediction and the utility of the intervention procedure. A multi-subject driving experiment illustrates the usefulness, with respect to these metrics, of incorporating the driver's pose while designing a semi-autonomous system.
Measuring experiential avoidance and psychological inflexibility: The Spanish version of the Acceptance and Action Questionnaire - II.
BACKGROUND Experiential avoidance and psychological inflexibility have recently been found to be important constructs related to a wide range of psychological disorders and quality of life. The current study presents psychometric and factor structure data concerning the Spanish translation of a general measure of both constructs: the Acceptance and Action Questionnaire - II (AAQ-II). METHOD Six samples, with a total of 712 participants, from several independent studies were analyzed. RESULTS Data were very similar to those obtained with the original AAQ-II version. The internal consistency across the different samples was good (between α = .75 and α = .93). The differences between clinical and nonclinical samples were statistically significant, and the overall factor analysis yielded a one-factor solution. The AAQ-II scores were significantly related to general psychopathology and quality of life measures. CONCLUSIONS This Spanish translation of the AAQ-II emerges as a reliable and valid measure of experiential avoidance and psychological inflexibility.
Summarization using Neural Networks and Rhetorical Structure Theory
A new technique for summarizing articles, combining a neural network with rhetorical structure theory, is presented. A neural network is trained with backpropagation to learn the characteristics of sentences that belong in an article's summary. After training, the network is modified through feature fusion and pruning so that it retains only the characteristics apparent in summary sentences. Finally, the modified neural network is used to rank the sentences of an article, and its output is combined with rhetorical structure theory to form the final summary.
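A minimal sketch in the spirit of the described pipeline, assuming an invented three-dimensional sentence feature set and synthetic labels; the feature-fusion, pruning, and rhetorical-structure steps of the paper are not reproduced.

    # Train a small feedforward scorer on per-sentence features, then take the
    # top-scoring sentences as the summary. Features and labels are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Illustrative per-sentence features: [position-in-article, length, title overlap].
    X = rng.random((200, 3))
    y = (0.5 * X[:, 2] + 0.3 * (1 - X[:, 0]) + rng.normal(0, .05, 200) > 0.45).astype(int)

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    scores = net.predict_proba(X)[:, 1]      # probability a sentence is summary-worthy
    summary_idx = np.argsort(scores)[-5:]    # take the top-5 sentences as the summary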
Recent Advances in Hierarchical Reinforcement Learning
Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
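The semi-Markov decision process view the review emphasizes leads to a Q-learning update in which an option's multi-step duration discounts the bootstrap term; a minimal tabular sketch (step size, discount, and naming are assumptions) follows.

    # SMDP Q-learning update for temporally extended options: after executing
    # option o from state s for k steps, accruing discounted reward r, and
    # landing in s2, update Q(s, o). Tabular representation assumed.
    from collections import defaultdict

    Q = defaultdict(float)
    alpha, gamma = 0.1, 0.99

    def smdp_update(s, o, r, k, s2, options):
        """r is the cumulative discounted reward over the option's k steps."""
        best_next = max(Q[(s2, o2)] for o2 in options)
        Q[(s, o)] += alpha * (r + gamma ** k * best_next - Q[(s, o)])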
Bartendr: a practical approach to energy-aware cellular data scheduling
Cellular radios consume more power and suffer reduced data rate when the signal is weak. According to our measurements, the communication energy per bit can be as much as 6x higher when the signal is weak than when it is strong. To realize energy savings, applications must preferentially communicate when the signal is strong, either by deferring non-urgent communication or by advancing anticipated communication to coincide with periods of strong signal. Allowing applications to perform such scheduling requires predicting signal strength, so that opportunities for energy-efficient communication can be anticipated. Furthermore, such prediction must be performed at little energy cost. In this paper, we make several contributions towards a practical system for energy-aware cellular data scheduling called Bartendr. First, we establish, via measurements, the relationship between signal strength and power consumption. Second, we show that location alone is not sufficient to predict signal strength and motivate the use of tracks to enable effective prediction. Finally, we develop energy-aware scheduling algorithms for different workloads - syncing and streaming - and evaluate these via simulation driven by traces obtained during actual drives, demonstrating energy savings of up to 60%. Our experiments have been performed on four cellular networks across two large metropolitan areas, one in India and the other in the U.S.
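A toy rendering of the deferral idea: given a predicted signal-strength track, start the sync in the slot that minimizes energy per bit. The energy model and the track values are illustrative assumptions, not Bartendr's measured model.

    # Sketch of signal-aware deferral: pick the cheapest slot before the deadline.
    def pick_sync_slot(predicted_signal, deadline_slot, energy_per_bit):
        """predicted_signal: dBm per future time slot; sync must start by deadline."""
        candidates = range(min(deadline_slot + 1, len(predicted_signal)))
        return min(candidates, key=lambda t: energy_per_bit(predicted_signal[t]))

    # Toy energy model: weaker signal costs more energy per bit.
    slot = pick_sync_slot([-95, -90, -70, -85], deadline_slot=3,
                          energy_per_bit=lambda dbm: 1.0 / 10 ** (dbm / 20))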
Assessment of postgraduate skin lesion education among Iowa family physicians
BACKGROUND Family medicine physicians play a pivotal role in the prevention and early detection of skin cancer. Our objective was to evaluate how family physicians believe their postgraduate training in skin cancer screening and prevention has prepared them for independent practice and to assess the need for enhanced skin lesion teaching in a family medicine residency setting. METHODS A descriptive, cross-sectional survey investigating provider demographics, confidence in providing dermatological care, residency training, current medical practice, and skin cancer prevention beliefs was mailed to all family medicine physicians in the state of Iowa as listed in the Iowa Academy of Family Physicians annual directory in 2006 (N = 1069). RESULTS A total of 575 family medicine physicians completed the survey for an overall response rate of 53.8%. Overall, family medicine physicians reported feeling confident in their ability to diagnose skin lesions (83.2%), differentiate between benign and malignant lesions (85.3%), and perform a biopsy of a lesion (94.3%). Only 65% of surveyed physicians felt that their residency program adequately trained them in diagnosing skin lesions and 65.7% of physicians agree that they could have benefited from additional training on skin lesions during residency training. Nearly 90% of clinicians surveyed believe that skin cancer screenings are the standard of care; however, only 51.8% perform skin cancer screening examinations during adult health maintenance visits more than 75% of the time. The primary reason listed by respondents who said they do not routinely perform skin cancer screenings was inadequate time (68.2%). CONCLUSION Family medicine physicians in the state of Iowa are confident in evaluating skin lesions. However, they reported a need for additional enhanced, targeted skin lesion education in family medicine residency training programs. Physicians believe that skin cancer screening examination is the standard of care, but find that inadequate time increasingly hinders skin cancer screening during routine health maintenance examinations.
Long-term TENS treatment improves tactile sensitivity in MS patients
Long-lasting increases in tactile sensitivity were achieved by repetitive stimulation of sensory afferents with TENS in MS patients but not in healthy subjects. This increased sensitivity was not restricted to the median nerve area but extended also to the ulnar nerve area. Remarkably, MS patients reached the same level of sensitivity as healthy subjects immediately after the intervention, and long-term effects were reported 3 weeks later.
Comparison of in-person and digital photograph assessment of stage III and IV pressure ulcers among veterans with spinal cord injuries.
Digital photographs are often used in treatment monitoring for home care of less advanced pressure ulcers. We investigated assessment agreement when stage III and IV pressure ulcers in individuals with spinal cord injury were evaluated in person and with the use of digital photographs. Two wound-care nurses assessed 31 wounds among 15 participants. One nurse assessed all wounds in person, while the other used digital photographs. Twenty-four wound description categories were applied in the nurses' assessments. Kappa statistics were calculated to investigate agreement beyond chance (p ≤ 0.05). For 10 randomly selected "double-rated wounds," both nurses applied both assessment methods. Fewer categories were evaluated for the double-rated wounds, because some categories were chosen infrequently and agreement could not be measured. Interrater agreement with the two methods was observed for 12 of the 24 categories (50.0%). However, of the 12 categories with agreement beyond chance, agreement was only "slight" (kappa = 0-0.20) or "fair" (kappa = 0.21-0.40) for 6 categories. The highest agreement was found for the presence of undermining (kappa = 0.853, p < 0.001). Interrater agreement was similar to intramethod agreement (41.2% of the categories demonstrated agreement beyond chance) for the nurses' in-person assessment of the double-rated wounds. The moderate agreement observed may be attributed to variation in subjective perception of qualitative wound characteristics.
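The kappa computation used throughout the study is standard; a minimal sketch with stand-in rating vectors (not the study's data):

    # Cohen's kappa for two raters over the same wounds; illustrative data only.
    from sklearn.metrics import cohen_kappa_score

    in_person = [1, 1, 0, 1, 0, 0, 1, 1]   # nurse A: category present/absent per wound
    photo     = [1, 0, 0, 1, 0, 1, 1, 1]   # nurse B: same wounds, via photograph
    kappa = cohen_kappa_score(in_person, photo)  # 0-0.20 "slight", 0.21-0.40 "fair", ...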
Sleep-wake disturbances after traumatic brain injury
Sleep-wake disturbances are extremely common after a traumatic brain injury (TBI). The most common disturbances are insomnia (difficulties falling or staying asleep), increased sleep need, and excessive daytime sleepiness that can be due to the TBI or other sleep disorders associated with TBI, such as sleep-related breathing disorder or post-traumatic hypersomnia. Sleep-wake disturbances can have a major effect on functional outcomes and on the recovery process after TBI. These negative effects can exacerbate other common sequelae of TBI-such as fatigue, pain, cognitive impairments, and psychological disorders (eg, depression and anxiety). Sleep-wake disturbances associated with TBI warrant treatment. Although evidence specific to patients with TBI is still scarce, cognitive-behavioural therapy and medication could prove helpful to alleviate sleep-wake disturbances in patients with a TBI.
Job embeddedness: a theoretical foundation for developing a comprehensive nurse retention plan.
OBJECTIVE Using a new construct, job embeddedness, from the business management literature, this study first examines its value in predicting employee retention in a healthcare setting and second, assesses whether the factors that influence the retention of nurses are systematically different from those influencing other healthcare workers. BACKGROUND The shortage of skilled healthcare workers makes it imperative that healthcare providers develop effective recruitment and retention plans. With nursing turnover averaging more than 20% a year and competition to hire new nurses fierce, many administrators rightly question whether they should develop specialized plans to recruit and retain nurses. METHODS A longitudinal research design was employed to assess the predictive validity of the job embeddedness concept. At time 1, surveys were mailed to a random sample of 500 employees of a community-based hospital in the Northwest region of the United States. The survey assessed personal characteristics, job satisfaction, organizational commitment, job embeddedness, job search, perceived alternatives, and intent to leave. One year later (time 2) the organization provided data regarding voluntary leavers from the hospital. RESULTS Hospital employees returned 232 surveys, yielding a response rate of 46.4%. The results indicate that job embeddedness predicted turnover above and beyond a combination of perceived desirability of movement measures (job satisfaction, organizational commitment) and perceived ease of movement measures (job alternatives, job search). Thus, job embeddedness assesses new and meaningful variance in turnover in excess of that predicted by the major variables included in almost all the major models of turnover. CONCLUSIONS The findings suggest that job embeddedness is a valuable lens through which to evaluate employee retention in healthcare organizations. Further, the levers for influencing retention are substantially similar for nurses and other healthcare workers. Implications of these findings and recommendations for recruitment and retention policy development are presented.
High-resolution, real-time 3D absolute coordinate measurement based on a phase-shifting method.
We describe a high-resolution, real-time 3D absolute coordinate measurement system based on a phase-shifting method. It acquires 3D shape at 30 frames per second (fps), with 266K points per frame. A tiny marker is encoded in the projected fringe pattern, and detected by software from the texture image and the gamma map. Absolute 3D coordinates are obtained from the detected marker position and the calibrated system parameters. To demonstrate the performance of the system, we measure a hand moving over a depth distance of approximately 700 mm, and human faces with expressions. Applications of such a system include manufacturing, inspection, entertainment, security, medical imaging.
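One common phase-shifting variant, offered only as a generic illustration (the system's exact algorithm, marker encoding, and calibration are not reproduced), is the three-step method with 2π/3 shifts:

    # Three-step phase-shifting (shifts of 2*pi/3): wrapped phase per pixel.
    # Textbook formula; this paper additionally embeds a marker in the fringe
    # pattern and maps the phase to absolute coordinates via calibration.
    import numpy as np

    def wrapped_phase(I1, I2, I3):
        """I1..I3: the three captured fringe images as float arrays."""
        return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)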
Peer support services for individuals with serious mental illnesses: assessing the evidence.
OBJECTIVE This review assessed the level of evidence and effectiveness of peer support services delivered by individuals in recovery to those with serious mental illnesses or co-occurring mental and substance use disorders. METHODS Authors searched PubMed, PsycINFO, Applied Social Sciences Index and Abstracts, Sociological Abstracts, Social Services Abstracts, Published International Literature on Traumatic Stress, the Educational Resources Information Center, and the Cumulative Index to Nursing and Allied Health Literature for outcome studies of peer support services from 1995 through 2012. They found 20 studies across three service types: peers added to traditional services, peers in existing clinical roles, and peers delivering structured curricula. Authors judged the methodological quality of the studies using three levels of evidence (high, moderate, and low). They also described the evidence of service effectiveness. RESULTS The level of evidence for each type of peer support service was moderate. Many studies had methodological shortcomings, and outcome measures varied. The effectiveness varied by service type. Across the range of methodological rigor, a majority of studies of two service types--peers added and peers delivering curricula--showed some improvement favoring peers. Compared with professional staff, peers were better able to reduce inpatient use and improve a range of recovery outcomes, although one study found a negative impact. Effectiveness of peers in existing clinical roles was mixed. CONCLUSIONS Peer support services have demonstrated many notable outcomes. However, studies that better differentiate the contributions of the peer role and are conducted with greater specificity, consistency, and rigor would strengthen the evidence.
Topography of projections from amygdala to bed nuclei of the stria terminalis
A collection of 125 PHAL experiments in the rat has been analyzed to characterize the organization of projections from each amygdalar cell group (except the nucleus of the lateral olfactory tract) to the bed nuclei of the stria terminalis, which surround the crossing of the anterior commissure. The results suggest three organizing principles of these connections. First, the central nucleus, and certain other amygdalar cell groups associated with the main olfactory system, innervate preferentially various parts of the lateral and medial halves of the bed nuclear anterior division, and these projections travel via both the stria terminalis and ansa peduncularis (ventral pathway). Second, in contrast, the medial nucleus, and the rest of the amygdalar cell groups associated with the accessory and main olfactory systems innervate preferentially the posterior division, and the medial half of the anterior division, of the bed nuclei. And third, the lateral and anterior basolateral nuclei of the amygdala (associated with the frontotemporal association cortical system) do not project significantly to the bed nuclei. For comparison, inputs to the bed nuclei from the ventral subiculum, infralimbic area, and endopiriform nucleus are also described. The functional significance of these projections is discussed with reference to what is known about the output of the bed nuclei.
Probabilistic Knowledge Graph Construction: Compositional and Incremental Approaches
Knowledge graph construction consists of two tasks: extracting information from external resources (knowledge population) and inferring missing information through a statistical analysis on the extracted information (knowledge completion). In many cases, insufficient external resources in the knowledge population hinder the subsequent statistical inference. The gap between these two processes can be reduced by an incremental population approach. We propose a new probabilistic knowledge graph factorisation method that benefits from the path structure of existing knowledge (e.g. syllogism) and enables a common modelling approach to be used for both incremental population and knowledge completion tasks. More specifically, the probabilistic formulation allows us to develop an incremental population algorithm that trades off exploitation against exploration. Experiments on three benchmark datasets show that balancing exploitation and exploration helps the incremental population, and the additional path structure helps to predict missing information in knowledge completion.
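For intuition, knowledge-graph factorisation models typically score a triple bilinearly from learned embeddings and pass the score through a link function to obtain a probability; the sketch below is a generic example of that pattern, not the paper's specific model with its path-structure terms.

    # Generic bilinear triple scoring for probabilistic KG factorisation.
    import numpy as np

    def triple_score(e_s, W_r, e_o):
        """Score of (subject, relation, object) from embeddings; higher = more plausible."""
        return e_s @ W_r @ e_o

    def triple_prob(e_s, W_r, e_o):
        return 1.0 / (1.0 + np.exp(-triple_score(e_s, W_r, e_o)))  # Bernoulli link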
Do patients with Parkinson’s disease benefit from embryonic dopamine cell transplantation?
Embryonic dopamine cell transplants survive in nearly all patients regardless of age and without immunosuppression. Transplants can improve Parkinson "off" symptoms up to the best effects of L-dopa observed preoperatively. They cannot improve the "best on" state. Transplants appear to survive indefinitely. In 10 to 15% of patients, transplants can reproduce the dyskinetic effects of L-dopa even after discontinuing all L-dopa. Neurotransplantation should be tried earlier in the clinical course of Parkinson's disease to see if earlier intervention can prevent progression of the disease, particularly the dyskinetic responses seen after long-term L-dopa treatment.
Group-Sensitive Triplet Embedding for Vehicle Reidentification
The widespread use of surveillance cameras toward smart and safe cities poses the critical but challenging problem of vehicle reidentification (Re-ID). The state-of-the-art research work performed vehicle Re-ID relying on deep metric learning with a triplet network. However, most existing methods basically ignore the impact of intraclass variance-incorporated embedding on the performance of vehicle reidentification, in which robust fine-grained features for large-scale vehicle Re-ID have not been fully studied. In this paper, we propose a deep metric learning method, group-sensitive-triplet embedding (GS-TRE), to recognize and retrieve vehicles, in which intraclass variance is elegantly modeled by incorporating an intermediate representation “group” between samples and each individual vehicle in the triplet network learning. To capture the intraclass variance attributes of each individual vehicle, we utilize an online grouping method to partition samples within each vehicle ID into a few groups, and build up the triplet samples at multiple granularities across different vehicle IDs as well as different groups within the same vehicle ID to learn fine-grained features. In particular, we construct a large-scale vehicle database “PKU-Vehicle,” consisting of 10 million vehicle images captured by different surveillance cameras in several cities, to evaluate the vehicle Re-ID performance in real-world video surveillance applications. Extensive experiments over benchmark datasets VehicleID, VeRI, and CompCar have shown that the proposed GS-TRE significantly outperforms the state-of-the-art approaches for vehicle Re-ID.
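The triplet objective underlying such embeddings pulls an anchor toward a positive of the same identity and pushes it away from a negative; a minimal sketch (margin value assumed, and without the paper's group-sensitive sampling):

    # Standard triplet loss that GS-TRE builds on.
    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.5):
        d_ap = np.sum((anchor - positive) ** 2)   # anchor-positive distance
        d_an = np.sum((anchor - negative) ** 2)   # anchor-negative distance
        return max(0.0, d_ap - d_an + margin)     # pull positives in, push negatives out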
Feedback-Mediated Upper Extremities Exercise: Increasing Patient Motivation in Poststroke Rehabilitation
PURPOSE This proof-of-concept study investigated whether feedback-mediated exercise (FME) of the affected arm of hemiplegic patients increases patient motivation and promotes greater improvement of motor function, compared to no-feedback exercise (NFE). METHOD We developed a feedback-mediated treatment that uses gaming scenarios and allows online and offline monitoring of both temporal and spatial characteristics of planar movements. Twenty poststroke hemiplegic inpatients, randomly assigned to the FME and NFE groups, received therapy five days a week for three weeks. The outcome measures were: (1) the modified drawing test (mDT), (2) received therapy time (RTT), and (3) the intrinsic motivation inventory (IMI). RESULTS The FME group patients showed significantly higher improvement in the speed metric (P < 0.01) and smoothness metric (P < 0.01), as well as higher RTT (P < 0.01). Significantly higher patient motivation was observed in the FME group (interest/enjoyment subscale (P < 0.01) and perceived competence subscale (P < 0.01)). CONCLUSION Prolonged endurance in training and greater improvement in certain areas of motor function, as well as very high patient motivation and strong positive impressions about the treatment, suggest the positive effects of feedback-mediated treatment and its high level of acceptance by patients.
STOCK MARKET VOLATILITY IN OECD COUNTRIES: RECENT TRENDS. CONSEQUENCES FOR THE REAL ECONOMY. AND PROPOSALS FOR REFORM
Contents: Introduction; I. Trends in stock index volatilities and return interrelationships: A. Measurement issues; B. Stock returns volatility; C. Interrelationships among national stock market returns; D. Stock index return correlations; E. Correlations among stock index return volatilities; F. Return correlations during crisis periods.
Global Contrast Based Salient Region Detection
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
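A simplified, grayscale rendering of the global-contrast idea, kept deliberately minimal (the paper operates on color histograms and adds region-level spatial weighting):

    # Histogram-based global contrast: the saliency of an intensity level is its
    # frequency-weighted distance to all other levels.
    import numpy as np

    def global_contrast_saliency(gray):
        """gray: uint8 image. Returns a per-pixel saliency map in [0, 1]."""
        hist = np.bincount(gray.ravel(), minlength=256) / gray.size
        levels = np.arange(256, dtype=float)
        dist = np.abs(levels[:, None] - levels[None, :])   # |c_i - c_j| for all pairs
        sal_per_level = dist @ hist                        # sum_j f_j * |c_i - c_j|
        sal = sal_per_level[gray]
        return (sal - sal.min()) / (np.ptp(sal) + 1e-12)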
Automatic exploitation of multilingual information for military intelligence purposes
Intelligence plays an important role in supporting military operations. In the course of military intelligence a vast amount of textual data in different languages needs to be analyzed. In addition to information provided by traditional military intelligence, nowadays the internet offers important resources of potential militarily relevant information. However, we are not able to manually handle this vast amount of data. The science of natural language processing (NLP) provides technology to efficiently handle this task, in particular by means of machine translation and text mining. In our research project ISAF-MT we created a statistical machine translation (SMT) system for Dari to German. In this paper we describe how NLP technologies and in particular SMT can be applied to different intelligence processes. We therefore argue that multilingual NLP technology can strongly support military operations.
A generalized hidden Markov model with discriminative training for query spelling correction
Query spelling correction is a crucial component of modern search engines. Existing methods in the literature for search query spelling correction have two major drawbacks. First, they are unable to handle certain important types of spelling errors, such as concatenation and splitting. Second, they cannot efficiently evaluate all the candidate corrections due to the complex form of their scoring functions, and a heuristic filtering step must be applied to select a working set of top-K most promising candidates for final scoring, leading to non-optimal predictions. In this paper we address both limitations and propose a novel generalized Hidden Markov Model with discriminative training that can not only handle all the major types of spelling errors, including splitting and concatenation errors, in a single unified framework, but also efficiently evaluate all the candidate corrections to ensure the finding of a globally optimal correction. Experiments on two query spelling correction datasets demonstrate that the proposed generalized HMM is effective for correcting multiple types of spelling errors. The results also show that it significantly outperforms the current approach for generating top-K candidate corrections, making it a better first-stage filter to enable any other complex spelling correction algorithm to have access to a better working set of candidate corrections as well as to cover splitting and concatenation errors, which no existing method in academic literature can correct.
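At the core of any HMM-based corrector is Viterbi dynamic programming over per-position candidate corrections; a generic sketch with placeholder scoring functions standing in for the paper's discriminatively trained features:

    # Generic Viterbi decoding over candidate corrections (log-domain scores).
    def viterbi(candidates, emit, trans):
        """candidates: list over query positions, each a list of candidate strings.
        emit(i, c): score of candidate c at position i; trans(p, c): transition score."""
        best = {c: emit(0, c) for c in candidates[0]}
        back = []
        for i in range(1, len(candidates)):
            new_best, ptr = {}, {}
            for c in candidates[i]:
                prev = max(best, key=lambda p: best[p] + trans(p, c))
                new_best[c] = best[prev] + trans(prev, c) + emit(i, c)
                ptr[c] = prev
            back.append(ptr)
            best = new_best
        # Backtrace the globally best correction sequence.
        last = max(best, key=best.get)
        path = [last]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path))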
Comparison of distribution transformer losses and capacity under linear and harmonic loads
The significance of harmonics in power systems has increased substantially due to the use of solid-state controlled loads and other high-frequency producing devices. An important consideration when evaluating the impact of harmonics is their effect on power system components and loads. Transformers are major components in power systems. Supplying non-linear loads from a transformer leads to higher losses, early fatigue of insulation, and a reduction of the useful life of the transformer. To prevent these problems, the rated capacity of a transformer supplying non-linear loads must be reduced. This paper reviews the effects of non-linear loads on transformers and the standard IEEE procedures for derating transformers subjected to distorted currents. The equivalent losses and capacity of a typical 25 kVA single-phase transformer are then evaluated using analysis and MATLAB/Simulink simulations, based on a transformer model valid under harmonic conditions, and the results are compared.
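The standard derating computation (IEEE Std C57.110) centers on a harmonic loss factor that weights eddy-current losses by the square of the harmonic order; a sketch follows, with the per-unit eddy-loss value and the harmonic spectrum as assumed example numbers.

    # Harmonic loss factor and derated current in the spirit of IEEE Std C57.110.
    def harmonic_loss_factor(spectrum):
        """spectrum: {harmonic order h: Ih/I1}. F_HL weights eddy losses by h^2."""
        num = sum((ih ** 2) * (h ** 2) for h, ih in spectrum.items())
        den = sum(ih ** 2 for ih in spectrum.values())
        return num / den

    def max_pu_current(f_hl, pec_r=0.09):   # pec_r: assumed per-unit eddy loss
        """Max per-unit RMS current keeping total load loss within rating."""
        return ((1 + pec_r) / (1 + f_hl * pec_r)) ** 0.5

    spectrum = {1: 1.0, 3: 0.30, 5: 0.15, 7: 0.08}   # example non-linear load
    print(max_pu_current(harmonic_loss_factor(spectrum)))   # derated capacity, p.u.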
Prevalence, severity, and unmet need for treatment of mental disorders in the World Health Organization World Mental Health Surveys.
CONTEXT Little is known about the extent or severity of untreated mental disorders, especially in less-developed countries. OBJECTIVE To estimate prevalence, severity, and treatment of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) mental disorders in 14 countries (6 less developed, 8 developed) in the World Health Organization (WHO) World Mental Health (WMH) Survey Initiative. DESIGN, SETTING, AND PARTICIPANTS Face-to-face household surveys of 60 463 community adults conducted from 2001-2003 in 14 countries in the Americas, Europe, the Middle East, Africa, and Asia. MAIN OUTCOME MEASURES The DSM-IV disorders, severity, and treatment were assessed with the WMH version of the WHO Composite International Diagnostic Interview (WMH-CIDI), a fully structured, lay-administered psychiatric diagnostic interview. RESULTS The prevalence of having any WMH-CIDI/DSM-IV disorder in the prior year varied widely, from 4.3% in Shanghai to 26.4% in the United States, with an interquartile range (IQR) of 9.1%-16.9%. Between 33.1% (Colombia) and 80.9% (Nigeria) of 12-month cases were mild (IQR, 40.2%-53.3%). Serious disorders were associated with substantial role disability. Although disorder severity was correlated with probability of treatment in almost all countries, 35.5% to 50.3% of serious cases in developed countries and 76.3% to 85.4% in less-developed countries received no treatment in the 12 months before the interview. Due to the high prevalence of mild and subthreshold cases, the number of those who received treatment far exceeds the number of untreated serious cases in every country. CONCLUSIONS Reallocation of treatment resources could substantially decrease the problem of unmet need for treatment of mental disorders among serious cases. Structural barriers exist to this reallocation. Careful consideration needs to be given to the value of treating some mild cases, especially those at risk for progressing to more serious disorders.
Minilaparotomy approach for colonic cancer: initial experience of 54 cases
The early outcomes of minilaparotomy for resection of colonic cancer were evaluated. In this study, 54 patients (34 Dukes' A, 15 Dukes' B, and 5 Dukes' C) successfully underwent curative resection of colonic cancer via minilaparotomy (skin incision ≤7 cm). The major exclusion criteria for this approach were a body mass index greater than 25 kg/m2, a tumor size exceeding 7 cm, preoperative ileus, and tumor invasion of adjacent organs. Patients (n = 54) who had undergone conventional open surgery before the introduction of this technique served as the control group by matching several clinicopathologic factors including body mass index. The passage of flatus (p < 0.01) and the beginning of oral intake (p = 0.02) were earlier, analgesic requirements were lower (p < 0.01), and postoperative serum C-reactive protein levels were lower in the minilaparotomy group (p < 0.01). The blood loss and frequency of postoperative complications did not differ between the groups. A minilaparotomy approach is a feasible, minimally invasive, and attractive alternative to conventional laparotomy for selected patients with colonic cancer.
Ultrasound-guided spinal accessory nerve blockade in the diagnosis and management of trapezius muscle-related myofascial pain.
We report the first description of ultrasound-guided spinal accessory nerve blockade using single-shot and subsequently continuous infusion (via a perineural catheter) local anaesthetic techniques, for the diagnosis and treatment of myofascial pain affecting the trapezius muscle. A 38-year-old man presented with a two-year history of incapacitating left suprascapular pain after a fall onto his outstretched hand. The history and clinical examination was suggestive of myofascial pain affecting the trapezius muscle. This had been unresponsive to pharmacological therapy, physiotherapy or suprascapular nerve blockade. Following identification of the spinal accessory nerve in the posterior triangle of the neck, we performed ultrasound-guided nerve blocks, first using a single injection of local anaesthetic and subsequently using a continuous infusion via a perineural catheter, to block the nerve and temporarily relieve the patient's pain. We have demonstrated that the spinal accessory nerve is identifiable in the posterior triangle of the neck and can be blocked successfully using ultrasound guidance. This technique can aid the diagnosis and treatment of myofascial pain originating from the trapezius muscle.
Developing an explanatory model for the process of online radicalisation and terrorism
While the use of the internet and social media as a tool for extremists and terrorists has been well documented, understanding the mechanisms at work has been much more elusive. This paper takes a grounded theory approach, guided by a new theoretical treatment of power, that draws on both terrorism cases and extremist social media groups to develop an explanatory model of radicalisation. Preliminary hypotheses are developed, explored and refined in order to develop a comprehensive model, which is then presented. This model utilizes and applies concepts from social theorist Michel Foucault, including the use of discourse and networked power relations, in order to normalize and modify thoughts and behaviors. The internet is conceptualized as a type of institution in which this framework of power operates and seeks to recruit and radicalize. Overall, findings suggest that the explanatory model presented is well suited, yet still incomplete, in explaining the process of online radicalisation.
Satellite alerts track deforestation in real time
A satellite-based alert system could prove a potent weapon in the fight against deforestation. As few as eight hours after it detects that trees are being cut down, the system will send out e-mails warning that an area is endangered. That rapid response could enable environmental managers to catch illegal loggers before they damage large swathes of forest. "It's going to be very, very helpful," says Brian Zutta Salazar, a remote-sensing scientist at the Peruvian Ministry of the Environment in Lima. Satellites are already valuable tools for monitoring deforestation; in recent decades, they have delivered consistent data on forest change over large and often remote areas. One such effort, the Real Time System for Detection of Deforestation, or DETER, has helped Brazil's government to reduce its deforestation rate by almost 80% since 2004, by alerting the country's environmental police to large-scale forest clearing. But DETER and other existing alert systems can be relatively slow to yield useful information. They use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, which at its top resolution produces images with pixels covering an area 250 metres on each side, roughly equivalent to 10 football pitches. This is too big to spot small changes in land cover, so it can take computer programs that process MODIS data weeks or even months to detect that a forest is being cleared. "By the time MODIS picks up on it, it's almost too late," says Peter Ellis, a forest-carbon scientist at the Nature Conservancy, a conservation group in Arlington, Virginia. Seeking to provide a sharper view, geographer Matthew Hansen of the University of Maryland in College Park and his colleagues published maps showing year-to-year changes in global forest cover from 2000 to 2012 (M. C. Hansen et al. Science 342, 850–853; 2013). The researchers relied on data from NASA's two active Landsat satellites, which together photograph every spot on Earth every eight days. Each pixel in a Landsat image is 30 metres on each side, roughly the size of a baseball diamond. This means that an area covered by just one MODIS pixel is captured in roughly 70 smaller Landsat pixels. Hansen and his team wrote data-processing software that can use these higher-resolution images to recognize a disturbance as small as a road snaking its way through a previously untouched forest, something that often appears before clear-cutting begins. …
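The "roughly 70" figure is simple arithmetic on the two pixel sizes; as a worked check:

    \left(\frac{250\,\mathrm{m}}{30\,\mathrm{m}}\right)^{2} \approx 69.4 \approx 70 \ \text{Landsat pixels per MODIS pixel}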
Fibrosis in Chronic Hepatitis C Acquired in Infancy: Is It Only a Matter of Time?
OBJECTIVE: The natural history of chronic hepatitis C acquired in infancy is not well understood. The progression of fibrosis was analyzed in untreated children with chronic hepatitis C virus infection and no other hepatotoxic cofactors. METHODS: A total of 112 pediatric patients (13 with paired liver biopsies) were considered. Fibrosis was assessed by METAVIR score (i.e., stage F1 to F4). The ratio between the stage of fibrosis (METAVIR units) and the presumed duration of infection represented the "estimated" rate of fibrosis progression per year. In patients with paired biopsies, the "observed" rate of fibrosis progression was defined as the difference between the stage of fibrosis in the two biopsies divided by the time interval between them. RESULTS: Both age of patients at biopsy and duration of infection correlated with stage of fibrosis (p < 0.002 and p < 0.0005, respectively). Stage of fibrosis differed significantly between patients with infection lasting less or more than 10 yr (p < 0.0006). Sex, hepatitis C virus genotype, and route of infection did not correlate with stage of fibrosis. Among the 13 patients with paired biopsies, stage of fibrosis increased in seven and did not change in six; the median rate of estimated fibrosis progression per year was 0.142. The difference between estimated and observed fibrosis progression rates was significant (coefficient of determination, r2 = 0.031; i.e., the estimated rate explained only about 3% of the variance in the observed rate), demonstrating that the prediction of fibrosis progression was unreliable in 97% of patients. CONCLUSIONS: Chronic hepatitis C acquired in childhood is a progressive, slow-moving, fibrotic disease. Fibrosis progression inferred on the basis of linear mathematical models should be critically evaluated in clinical practice.
Comparative Study of Cyber Security Policies among Malaysia, Australia, Indonesia: A Responsibility Perspective
The significant growth of information and communication technology (ICT) has the potential to promote economic growth. On the other hand, it also causes an increase in cyber threats and hence must be handled properly. A comprehensive and collective approach to handling cyber threats should therefore be considered important. One of the most important aspects to consider is organizational structure, which plays a significant role in organizing responsibility among cyber-related agencies. This paper provides a comparative study of how responsibilities for cyber-related roles are delegated among agencies in three different countries: Malaysia, Australia, and Indonesia. The study shows that Indonesia has a more complex scheme for partitioning and delegating these responsibilities.
Engineering a successful partnership between academia and the financial industry: A software engineering program for IT professionals
This paper describes an ongoing partnership between the Seidenberg School of Computer Science and Information Systems at Pace University, a Registered Education Provider (REP) for IEEE, and the Information Technology Division at Bank of New York Mellon. The goal of the project is to deliver, at the bank's location, a high-quality, customized graduate program in software engineering, as well as to reflect on its strong and weak points and improve it. More importantly, through intense interaction with the bank's IT management, the project aims to ensure that the academic program matches their actual needs. The technology transfer, the customized training, and the challenges are discussed.
Aprotinin decreases reperfusion injury and allograft dysfunction in clinical lung transplantation.
OBJECTIVE Primary graft dysfunction caused by ischemia-reperfusion injury is one of the most frequent causes of early morbidity and death after lung transplantation. We hypothesized that perioperative management with aprotinin decreases the incidence of allograft reperfusion injury and dysfunction after clinical lung transplantation. METHODS Lung transplant databases of two transplant centers were used to investigate the incidence of severe post-transplant reperfusion injury (PTRI). We examined data of 142 patients who underwent either single lung (81) or bilateral sequential lung (61) transplantation for COPD, idiopathic pulmonary fibrosis, cystic fibrosis, and miscellaneous lung disorders between 1997 and 2000. Thirty patients were excluded due to heart-lung transplantation or lung transplantation for Eisenmenger's disease, re-transplantation, rejection, or deviation from the standardized triple immunosuppression protocol. The data of the remaining 112 patients (control group, 64% single lung, 36% sequential bilateral lung transplants) were compared to the prospectively collected data of 59 lung transplant patients over the last 5 years. All of these 59 patients were managed perioperatively with aprotinin infusion. In addition, Euro-Collins-aprotinin procurement solution (Apt-EC group) was used for 50 donor lungs (58% single lung, 42% sequential bilateral lung transplants). Aprotinin in combination with low-potassium dextran (LPD) flush solution (Apt-LPD group) was used for the procurement of 34 lungs (59% single lung, 41% sequential bilateral lung transplants). The International Society of Heart and Lung Transplantation (ISHLT) grade III injury score was used for the diagnosis of severe PTRI, which is based on a PaO2/FiO2 ratio of less than 200 mmHg. RESULTS Severe reperfusion injury grade III was observed in 18% of the control group. ECMO support was required in 25% of these patients. The associated mortality rate was 40%. Correlating factors for PTRI were donor age greater than 35 years (45%, p=0.01, mean age 38 ± 8) and recipient pulmonary artery systolic pressure greater than 60 mmHg (48%, p<0.05). Lung graft ischemic times (231 ± 14 min) and intraoperative techniques (cardiopulmonary bypass in 12%) were not associated with negative outcomes. Despite longer ischemic times (258 ± 36 min and 317 ± 85 min, respectively) and older donors (42 ± 12 years and 46 ± 12 years, respectively) in the aprotinin patient groups (Apt-EC and Apt-LPD), the incidence of PTRI was markedly lower (6% and 9%, respectively). There was no mortality in the Apt-EC group, and one patient died in the Apt-LPD group due to PTRI-induced graft failure. CONCLUSIONS Severe PTRI increased short-term morbidity and mortality. The incidence of reperfusion injury was not dependent upon the duration of donor organ ischemia. The use of aprotinin in the perioperative patient management in lung transplantation had strong beneficial effects on patient outcomes and decreased the incidence of post-transplant ischemia-reperfusion injury significantly.
An All-Elastomeric Transparent and Stretchable Temperature Sensor for Body-Attachable Wearable Electronics.
A transparent stretchable (TS) gated sensor array with high optical transparency, conformality, and high stretchability of up to 70% is demonstrated. The TS-gated sensor array has high responsivity to temperature changes in objects and human skin. This unprecedented TS-gated sensor array, as well as the integrated platform of the TS-gated sensor with a transparent and stretchable strain sensor, show great potential for application to wearable skin electronics for recognition of human activity.
Visual localization and loop closing using decision trees and binary features
In this paper we present an approach for efficiently retrieving the most similar image, based on point-to-point correspondences, within a sequence that has been acquired through continuous camera movement. Our approach is tailored to standardized binary feature descriptors and exploits the temporal form of the input data to dynamically adapt the search structure. While being straightforward to implement, our method exhibits very fast response times and its Precision/Recall rates compete with state-of-the-art approaches. Our claims are supported by multiple large-scale experiments on publicly available datasets.
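The matching primitive such retrieval builds on is the Hamming distance between binary descriptors; a brute-force sketch (the paper's decision-tree index, which accelerates exactly this search, is not reproduced):

    # Brute-force nearest neighbour for packed binary descriptors.
    import numpy as np

    def hamming_nn(query, database):
        """query: (d,) uint8 packed descriptor; database: (n, d) uint8."""
        xor = np.bitwise_xor(database, query)             # differing bits per byte
        dists = np.unpackbits(xor, axis=1).sum(axis=1)    # popcount per row
        return int(np.argmin(dists)), int(dists.min())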
Information Retrieval with Verbose Queries
Recently, the focus of many novel search applications shifted from short keyword queries to verbose natural language queries. Examples include question answering systems and dialogue systems, voice search on mobile devices, and entity search engines like Facebook's Graph Search or Google's Knowledge Graph. However, the performance of textbook information retrieval techniques for such verbose queries is not as good as that for their shorter counterparts. Thus, effective handling of verbose queries has become a critical factor for adoption of information retrieval techniques in this new breed of search applications. Over the past decade, the information retrieval community has deeply explored the problem of transforming natural language verbose queries using operations like reduction, weighting, expansion, reformulation and segmentation into more effective structural representations. Thus far, however, there has been no coherent and organized tutorial on this topic. In this tutorial, we aim to put together various research pieces of the puzzle, provide a comprehensive and structured overview of various proposed methods, and also list various application scenarios where effective verbose query processing can make a significant difference.
Day-Ahead Deregulated Electricity Market Price Forecasting Using Recurrent Neural Network
This paper proposes a recurrent neural network model, realized as an Elman network, for day-ahead deregulated electricity market price forecasting. In a deregulated market, electricity price is influenced by many factors and exhibits very complicated and irregular fluctuations. Both power producers and consumers need a single compact and robust price forecasting tool for maximizing their profits and utilities. In order to validate the chaotic characteristic of electricity price, an Elman network is modeled. The proposed Elman network is a single compact and robust architecture (without hybridizing the various hard and soft computing models). It has been observed that nearly state-of-the-art forecasting accuracy can be achieved by the Elman network with less computation time. The proposed Elman network approach is compared with autoregressive integrated moving average (ARIMA), mixed model, neural network, wavelet ARIMA, weighted nearest neighbors, fuzzy neural network, hybrid intelligent system, adaptive wavelet neural network, neural networks with wavelet transform, wavelet transform and a hybrid of neural networks and fuzzy logic, wavelet-ARIMA radial basis function neural networks, cascaded neuro-evolutionary algorithm, and wavelet transform, particle swarm optimization, and adaptive-network-based fuzzy inference system approaches to forecast the electricity market of mainland Spain. Finally, the accuracy of the price forecasting is also applied to the electricity market of New York in 2010, which shows the effectiveness of the proposed approach.
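For reference, the forward pass of an Elman network feeds the previous hidden state back in as context at every step; a minimal sketch with conventional tanh/linear choices (not necessarily the paper's exact configuration):

    # Elman network forward pass: hidden state recurs through a context layer.
    import numpy as np

    def elman_forward(x_seq, Wx, Wh, Wy, bh, by):
        """x_seq: (T, n_in). Returns per-step predictions (T, n_out)."""
        h = np.zeros(Wh.shape[0])
        outputs = []
        for x in x_seq:
            h = np.tanh(Wx @ x + Wh @ h + bh)   # context = previous hidden state
            outputs.append(Wy @ h + by)          # linear readout, e.g. next-day price
        return np.array(outputs)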
Globally Trained Handwritten Word Recognizer Using Spatial Representation, Convolutional Neural Networks, and Hidden Markov Models
We introduce a new approach for on-line recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low resolution "annotated images" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network which can be spatially replicated. From the network output, a hidden Markov model produces word scores. The entire system is globally trained to minimize word-level errors.
Measuring McKibben actuator shrinkage using fiber sensor
This paper focuses on sensing McKibben actuator shrinkage using a fiber sensor. Control of McKibben actuators requires sensing their shrinkage without impeding their deformation. We applied electro-conductive yarn, whose resistance is related to its extensional strain, so that measuring the resistance of the yarn can determine its length. The yarn is also bendable and light in weight, and thus does not impede the deformation of a McKibben actuator. A fiber sensor consisting of these electro-conductive yarns was attached to the membrane of a McKibben actuator to measure the distance between both ends of the sensor along the membrane. This paper describes a fiber sensor model that can calculate McKibben actuator shrinkage from the fiber sensor measurement. Based on the model, we investigated the arrangement of the fiber sensor so that the fiber extensional stress remains within its allowable range. We then show a prototype of the fiber sensor that was used to measure McKibben actuator shrinkage. Experimentally, we found that the shrinkage of an actuator could be measured to within 20% error of its length.
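A hypothetical calibration mapping from yarn resistance to length, assuming a linear resistance-strain relation; the paper's geometric model relating sensor length to actuator shrinkage is more involved and is not reproduced here.

    # Hypothetical resistance-to-length calibration for the conductive yarn.
    def yarn_length(R, R0, L0, k):
        """R: measured resistance; R0, L0: rest values; k: assumed gauge slope."""
        strain = (R - R0) / (k * R0)
        return L0 * (1.0 + strain)

    def shrinkage_ratio(L, L_rest):
        return 1.0 - L / L_rest   # fraction of actuator length contracted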
Fuzzy logic control of buck converter for photo voltaic emulator
This paper analyses the digital simulation of a buck converter to emulate a photovoltaic (PV) system, with focus on fuzzy logic control of the buck converter. A PV emulator is a DC-DC converter (a buck converter in the present case) having the same electrical characteristics as a PV panel. The emulator enables realistic analysis of a PV system in settings where actual PV panels would produce inconsistent results due to variations in weather conditions. The paper describes the application of fuzzy algorithms to the control of dynamic processes. The complete system is modelled in the MATLAB® Simulink SimPowerSystems software package. The results obtained from the simulation studies are presented, and the steady-state and dynamic stability of the PV emulator system is discussed.
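A minimal Sugeno-style fuzzy step for adjusting the converter duty cycle; the membership functions, rule base, and increments are illustrative assumptions, not the paper's tuned controller.

    # Minimal fuzzy control step: fuzzify error and its change, fire a 3x3 rule
    # base with singleton consequents, defuzzify by weighted average.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_duty_step(error, d_error):
        e = {"N": tri(error, -2, -1, 0), "Z": tri(error, -1, 0, 1), "P": tri(error, 0, 1, 2)}
        de = {"N": tri(d_error, -2, -1, 0), "Z": tri(d_error, -1, 0, 1), "P": tri(d_error, 0, 1, 2)}
        # Consequents: crisp duty-cycle increments (assumed values).
        rules = {("N", "N"): -0.02, ("N", "Z"): -0.01, ("N", "P"): 0.0,
                 ("Z", "N"): -0.01, ("Z", "Z"): 0.0,  ("Z", "P"): 0.01,
                 ("P", "N"): 0.0,  ("P", "Z"): 0.01, ("P", "P"): 0.02}
        num = sum(min(e[a], de[b]) * dd for (a, b), dd in rules.items())
        den = sum(min(e[a], de[b]) for (a, b) in rules) or 1.0
        return num / den    # weighted-average defuzzification of the increment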
Topology control in mobile Ad Hoc networks with cooperative communications
Cooperative communication has received tremendous interest for wireless networks. Most existing works on cooperative communications are focused on link-level physical layer issues. Consequently, the impacts of cooperative communications on network-level upper layer issues, such as topology control, routing and network capacity, are largely ignored. In this article, we propose a Capacity-Optimized Cooperative (COCO) topology control scheme to improve the network capacity in MANETs by jointly considering both upper layer network capacity and physical layer cooperative communications. Through simulations, we show that physical layer cooperative communications have significant impacts on the network capacity, and the proposed topology control scheme can substantially improve the network capacity in MANETs with cooperative communications.
Good stability but high periprosthetic bone mineral loss and late-occurring periprosthetic fractures with use of uncemented tapered femoral stems in patients with a femoral neck fracture
BACKGROUND AND PURPOSE We previously evaluated a new uncemented femoral stem designed for elderly patients with a femoral neck fracture and found stable implant fixation and good clinical results up to 2 years postoperatively, despite substantial periprosthetic bone mineral loss. We now present the medium-term follow-up results from this study. PATIENTS AND METHODS In this observational prospective cohort study, we included 50 patients (mean age 81 (70-92) years) with a femoral neck fracture. All patients underwent surgery with a cemented cup and an uncemented stem specifically designed for fracture treatment. Outcome variables were migration of the stem measured with radiostereometry (RSA) and periprosthetic change in bone mineral density (BMD), measured with dual-energy X-ray absorptiometry (DXA). Hip function and health-related quality of life were assessed using the Harris hip score (HHS) and the EuroQol-5D (EQ-5D). DXA and RSA data were collected at regular intervals up to 4 years, and data concerning reoperations and hip-related complications were collected during a mean follow-up time of 5 (0.2-7.5) years. RESULTS At 5 years, 19 patients had either passed away or were unavailable for further participation and 31 could be followed up. Of the original 50 patients, 6 patients had suffered a periprosthetic fracture, all of them sustained after the 2-year follow-up. In 19 patients, we obtained complete RSA and DXA data and no component had migrated after the 2-year follow-up. We also found a continuous total periprosthetic bone loss amounting to a median of -19% (-39 to 2). No changes in HHS or EQ-5D were observed during the follow-up period. INTERPRETATION In this medium-term follow-up, the stem remained firmly fixed in bone despite considerable periprosthetic bone mineral loss. However, this bone loss might explain the high number of late-occurring periprosthetic fractures. Based on these results, we would not recommend uncemented femoral stems for the treatment of femoral neck fractures in the elderly.
STREAMCUBE: Hierarchical spatio-temporal hashtag clustering for event exploration over the Twitter stream
What is happening around the world? When and where? Mining the geo-tagged Twitter stream makes it possible to answer the above questions in real time. Although a single tweet can be short and noisy, proper aggregations of tweets can provide meaningful results. In this paper, we focus on hierarchical spatio-temporal hashtag clustering techniques. Our system has the following features: (1) Exploring events (hashtag clusters) with different space granularity. Users can zoom in and out on maps to find out what is happening in a particular area. (2) Exploring events with different time granularity. Users can choose to see what is happening today or in the past week. (3) An efficient single-pass algorithm for event identification, which provides human-readable hashtag clusters. (4) Efficient event ranking which aims to find burst events and localized events given a particular region and time frame. To support aggregation with different space and time granularity, we propose a data structure called STREAMCUBE, which is an extension of the data cube structure from the database community with spatial and temporal hierarchy. To achieve high scalability, we propose a divide-and-conquer method to construct the STREAMCUBE. To support flexible event ranking with different weights, we propose a top-k-based index. Different efficient methods are used to speed up event similarity computations. Finally, we have conducted extensive experiments on real Twitter data. Experimental results show that our framework can provide meaningful results with high scalability.
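A minimal sketch of the spatio-temporal bucketing idea, assuming fixed daily time buckets and one-degree geographic cells (STREAMCUBE's hierarchy, single-pass clustering, and ranking index are not reproduced):

    # Bucket tweets by (day, geo cell); coarser views aggregate finer cells.
    from collections import defaultdict, Counter

    cube = defaultdict(Counter)   # (day, lat_cell, lon_cell) -> hashtag counts

    def add_tweet(day, lat, lon, hashtags, cell_deg=1.0):
        cell = (day, int(lat / cell_deg), int(lon / cell_deg))
        cube[cell].update(hashtags)

    def top_hashtags(day, lat_range, lon_range, k=10):
        agg = Counter()
        for (d, la, lo), counts in cube.items():
            if d == day and lat_range[0] <= la <= lat_range[1] and lon_range[0] <= lo <= lon_range[1]:
                agg.update(counts)
        return agg.most_common(k)   # zoomed-out views aggregate finer cells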
Social Comparison 2.0: Examining the Effects of Online Profiles on Social-Networking Sites
Through features such as profile photographs or the personal vita, online profiles on social-networking sites offer an ideal basis for social comparison processes. By looking at the profile photograph, the user gains an impression of a person's physical attractiveness, and the person's vita shows which career path he or she is pursuing. Against the background of Festinger's Social Comparison Theory, the focus of this research is on the effects of online profiles on their recipients. Qualitative interviews (N = 12) and two online experiments were conducted in which virtual online profiles of either physically attractive or unattractive persons (N = 93) and profiles of users with either high or low occupational attainment (N = 103) were presented to the participants. Although the qualitative interviews did not initially suggest that online profiles constitute a basis for comparison processes, the results of the experiments proved otherwise. The first study indicates that recipients have a more negative body image after viewing the profiles of attractive users than recipients who were shown the less attractive profile pictures. Male participants in the second study who were confronted with profiles of successful males showed a higher perceived discrepancy between their current career status and an ideal vita than male participants who looked at profiles of less successful persons.
Automatic audio sentiment extraction using keyword spotting
Most existing methods for audio sentiment analysis use automatic speech recognition to convert speech to text and feed the textual output to text-based sentiment classifiers. This study shows that such methods may not be optimal and proposes an alternative architecture in which a single keyword spotting system (KWS) is developed for sentiment detection. In the new architecture, the text-based sentiment classifier is used to automatically determine the most powerful sentiment-bearing terms, which are then used as the term list for KWS. To obtain a compact yet powerful term list, a new method is proposed to reduce the text-based sentiment classifier's model complexity while maintaining good classification accuracy. Finally, the term-list information is used to build a more focused language model for the speech recognition system. The result is a single integrated solution focused on the vocabulary that directly impacts classification. The proposed solution is evaluated on videos from YouTube.com and the UT-Opinion corpus (which contains naturalistic opinionated audio collected in real-world conditions). Our experimental results show that the KWS-based system significantly outperforms the traditional architecture on difficult practical tasks.
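As a rough illustration of the pipeline, the sketch below uses a linear text classifier's largest-magnitude weights to derive a compact sentiment-bearing term list, then scores an utterance from spotted keywords alone. The toy corpus, term-list size, and scoring rule are assumptions for illustration; the paper's actual classifier-pruning method and KWS system are more involved.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training corpus standing in for the text-based sentiment classifier.
texts = ["great movie loved it", "awful boring film", "loved the acting",
         "terrible plot awful pacing", "great fun", "boring and terrible"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Keep only the most sentiment-bearing terms as the KWS term list
# (term-list size of 4 is an arbitrary illustrative choice).
weights = clf.coef_[0]
vocab = np.array(vec.get_feature_names_out())
top = np.argsort(-np.abs(weights))[:4]
term_list = {vocab[i]: weights[i] for i in top}
print("KWS term list:", term_list)

def kws_sentiment(spotted_keywords):
    """Score an utterance from spotted keywords alone (no full transcript)."""
    score = sum(term_list.get(w, 0.0) for w in spotted_keywords)
    return "positive" if score > 0 else "negative"

print(kws_sentiment(["loved", "acting"]))  # keywords a KWS system might emit
```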
Priming of semantic autobiographical knowledge: A case study of retrograde amnesia
The case of a 36-year-old man who suffers dense retrograde and anterograde amnesia as a result of a closed-head injury that caused extensive damage to his left fronto-parietal and right parieto-occipital lobes is described. Patient K.C. has normal intelligence and relatively well-preserved perceptual, linguistic, short-term memory, and reasoning abilities. He possesses some fragmentary general knowledge about his autobiographical past, but he does not remember a single personal event or happening from any time of his life. He has some preserved expert knowledge related to the work he did for 3 years before the onset of amnesia, although he has no personal recollections from that period. Some features of K.C.'s retrograde amnesia can be interpreted in terms of the distinction between episodic and semantic memory, and in terms of the distinction between episodic and semantic autobiographical knowledge. K.C.'s semantic knowledge, but not his episodic knowledge, showed progressive improvement, or priming, in the course of the investigation.
RANDy-A True-Random Generator Based On Radioactive Decay
This paper presents a physical random number generator, mainly for cryptographic applications, based on the alpha decay of Americium-241. A simple and low-cost implementation is shown that detects the decay events of a radioactive source often found in common household smoke detectors. Three different algorithms for the extraction of random bits from the exponentially distributed impulses are discussed. In the concrete application, a speed-optimized method was chosen to obtain a reasonably high data rate from a moderate radiation source (0.1 μCi). To the author's best knowledge, this technique has not been applied before in the context of radiation-based random generators. A tentative application of statistical test suites indicates a high quality of the data delivered by the device.
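For intuition, here is one classic bit-extraction scheme for exponentially distributed impulse times: compare successive inter-arrival intervals, which yields unbiased bits because for i.i.d. continuous intervals P(t1 < t2) = 1/2. This is a sketch with simulated decay times; it is not necessarily the speed-optimized method the authors chose, and a real device would read hardware detector timestamps.

```python
import random

# Simulated detector impulses: i.i.d. exponential inter-arrival times stand in
# for alpha-decay events (the rate and seed are arbitrary assumptions).
def decay_times(n, rate=5.0, seed=42):
    rng = random.Random(seed)
    t = 0.0
    for _ in range(n):
        t += rng.expovariate(rate)
        yield t

# Classic extraction: compare successive inter-arrival intervals. For i.i.d.
# continuous intervals, P(t1 < t2) = 1/2, so each comparison yields an
# unbiased bit; exact ties (probability zero in theory) are discarded.
def bits_from_intervals(times):
    times = list(times)
    intervals = [b - a for a, b in zip(times, times[1:])]
    bits = []
    for t1, t2 in zip(intervals[::2], intervals[1::2]):
        if t1 != t2:
            bits.append(0 if t1 < t2 else 1)
    return bits

bits = bits_from_intervals(decay_times(2001))
print(bits[:32], f"ones fraction: {sum(bits)/len(bits):.3f}")
```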
Hacking the Nintendo Wii Remote
Since its introduction, the Nintendo Wii remote has become one of the world's most sophisticated and common input devices. Its impressive capability, combined with its low cost and high degree of accessibility, makes it an ideal platform for exploring a variety of interaction research concepts. The author describes the technology inside the Wii remote, existing interaction techniques, what's involved in creating custom applications, and several projects ranging from multi-object tracking to spatial augmented reality that challenge the way its developers meant it to be used.
Chapter 52 – 1979 Exaptations
Nearly all phenotypic features can be understood as adaptations to particular evolutionary challenges. Gould and Lewontin (1979) derisively referred to this dogma as the Panglossian paradigm of the adaptationist programme in evolutionary biology.
Exploring the Use of Autoencoders for Botnets Traffic Representation
Botnets are a significant threat to cyber security. Compromised (malicious) hosts in a network have, of late, been detected by machine learning from hand-crafted features sourced directly from different types of network logs. Our interest is in automating feature engineering while examining flow data from hosts labeled as malicious or not. Automatically expressing the full temporal character and dependencies of flow data requires time windowing and a very high-dimensional feature set, in our case 30,000 features. To reduce dimensionality, we generate a lower-dimensional embedding (64 dimensions) via autoencoding; this improves detection. We next increase the volume of flows originating from the hosts in our dataset known to be malicious or not by injecting noise mixed in from background traffic. The resulting lower (metaphorical) signal-to-noise ratio makes the presence of a bot even more challenging to detect, so we resort to a filter encoder or an off-the-shelf denoising autoencoder. Both the filter encoder and the denoising autoencoder improve detection compared with hand-crafted features and are comparable in performance to the plain autoencoder.
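A minimal sketch of the dimensionality-reduction step, using PyTorch: an autoencoder compresses the high-dimensional windowed flow features into a 64-dimensional embedding that a downstream bot classifier would consume. The layer widths, toy batch, and training loop are illustrative assumptions; a denoising variant would simply corrupt the input while reconstructing the clean target.

```python
import torch
import torch.nn as nn

# Input width matches the ~30,000 windowed flow features described above;
# the 512-unit hidden layer is an illustrative choice.
IN_DIM, CODE_DIM = 30_000, 64

class FlowAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(IN_DIM, 512), nn.ReLU(),
                                     nn.Linear(512, CODE_DIM))
        self.decoder = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                                     nn.Linear(512, IN_DIM))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = FlowAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(8, IN_DIM)  # a toy batch standing in for flow-feature windows
for _ in range(5):         # a few illustrative training steps
    recon, _ = model(x)
    loss = loss_fn(recon, x)  # reconstruction loss drives the embedding
    opt.zero_grad()
    loss.backward()
    opt.step()

# The 64-d embeddings would then feed a downstream bot/not-bot classifier.
embeddings = model.encoder(x).detach()
print(embeddings.shape)  # torch.Size([8, 64])
```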
Management of splenic abscess: report on 16 cases from a single center.
OBJECTIVES Splenic abscess is an uncommon disease, with a reported incidence of 0.14-0.7% in autopsy series. The best treatment option remains unclear. We report our experience of percutaneous drainage of splenic abscesses under ultrasound (US) guidance. METHODS From 1979 to 2005, 16 consecutive patients (12 male and 4 female; mean age 39.9 years, range 16-72 years) were diagnosed with splenic abscess by means of US and were treated with medical therapy alone or combined with US-guided percutaneous aspiration or catheter drainage. RESULTS Ten of the 16 patients had bacterial abscesses (including one case of tubercular abscess), two had an amebic abscess, and four had fungal abscesses. Seven of the ten patients with bacterial abscesses were successfully treated with fine-needle aspiration alone; one patient was successfully treated with fine-needle aspiration for one abscess and catheter drainage for another; and one patient, who subsequently required a splenectomy for abdominal trauma, successfully underwent percutaneous catheter drainage alone. Four patients with fungal lesions were treated with medical therapy alone, and two patients later required a splenectomy. One patient with a bacterial abscess due to endocarditis was treated with medical therapy alone, and his recovery was uneventful. CONCLUSIONS US-guided percutaneous aspiration of splenic abscesses is a safe and effective procedure. It can be used as a bridge to surgery in patients who are critically ill or have several comorbidities. Percutaneous aspiration may allow complete non-operative healing of splenic abscesses or temporize patients at risk for surgery.
Three Options Are Optimal for Multiple-Choice Items : A Meta-Analysis of 80 Years of Research
Multiple-choice items are a mainstay of achievement testing. Adequately covering the content domain and certifying achievement proficiency with meaningful, precise scores requires many high-quality items. More 3-option items than 4- or 5-option items can be administered per unit of testing time, improving content coverage without detrimental effects on the psychometric quality of test scores. Researchers have endorsed 3-option items with empirical evidence for over 80 years; those results are synthesized here in an effort to unify this endorsement and encourage its adoption.
Sentiment Analysis of Big Data: Methods, Applications, and Open Challenges
With the development of IoT technologies and the massive adoption of social media tools and applications, new doors of opportunity have been opened for using data analytics to gain meaningful insights from unstructured information. In the era of big data, opinion mining and sentiment analysis (OMSA) has been applied as a useful way of categorizing opinions by sentiment and, more generally, of evaluating the mood of the public. Moreover, different OMSA techniques have been developed over the years on different data sets and applied in various experimental settings. In this regard, this paper presents a comprehensive systematic literature review that discusses both the technical aspects of OMSA (techniques and types) and its non-technical aspects in the form of application areas. Furthermore, the paper highlights the technical challenges in the development of OMSA techniques as well as the non-technical challenges arising mainly from its applications. These challenges are presented as future directions for research.
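As a concrete, deliberately tiny example of the sentiment-categorization task such surveys cover, the sketch below classifies posts with a hand-made polarity lexicon. The lexicon and posts are illustrative assumptions; real OMSA systems use far richer lexicons or learned models.

```python
# Minimal lexicon-based sentiment categorization (illustrative only).
LEXICON = {"good": 1, "great": 1, "love": 1,
           "bad": -1, "terrible": -1, "hate": -1}

def sentiment(post):
    """Sum word polarities and map the score to a sentiment class."""
    score = sum(LEXICON.get(w, 0) for w in post.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

posts = ["I love this phone great battery",
         "terrible service bad support",
         "it arrived today"]
for p in posts:
    print(sentiment(p), "-", p)
```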
A case of onychomatricoma: Classic clinical, dermoscopic, and nail-clipping histologic findings.
Heteropagus (parasitic) twins: a review.
Heteropagus, or "parasitic," twins are asymmetric conjoined twins in which the tissues of a severely defective twin (parasite) are dependent on the cardiovascular system of the other, largely intact twin (autosite) for survival. The estimated incidence of heteropagus twins is approximately 1 per 1 million live births. Isolated case reports comprise most of published work on this rare congenital anomaly. In the past, review articles have focused narrowly on one particular anatomical subtype of parasitic twin and/or on the anatomicopathology observed. Here, we present the epidemiology, proposed pathoembryogenic origins, anatomical abnormalities, management, and outcomes of the wide array of heteropagus twins described in the English language literature.
The psychology of cosmetic surgery: a review and reconceptualization.
This article discusses the psychology of cosmetic surgery. A review of the research on the psychological characteristics of individuals who seek cosmetic surgery yielded contradictory findings. Interview-based investigations revealed high levels of psychopathology in cosmetic surgery patients, whereas studies that used standardized measurements reported far less disturbance. It is difficult to fully resolve the discrepancy between these two sets of findings. We believe that investigating the construct of body image in cosmetic surgery patients will yield more useful findings. Thus, we propose a model of the relationship between body image dissatisfaction and cosmetic surgery and outline a research agenda based upon the model. Such research will generate information that is useful to the medical and mental health communities and, ultimately, the patients themselves.
Mental health and gender dysphoria: A review of the literature.
Studies investigating the prevalence of psychiatric disorders among trans individuals have identified elevated rates of psychopathology. Research has also provided conflicting psychiatric outcomes following gender-confirming medical interventions. This review identifies 38 cross-sectional and longitudinal studies describing prevalence rates of psychiatric disorders and psychiatric outcomes, pre- and post-gender-confirming medical interventions, for people with gender dysphoria. It indicates that, although the levels of psychopathology and psychiatric disorders in trans people attending services at the time of assessment are higher than in the cis population, they do improve following gender-confirming medical intervention, in many cases reaching normative values. The main Axis I psychiatric disorders were found to be depression and anxiety disorder. Other major psychiatric disorders, such as schizophrenia and bipolar disorder, were rare and were no more prevalent than in the general population. There was conflicting evidence regarding gender differences: some studies found higher psychopathology in trans women, while others found no differences between gender groups. Although many studies were methodologically weak, and included people at different stages of transition within the same cohort of patients, overall this review indicates that trans people attending transgender health-care services appear to have a higher risk of psychiatric morbidity (that improves following treatment), and thus confirms the vulnerability of this population.