Shoulder pain in swimmers: a 12-month prospective cohort study of incidence and risk factors.
OBJECTIVE To investigate shoulder pain incidence rates and selected risk factors for shoulder pain in competitive swimmers. DESIGN 12-month prospective cohort study. SETTING Five swimming clubs in Melbourne, Australia. PARTICIPANTS 74 (37 M, 37 F) competitive swimmers ranging in age from 11 to 27 years and performing at least five swim sessions per week. ASSESSMENT OF RISK FACTORS Swimmers completed a baseline questionnaire regarding demographics, anthropometric features, swimming characteristics and training and injury history. Active shoulder internal (IR) and external rotation (ER) range of motion and passive joint laxity were measured. MAIN OUTCOME MEASUREMENTS Shoulder pain was self-reported over 12 months with significant interfering shoulder pain (SIP) defined as pain interfering (causing cessation or modification) with training or competition, or progression in training. A significant shoulder injury (SSI) was any SIP episode lasting for at least 2 weeks. RESULTS 28/74 (38%) participants reported SIP while 17/74 (23%) reported SSI. Exposure-adjusted incidence rates were 0.3 injuries and 0.2 injuries per 1000 swim km for SIP and SSI, respectively. Swimmers with high and low ER range were at 8.1 (95% CI: 1.5, 42.0) and 12.5 (95% CI: 2.5, 62.4) times greater risk of sustaining a subsequent SIP, respectively, and 35.4 (95% CI: 2.8, 441.4) and 32.5 (95% CI: 2.7, 389.6) times greater risk of sustaining an SSI, respectively, than those with mid-range ER. Similarly, swimmers with a history of shoulder pain were 4.1 (95% CI: 1.3, 13.3) and 11.3 (95% CI: 2.6, 48.4) times more likely to sustain a SIP and SSI, respectively. CONCLUSION Shoulder pain is common in competitive swimmers. Preventative programs should be particularly directed at those swimmers identified as being at risk of shoulder pain.
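The exposure-adjusted rates above come from dividing injury counts by total swim distance. A minimal sketch; the cohort's total kilometres are not reported in the abstract, so the exposure figure below is hypothetical, chosen only to be consistent with the reported rates:

```python
def incidence_per_1000_km(n_injuries, total_swim_km):
    """Exposure-adjusted incidence rate: injuries per 1000 km swum."""
    return 1000.0 * n_injuries / total_swim_km

# Hypothetical total exposure of ~93,000 cohort swim-km reproduces the
# reported rates of 0.3 (SIP) and 0.2 (SSI) injuries per 1000 km.
sip_rate = round(incidence_per_1000_km(28, 93000), 1)  # 0.3
ssi_rate = round(incidence_per_1000_km(17, 93000), 1)  # 0.2
```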
A novel feature selection algorithm for text categorization
With the development of the web, large numbers of documents are available on the Internet, and digital libraries, news sources and companies' internal data continue to grow rapidly. Automatic text categorization has therefore become increasingly important for dealing with massive data. However, a major problem in text categorization is the high dimensionality of the feature space. At present there are many methods for text feature selection. To improve the performance of text categorization, we present another method of dealing with text feature selection. Our study is based on Gini index theory, and we design a novel Gini index algorithm to reduce the high dimensionality of the feature space. A new measure function of the Gini index is constructed and adapted to text categorization. Experimental results show that our improved Gini index performs better than other feature selection methods.
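The abstract does not give the authors' exact measure function; one commonly cited text-oriented Gini variant scores a term w by summing P(w|c)² · P(c|w)² over classes c, so terms concentrated in few classes score high. A minimal sketch under that assumption (the corpus and all names are illustrative):

```python
from collections import Counter, defaultdict

def gini_feature_scores(docs, labels):
    """Score each term with a Gini-index-style purity measure:
    Gini(w) = sum over classes c of P(w|c)^2 * P(c|w)^2.
    Higher scores mean the term is concentrated in fewer classes."""
    class_doc_count = Counter(labels)        # documents per class
    term_class_df = defaultdict(Counter)     # term -> {class: document frequency}
    for tokens, label in zip(docs, labels):
        for term in set(tokens):
            term_class_df[term][label] += 1

    scores = {}
    for term, per_class in term_class_df.items():
        total_df = sum(per_class.values())   # documents containing the term
        scores[term] = sum(
            (df / class_doc_count[c]) ** 2 * (df / total_df) ** 2
            for c, df in per_class.items()
        )
    return scores

# Tiny illustrative corpus; feature selection keeps only the top-k terms.
docs = [["cheap", "viagra", "offer"], ["meeting", "agenda"],
        ["cheap", "deal"], ["project", "meeting"]]
labels = ["spam", "ham", "spam", "ham"]
scores = gini_feature_scores(docs, labels)
top_features = sorted(scores, key=scores.get, reverse=True)[:4]
```

Dimensionality reduction then amounts to keeping only `top_features` as the document representation before training the classifier.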
Rectangling panoramic images via warping
Stitched panoramic images mostly have irregular boundaries. Artists and common users generally prefer rectangular boundaries, which can be obtained through cropping or image completion techniques. In this paper, we present a content-aware warping algorithm that generates rectangular images from stitched panoramic images. Our algorithm consists of two steps. The first local step is mesh-free and preliminarily warps the image into a rectangle. With a grid mesh placed on this rectangle, the second global step optimizes the mesh to preserve shapes and straight lines. In various experiments we demonstrate that the results of our approach are often visually plausible, and the introduced distortion is often unnoticeable.
Maximum Performance Computing with Dataflow Engines
Multidisciplinary dataflow computing is a powerful approach to scientific computing that has led to orders-of-magnitude performance improvements for a wide range of applications.
Performance analysis for automated gait extraction and recognition in multi-camera surveillance
Many studies have confirmed that gait analysis can be used as a new biometric. In this research, gait analysis is deployed for people identification in multi-camera surveillance scenarios. We present a new method for viewpoint-independent markerless gait analysis that does not require camera calibration and works with a wide range of walking directions. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real video sequences, with subjects walking freely along different walking directions, have been performed. Since the choice of the cameras’ characteristics is a key point in the development of a smart surveillance system, the performance of the proposed approach is measured with respect to different video properties: spatial resolution, frame rate, data compression and image quality. The obtained results show that markerless gait analysis can be achieved without any knowledge of the camera’s position or the subject’s pose. The extracted gait parameters allow recognition of people walking from different views, with a mean recognition rate of 92.2%, and confirm that gait can be effectively used for subject identification in a multi-camera surveillance scenario.
Cognitive effectiveness of visual instructional design languages
The introduction of learning technologies into education is making the design of courses and instructional materials an increasingly complex task. Instructional design languages are identified as conceptual tools for achieving more standardized and, at the same time, more creative design solutions, as well as enhancing communication and transparency in the design process. In this article we discuss differences in cognitive aspects of three visual instructional design languages (VIDLs): E2ML, PoEML, and coUML, based on user evaluation. Cognitive aspects are of relevance for learning a design language, creating models with it, and understanding models created using it. The findings should enable language constructors to improve the usability of VIDLs in the future. The paper concludes with directions for how future research on VIDLs could strengthen their value and enhance their actual use by educators and designers by synthesizing existing efforts into a unified modeling approach.
Hegel's critique of Kant
In this paper we present a reconstruction of Hegel's critique of Kant. We try to show the congruence of that critique in both theoretical and practical philosophy. We argue that this congruence is to be found in Hegel's criticism of Kant's hylemorphism in his theoretical and practical philosophy. Hegel is much more sympathetic to Kant's response to the distinction between matter and form in his theoretical philosophy and he credits Kant with ‘discovering’ here that thinking is an activity that always takes place within a greater whole. He, however, argues that the consequences of this are much more significant than Kant suspects and that, most importantly, the model of cognition in which thought (form) confronts something non-thought (matter) is unsustainable. This leads to Hegel's appropriation of Kantian reflective judgements, arguing that the greater whole in which thinking takes place is a socially shared set of meanings, something resembling what Kant calls a sensus communis. From here, it is not far...
A Review of Wideband Wide-Angle Scanning 2-D Phased Array and Its Applications in Satellite Communication
In this review, research progress on wideband wide-angle scanning two-dimensional phased arrays is summarized. The importance of wideband and wide-angle scanning characteristics for satellite communication is discussed. Issues such as grating lobe avoidance, active reflection coefficient suppression and gain fluctuation reduction are emphasized. In addition, techniques to address these issues and methods to realize wideband wide-angle scanning phased arrays are reviewed.
Self-Compassion and Psychological Well-Being in Older Adults
Self-compassion refers to a kind and nurturing attitude toward oneself during situations that threaten one’s adequacy, while recognizing that being imperfect is part of being human. Although growing evidence indicates that self-compassion is related to a wide range of desirable psychological outcomes, little research has explored self-compassion in older adults. The present study investigated the relationships between self-compassion and theoretically based indicators of psychological adjustment, as well as the moderating effect of self-compassion on self-rated health. A sample of 121 older adults recruited from a community library and a senior day center completed self-report measures of self-compassion, self-esteem, psychological well-being, anxiety, and depression. Results indicated that self-compassion is positively correlated with age, self-compassion is positively and uniquely related to psychological well-being, and self-compassion moderates the association between self-rated health and depression. These results suggest that interventions designed to increase self-compassion in older adults may be a fruitful direction for future applied research.
Impact Models to Assess Regional Acidification
Sensitive receptors such as soil and fresh bodies of water are at the end of a long chain of events in the process of regional acidification. This chain begins thousands of kilometers upwind at the emitters of acidifying pollutants. The topics covered in this book are important in the study of regional acidification for two reasons. First, it is important to assess the sensitivity of terrestrial and aquatic ecosystems to the deposition of acidifying pollutants. If the sensitivity of an ecosystem is known, then international control strategies can be developed to reduce deposition in the receptor areas of greatest importance. This is an important factor in designing the most effective strategies because of the very high costs of reducing emissions of acidifying pollutants. Second, it is important to be able to predict changes in ecosystems for decades into the future, whether it be an improvement owing to decreases in acidifying emissions or, alas, a further deterioration because control strategies are nonexistent or inadequate. In either event, it is important to be able to judge the results of our actions. Decision makers tend to be mistrustful of models unless they can judge their reliability. The application and testing of the models in Part III of this book cover, therefore, an important facet of model building. This book is an ideal companion to another book that is forthcoming from the Transboundary Air Pollution Project at IIASA: The RAINS Model of Acidification: Science and Strategies in Europe. The latter book is a description of the development and use of the Regional Acidification INformation and Simulation (RAINS) model, an integrated assessment model for developing and determining control strategies to reduce regional acidification in Europe. Much of the research described in this book forms part of the foundation of the RAINS model. 
These two books cover a great deal of the present knowledge about assessing and dealing with a very important environmental problem in Europe: regional acidification.
Shorter adult stature increases the impact of risk factors for cognitive impairment: a comparison of two Nordic twin cohorts.
We analyzed the association between mean height and old-age cognition in two Nordic twin cohorts with different childhood living conditions. The cognitive performance of 4720 twin individuals from Denmark (mean age 81.6 years, SD = 4.59) and Finland (mean age 74.4 years, SD = 5.26) was measured using validated cognitive screens. Taller height was associated with better cognitive performance in Finland (beta-estimates 0.18 SD/10 cm, p < .001, for men and 0.13 SD, p = .008, for women), but this association was not significant in Denmark (beta-estimates 0.0093 SD, p = .16, for men and 0.0075 SD, p = .016, for women) when adjusted for age and education/social class. Among Finnish participants, higher variability of cognitive performance within the shorter height quintiles was observed. Analysis using gene-environment interaction models showed that environmental factors exerted a greater impact on cognitive performance in shorter participants, whereas in taller participants it was explained mainly by genetic factors. Our results suggest that shorter participants with childhood adversity are more vulnerable to environmental risk factors for cognitive impairment.
COMPUTATION OF CONDITIONAL PROBABILITY STATISTICS BY 8-MONTH-OLD INFANTS
Abstract— A recent report demonstrated that 8-month-olds can segment a continuous stream of speech syllables, containing no acoustic or prosodic cues to word boundaries, into wordlike units after only 2 min of listening experience (Saffran, Aslin, & Newport, 1996). Thus, a powerful learning mechanism capable of extracting statistical information from fluent speech is available early in development. The present study extends these results by documenting the particular type of statistical computation—transitional (conditional) probability—used by infants to solve this word-segmentation task. An artificial language corpus, consisting of a continuous stream of trisyllabic nonsense words, was presented to 8-month-olds for 3 min. A postfamiliarization test compared the infants' responses to words versus part-words (trisyllabic sequences spanning word boundaries). The corpus was constructed so that test words and part-words were matched in frequency, but differed in their transitional probabilities. Infants showed reliable discrimination of words from part-words, thereby demonstrating rapid segmentation of continuous speech into words on the basis of transitional probabilities of syllable pairs. Many aspects of the patterns of human languages are signaled in the speech stream by what is called distributional evidence, that is, regularities in the relative positions and order of elements over a corpus of utterances (Bloomfield, 1933; Maratsos & Chalkley, 1980). This type of evidence, along with linguistic theories about the characteristics of human languages, is what comparative linguists use to discover the structure of exotic languages (Harris, 1951). Similarly, this type of evidence, along with tendencies to perform certain kinds of analyses on language input (Chomsky, 1957), could be used by human language learners to acquire their native languages.
However, using such evidence would require rather complex distributional and statistical computations, and surprisingly little is known about the abilities of human infants and young children to perform these computations. By using the term computation, we do not mean, of course, that infants are consciously performing a mathematical calculation, but rather that they might be sensitive to and able to store quantitative aspects of distributional information about a language corpus. Recently, we have begun studying this problem by investigating the abilities of human learners to use statistical information to discover word boundaries. Words are known to vary dramatically from one language to another, so finding the words of a language is clearly a problem that must involve learning from the linguistic environment. Moreover, the beginnings and ends of the sequences of sounds that form words in a …
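The transitional-probability computation described above (TP of Y given X = frequency of the pair XY / frequency of X) can be sketched on a synthetic familiarization stream; the syllables, word inventory, and corpus size below are illustrative, not the actual stimuli:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """TP(Y | X) = frequency of pair XY / frequency of X, over a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Four trisyllabic nonsense words concatenated in random order with no
# pauses, mimicking the familiarization corpus (syllables are made up).
words = [["tu", "pi", "ro"], ["go", "la", "bu"],
         ["bi", "da", "ku"], ["pa", "do", "ti"]]
rng = random.Random(0)
stream = []
for _ in range(300):
    stream.extend(rng.choice(words))

tp = transitional_probabilities(stream)
# Within a word, TP is 1.0 (e.g. "tu" -> "pi"); across a word boundary it
# falls toward 1/4 (e.g. "ro" -> "go"), the dip that signals a boundary.
```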
Two Methods for Semi-automatic Image Segmentation based on Fuzzy Connectedness and Watersheds
At the present time, one of the best methods for semiautomatic image segmentation seems to be the approach based on the fuzzy connectedness principle. First, we identify some deficiencies of this approach and propose a way to improve it, through the introduction of competitive learning. Second, we propose a different approach, based on watersheds. We show that the competitive fuzzy connectedness-based method outperforms the noncompetitive variant and generally (but not always) outperforms the watershed-based approach. The competitive variant of the fuzzy connectedness-based method can be a good alternative to the watersheds.
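The fuzzy connectedness principle mentioned above assigns every pixel a strength of connectedness to a seed: the maximum over all paths of the minimum pairwise affinity along the path. A minimal graph-based sketch with a Dijkstra-style max-min search (the affinity values are illustrative, and the paper's competitive-learning variant is not reproduced here):

```python
import heapq

def fuzzy_connectedness(affinity, seed, n):
    """Strength of connectedness from `seed` to every node: the maximum
    over paths of the minimum edge affinity along the path, computed with
    a Dijkstra-style best-first search (max-min instead of sum-min).
    affinity: dict mapping directed edge (u, v) -> value in [0, 1]."""
    conn = [0.0] * n
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_c, u = heapq.heappop(heap)
        c = -neg_c
        if c < conn[u]:
            continue                      # stale heap entry
        for (a, b), aff in affinity.items():
            if a == u:
                new_c = min(c, aff)       # path strength = weakest link
                if new_c > conn[b]:
                    conn[b] = new_c
                    heapq.heappush(heap, (-new_c, b))
    return conn

# Four pixels in a chain with one weak link between pixels 1 and 2:
# everything past the weak link has connectedness capped at 0.2.
aff = {(0, 1): 0.9, (1, 0): 0.9, (1, 2): 0.2,
       (2, 1): 0.2, (2, 3): 0.8, (3, 2): 0.8}
conn = fuzzy_connectedness(aff, 0, 4)
# conn == [1.0, 0.9, 0.2, 0.2]
```

Segmentation then assigns each pixel to the seed (object or background) to which it is most strongly connected.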
Effect of synbiotic in constipated adult women - a randomized, double-blind, placebo-controlled study of clinical response.
BACKGROUND & AIMS Synbiotic intake may selectively change microbiota composition, restore microbial balance in the gut and improve gastrointestinal functions. We have assessed the clinical response of chronically constipated women to a commercially available synbiotic, combining fructooligosaccharides with Lactobacillus and Bifidobacterium strains (LACTOFOS®). METHODS Following 1 week of non-interventional clinical observation, 100 constipated adult women, diagnosed by ROME III criteria, were randomized to receive two daily doses (6 g) of synbiotic or maltodextrin (placebo group) for 30 days. Treatment response was evaluated by the patient's daily record of evacuation (stool frequency, consistency and shape, according to the Bristol scale), abdominal symptoms (abdominal pain, bloating and flatulence) and constipation intensity (AGACHAN Constipation Scoring System). RESULTS Patients treated with the synbiotic had a higher frequency of evacuation, and stool consistency and shape nearer to normal parameters, than the placebo group, with significant benefits starting during the second and third weeks, respectively (interaction group/time, P < 0.0001). There were no significant differences in abdominal symptoms, but the AGACHAN score was better in the synbiotic than in the placebo group. CONCLUSIONS Dietary supplementation with a synbiotic composed of fructooligosaccharides with Lactobacillus and Bifidobacterium improved evacuation parameters and constipation intensity in chronically constipated women, without influencing abdominal symptoms.
Monte Carlo Strength Evaluation: Fast and Reliable Password Checking
Modern password guessing attacks adopt sophisticated probabilistic techniques that allow orders of magnitude fewer guesses to succeed compared to brute force. Unfortunately, best practices and password strength evaluators have failed to keep up: they are generally based on heuristic rules designed to defend against obsolete brute force attacks. Many passwords can only be guessed with significant effort, and motivated attackers may be willing to invest resources to obtain valuable passwords. However, it is eminently impractical for the defender to simulate expensive attacks against each user to accurately characterize their password strength. This paper proposes a novel method to estimate the number of guesses needed to find a password using modern attacks. The proposed method requires few resources, applies to a wide set of probabilistic models, and is characterised by highly desirable convergence properties. The experiments demonstrate the scalability and generality of the proposal. In particular, the experimental analysis reports evaluations on a wide range of password strengths, and of state-of-the-art attacks on very large datasets, including attacks that would have been prohibitively expensive to handle with existing simulation-based approaches.
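A Monte Carlo guess-number estimator of this kind samples passwords from the attacker's own model and averages 1/p(x) over the samples that are more probable than the target, which converges to the number of guesses the attacker would try first. A toy sketch with an explicit distribution standing in for a real guessing model (the passwords and probabilities are made up; a real Markov, PCFG, or neural model needs only the same sample and probability operations):

```python
import random

# Toy "attack model": an explicit distribution over a handful of passwords.
MODEL = {"123456": 0.40, "password": 0.30, "qwerty": 0.20, "letmein": 0.10}

def estimate_guess_number(password, n_samples=20000, seed=0):
    """Monte Carlo rank estimate: sample from the attacker's model and
    average 1/p(x) over samples strictly more probable than the target.
    The expectation equals the number of guesses preceding the target."""
    rng = random.Random(seed)
    passwords, probs = list(MODEL), list(MODEL.values())
    p_target = MODEL.get(password, 0.0)
    acc = 0.0
    for _ in range(n_samples):
        p = MODEL[rng.choices(passwords, weights=probs)[0]]
        if p > p_target:
            acc += 1.0 / p
    return acc / n_samples

# The most probable password needs ~0 prior guesses; "letmein" (weakest
# in the model) is preceded by the other three, so its estimate is near 3.
```

The key property is that the cost is fixed by the sample size, not by the target's rank, so even very strong passwords can be scored cheaply.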
High dose alkylator therapy for extracranial malignant rhabdoid tumors in children.
BACKGROUND Extracranial malignant rhabdoid tumor (MRT) is a rare pediatric cancer with a poor prognosis. The kidney is the most common site. Isolated reports have shown improvements in patient survival, but no specific treatment regimen has shown efficacy over others. PROCEDURE Retrospective review of patients diagnosed with extracranial MRT at Children's Hospital Los Angeles between 1983 and 2012. RESULTS The median age at presentation for the 21 patients was 13 months (range, 0-108 months). Ten patients had renal primary tumors. The median time to progression was 4 months (range, 0.4-7 months). The 5-year event-free survival (EFS) and overall survival (OS) of the entire cohort was 38 ± 10.6%. After 2002, patients diagnosed with extracranial MRT were administered a chemotherapy regimen of vincristine, doxorubicin and high dose cyclophosphamide (VDC). The OS for patients diagnosed before and after 2002 was 20 ± 12% and 54 ± 15%, respectively. Of the 13 patients who received a VDC-containing regimen, eight achieved a complete radiological remission; five of these patients are long-term survivors. Four patients who received autologous bone marrow transplantation were alive at last follow-up. All patients with unresectable primary tumors died. Patients who had disease progression or relapse did not survive. CONCLUSIONS Patients with extracranial MRT have a poor prognosis. Treatment with high dose alkylator therapy followed by consolidation with high dose chemotherapy and autologous bone marrow transplant for those patients in radiographic complete remission appears to have a beneficial effect on survival.
Word Equations with Length Constraints: What's Decidable?
We prove several decidability and undecidability results for the satisfiability and validity problems for languages that can express solutions of word equations with length constraints. The atomic formulas over this language are equality over string terms (word equations), linear inequality over the length function (length constraints), and membership in regular sets. These questions are important in logic, program analysis, and formal verification. Variants of these questions have been studied for many decades and practical satisfiability procedures (aka SMT solvers) for these formulas have become increasingly important in the analysis of string-manipulating programs such as web applications and scripts. We prove three main theorems. First, we give a new proof of undecidability for the validity problem for the set of sentences written as a ∀∃ quantifier alternation applied to positive word equations. A corollary of this undecidability result is that this set is undecidable even with sentences with at most two occurrences of a string variable. Second, we consider Boolean combinations of quantifier-free formulas constructed out of word equations and length constraints. We show that if word equations can be converted to a solved form, a form relevant in practice, then the satisfiability problem for Boolean combinations of word equations and length constraints is decidable. Third, we show that the satisfiability problem for quantifier-free formulas over word equations in regular solved form, length constraints, and the membership predicate over regular expressions is also decidable.
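A word equation with a length constraint can be made concrete with a small example: a single string variable X over the alphabet {a, b}, solutions enumerated by brute force. This is only an illustration of the problem statement, not a decision procedure of the kind studied here (brute force cannot decide unsatisfiability in general):

```python
from itertools import product

def solve_word_equation(lhs, rhs, alphabet="ab", max_len=5):
    """Brute-force the solutions of a one-variable word equation
    lhs(X) = rhs(X) under the length constraint |X| <= max_len."""
    solutions = []
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            x = "".join(chars)
            if lhs(x) == rhs(x):
                solutions.append(x)
    return solutions

# The equation X."ab" = "ba".X with |X| <= 5: its solutions follow the
# parametric pattern b(ab)^n, the kind of infinite solution family that
# makes combining word equations with length constraints subtle.
sols = solve_word_equation(lambda x: x + "ab", lambda x: "ba" + x)
# sols == ['b', 'bab', 'babab']
```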
Information & Environment: IoT-Powered Recommender Systems
Internet of Things (IoT) infrastructure within the physical library environment is the basis for an integrative, hybrid approach to digital resource recommenders. The IoT infrastructure provides mobile, dynamic wayfinding support for items in the collection, which includes features for location-based recommendations. The evaluation and analysis herein clarified the nature of users’ requests for recommendations based on their location, and described the subject areas of the library for which users request recommendations. The results indicated that users of IoT-based recommendation are interested in a broad distribution of subjects, with a short-head distribution from this collection in American and English Literature. A long-tail finding showed a diversity of topics that are recommended to users in the library book stacks with IoT-powered recommendations.
Improving Semantic Segmentation via Video Propagation and Label Relaxation
Semantic segmentation requires large amounts of pixelwise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models’ ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples leads to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. Our code and videos can be found at https://nv-adlr.github.io/publication/2018-Segmentation.
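Boundary label relaxation can be illustrated at a single pixel: instead of penalizing -log p(the one annotated class), penalize -log of the summed probability of all classes present in the local boundary neighborhood, so a slightly misplaced border annotation stops dominating the loss. A minimal sketch of that idea (the paper's exact loss and neighborhood definition may differ):

```python
import numpy as np

def relaxed_boundary_loss(logits, border_classes):
    """Boundary label relaxation (sketch): maximize the total probability
    of the set of classes present near the boundary, i.e. minimize
    -log sum_{c in border_classes} softmax(logits)[c]."""
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    return -np.log(sum(p[c] for c in border_classes))

logits = np.array([2.0, 1.5, 0.1, -1.0])  # per-class scores at a border pixel
# The pixel sits on a road/sidewalk boundary: accept class 0 or class 1.
loss_relaxed = relaxed_boundary_loss(logits, [0, 1])
loss_strict = relaxed_boundary_loss(logits, [0])
# The relaxed loss is never larger than the strict one-hot loss.
```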
The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware
In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination; (ii) adapting the NIDS’s operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of network security monitoring.
A stable multi-scale kernel for topological machine learning
Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification/retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes.
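A multi-scale kernel of this kind admits a closed form as a sum of Gaussians centered on pairs of diagram points minus Gaussians on their diagonal mirrors, so low-persistence points near the diagonal nearly cancel with their mirrors and contribute little. A NumPy sketch of that closed form, presented as an illustration consistent with the abstract rather than a verbatim reproduction of the paper's definition:

```python
import numpy as np

def pssk(F, G, sigma=1.0):
    """Persistence scale-space kernel between diagrams F, G (arrays of
    (birth, death) points):
      k(F, G) = 1/(8*pi*sigma) * sum_{p in F, q in G}
                exp(-|p-q|^2 / (8*sigma)) - exp(-|p-qbar|^2 / (8*sigma)),
    where qbar mirrors q across the diagonal."""
    F = np.atleast_2d(np.asarray(F, float))
    G = np.atleast_2d(np.asarray(G, float))
    Gbar = G[:, ::-1]                                   # (b, d) -> (d, b)
    d1 = ((F[:, None, :] - G[None, :, :]) ** 2).sum(-1)
    d2 = ((F[:, None, :] - Gbar[None, :, :]) ** 2).sum(-1)
    return (np.exp(-d1 / (8 * sigma))
            - np.exp(-d2 / (8 * sigma))).sum() / (8 * np.pi * sigma)

D1 = [(0.0, 1.0), (0.2, 0.9)]   # two persistent features
D2 = [(0.0, 1.1)]
k12 = pssk(D1, D2, sigma=0.5)
# The kernel is symmetric and positive definite, so it plugs directly
# into kernel SVMs or kernel PCA on persistence diagrams.
```

Note how a point exactly on the diagonal cancels with its own mirror and contributes nothing, which is the mechanism behind stability with respect to the 1-Wasserstein distance.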
Advancing students' computational thinking skills through educational robotics: A study on age and gender relevant differences
This work investigates the development of students’ computational thinking (CT) skills in the context of educational robotics (ER) learning activities. The study employs an appropriate CT model for operationalising and exploring students’ CT skills development in two different age groups (15 and 18 years old) and across gender. 164 students of different education levels (junior high: 89; high vocational: 75) engaged in ER learning activities (2 hours per week for 11 weeks) and their CT skills were evaluated at different phases during the activity, using different modality (written and oral) assessment tools. The results suggest that: (a) students eventually reach the same level of CT skills development independent of their age and gender; (b) CT skills in most cases need time to fully develop (students’ scores improve significantly towards the end of the activity); (c) age and gender relevant differences appear when analysing students’ scores in the various specific dimensions of the CT skills model; (d) the modality of the skill assessment instrument may have an impact on students’ performance; (e) girls in many situations appear to need more training time to reach the same skill level as boys.
Physician knowledge and appropriate utilization of computed tomographic colonography in colorectal cancer screening
To assess physician understanding of computed tomographic colonography (CTC) in colorectal cancer (CRC) screening guidelines in a pilot study. CTC is a sensitive and specific method of detecting colorectal polyps and cancer. However, several factors have limited its clinical availability, and CRC screening guidelines have issued conflicting recommendations. A web-based survey was administered to physicians at two institutions, one with and one without routine CTC availability. 398 of 1655 (24%) invited participants completed the survey; 59% were from the institution with routine CTC availability, 52% self-identified as trainees, and 15% as gastroenterologists. 78% had no personal experience with CTC. Only 12% were aware of any current CRC screening guidelines that included CTC. In a multiple regression model, gastroenterologists had greater odds of being aware of guidelines (OR 3.49, CI 1.67–7.26), as did physicians with prior CTC experience (OR 4.81, CI 2.39–9.68), controlling for institution, level of training, sex, and practice type. Based on guidelines that recommend CTC, when given a clinical scenario, 96% of physicians were unable to select the appropriate follow-up after a CTC, a result unaffected by institution. Most physicians have limited experience with CTC and are unaware of recent recommendations concerning CTC in CRC screening.
Analysis of Socio-Economic Well-Being of Population in Khirthar National Park, Sindh: A Geographical Study
Abstract: Two-thirds of Pakistan’s population lives in rural areas, where dependence on natural resources is greatest. National parks are protected areas where the natural environment is preserved for future generations. The purpose of this study is to investigate the socio-economic conditions of the people living in such areas. To this end, a comparative study was designed by selecting two areas of Kirthar National Park (KNP), Sindh: one within the core zone of the park and the other at its transition zone. The data were collected through an extensive field survey and analyzed using correlation techniques. The study can be helpful in assessing the interaction between humans and the dry natural environment. The results indicate a clear difference in the standard of living between the people of these two selected areas. Such studies are very important from the point of view of rural development of local communities.
Does high soy milk intake reduce prostate cancer incidence? The Adventist Health Study (United States)
Objectives: Recent experimental studies have suggested that isoflavones (such as genistein and daidzein) found in some soy products may reduce the risk of cancer. The purpose of this study was to evaluate the relationship between soy milk, a beverage containing isoflavones, and prostate cancer incidence. Methods: A prospective study with 225 incident cases of prostate cancer in 12,395 California Seventh-Day Adventist men who in 1976 stated how often they drank soy milk. Results: Frequent consumption (more than once a day) of soy milk was associated with a 70 percent reduction in the risk of prostate cancer (relative risk = 0.3, 95 percent confidence interval 0.1-1.0, p-value for linear trend = 0.03). The association was upheld when extensive adjustments were performed. Conclusions: Our study suggests that men with high consumption of soy milk are at reduced risk of prostate cancer. Possible associations between soy bean products, isoflavones and prostate cancer risk should be further investigated.
Media Richness Theory and New Electronic Communication Media: A Study of Voice Mail and Electronic Mail
In situations requiring the exchange of information to resolve equivocality or reduce uncertainty, can Media Richness Theory account for differences in individuals' preferences for electronic mail and voice mail relative to one another? The results of this study indicate that, as predicted, electronic mail was preferred over voice mail for the exchange of information to reduce uncertainty. However, contrary to the predictions of Media Richness Theory, voice mail was not preferred over electronic mail for the resolution of equivocality. These results suggest that Media Richness Theory in its current formulation may not be applicable to the study of the new media. Future research should redefine and extend the concept of richness and its elements to account for the nature and functionality of the new media, and investigate alternative social dimensions of individuals' preferences for and usage of the new media.
The Determinants of Capital Structure of Stock Exchange-listed Non-financial Firms in Pakistan
Capital structure refers to the different options used by a firm in financing its assets. Generally, a firm can go for different levels/mixes of debt, equity, or other financial arrangements. It can combine bonds, TFCs, lease financing, bank loans or many other options with equity in an overall attempt to boost the market value of the firm. In their attempt to maximise overall value, firms differ with respect to capital structures. This has given birth to different capital structure theories that attempt to explain the variation in capital structures of firms over time or across regions. On the other hand, empirical evidence is also sometimes inconsistent in substantiating a particular capital structure theory. This paper attempts to answer the question of what determines the capital structure of Pakistani listed firms other than those in the financial sector. To the authors' knowledge, it is the first thorough study conducted in Pakistan on the determinants of capital structure of listed non-financial firms. Booth, et al. (2001) did examine the determinants of capital structure in 10 developing countries including Pakistan; however, their study analyses data only for the firms included in the KSE-100 Index from 1980 to 1987. The paper is organised as follows. Section 1 introduces the paper. In the next section, some of the theoretical literature concerning the determinants and effects of leverage is reviewed. In Section 3 we describe our data and justify the choice of the variables used in our analysis. In Section 4 we estimate the model used in our analysis. Section 5 presents the results and conclusion.
Health and Safety Management in UK and Spanish SMEs: A Comparative Study
This paper reports on a survey carried out among United Kingdom (UK) and Spanish small and medium enterprises (SMEs) to establish their approach to health and safety management and determine views on participating in voluntary management accreditation schemes. The study revealed some key differences between the responding UK and Spanish SMEs. There was (a) an enhanced level of awareness of health and safety legislation; (b) a higher prevalence of safety and quality management systems, and (c) greater involvement of senior managers in managing health and safety in UK enterprises. Interest was expressed in a voluntary management accreditation scheme for health and safety by over half the UK and Spanish sample. Furthermore, those enterprises participating in a voluntary quality management accreditation scheme were more likely to be interested in a voluntary scheme for health and safety management. © 2000 National Safety Council and Elsevier Science Ltd.
Wind turbine underwater noise and marine mammals: implications of current knowledge and data needs
The demand for renewable energy has led to construction of offshore wind farms with high-power turbines, and many more wind farms are being planned for the shallow waters of the world’s marine habitats. The growth of offshore wind farms has raised concerns about their impact on the marine environment. Marine mammals use sound for foraging, orientation and communication and are therefore possibly susceptible to negative effects of man-made noise generated from constructing and operating large offshore wind turbines. This paper reviews the existing literature and assesses zones of impact from different noise-generating activities in conjunction with wind farms on 4 representative shallow-water species of marine mammals. Construction involves many types of activities that can generate high sound pressure levels, and pile-driving seems to be the noisiest of all. Both the literature and modeling show that pile-driving and other activities that generate intense impulses during construction are likely to disrupt the behavior of marine mammals at ranges of many kilometers, and that these activities have the potential to induce hearing impairment at close range. The reported noise levels from operating wind turbines are low, and are unlikely to impair hearing in marine mammals. The impact zones for marine mammals from operating wind turbines depend on the low-frequency hearing-abilities of the species in question, on sound-propagation conditions, and on the presence of other noise sources such as shipping. The noise impact on marine mammals is more severe during the construction of wind farms than during their operation.
Digital auto-tuning system for inductor current sensing in VRM applications
Inductor current sensing is becoming widely used in current-programmed controllers for microprocessor applications. This method exploits a low-pass filter in parallel with the inductor to provide lossless current sensing. A major drawback of inductor current sensing is that accurately sensing the DC and AC components of the current signal requires precise matching between the low-pass filter time constant and the inductor time constant (L/RL). However, matching accuracy depends on the tolerance of the components and on the operating conditions; therefore it can hardly be guaranteed. To overcome this problem, a novel digital auto-tuning system is proposed that automatically compensates for any time constant mismatch. This auto-tuning system has been developed for VRM current-programmed controllers. It makes it possible to meet the adaptive voltage positioning requirements using conventional, low-cost components, and to solve problems such as aging effects, temperature variations and process tolerances as well. A prototype of the auto-tuning system based on an FPGA and a commercial DC/DC controller has been designed and tested. The experimental results fully confirm the effectiveness of the proposed method, showing an improvement of the current sense precision from about 30% to 4%. This innovative solution is suitable to fulfil the challenging accuracy specifications required by future VRM applications.
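The matching condition behind lossless inductor current sensing can be illustrated with a short numerical sketch. The component values below are hypothetical, not from the paper: the filter voltage tracks the inductor current exactly only when the RC time constant equals L/RL, and a tolerance-level capacitor mismatch produces an AC sensing error of the order the authors report.

```python
import math

def sense_gain_error(L, R_L, R_f, C_f, f):
    """Relative magnitude error of the sensed voltage vs. the true
    inductor-current drop i_L * R_L at frequency f (Hz).

    The RC filter across the inductor gives
        v_C(s) = i_L(s) * R_L * (1 + s*L/R_L) / (1 + s*R_f*C_f),
    so the two time constants cancel exactly when R_f*C_f == L/R_L.
    """
    w = 2 * math.pi * f
    tau_L = L / R_L          # inductor time constant
    tau_f = R_f * C_f        # filter time constant
    gain = math.hypot(1.0, w * tau_L) / math.hypot(1.0, w * tau_f)
    return abs(gain - 1.0)

# Hypothetical VRM-like values: L = 1 uH, R_L = 1 mOhm -> tau_L = 1 ms.
L, R_L = 1e-6, 1e-3
# Matched filter: 10 kOhm * 100 nF = 1 ms -> essentially no sensing error.
assert sense_gain_error(L, R_L, 10e3, 100e-9, 100e3) < 1e-9
# A 30% capacitor tolerance gives a large AC sensing error at 100 kHz.
assert sense_gain_error(L, R_L, 10e3, 130e-9, 100e3) > 0.2
```

An auto-tuner of the kind proposed would adjust the effective filter time constant digitally until this error term vanishes, rather than relying on component tolerances.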
Students' social media engagement and fear of missing out (FoMO) in a diverse classroom
With the growing attention paid to the fear of missing out (FoMO) psychological phenomenon in explaining social media engagement (SME), this mixed-method research measured the relative impact of FoMO on students' SME for personal reasons during lectures. The moderating effect of culture (minority vs. non-minority students) on the connection between FoMO and SME was also considered. Quantitative data were gathered from 279 undergraduate students. The structural equation modeling results showed a positive moderate connection between the FoMO and SME variables. The bootstrapping result showed a significant indirect effect between the minority group of students and SME through increased levels of FoMO. A sequential explanatory strategy was used to refine and interpret the quantitative results. Accordingly, qualitative data were gathered by using semi-structured interviews to assist in explaining the findings of the quantitative phase. The qualitative data suggested several explanations for students' distractive behavior enabled by technology during class. The main recurrent theme was the frequent use of instructional activities based on the teacher-centered pedagogical approach. This approach imposed greater challenges for minority students as they tend to grapple with a host of language barriers. These students reported using social media tools to seek help from friends during lectures and feared missing out on useful assistance. Another finding showed that it was mainly non-minority students who experienced FoMO who admitted using social media during lessons regardless of the teaching method implemented.
A Wide-Band CMOS to Waveguide Transition at mm-Wave Frequencies With Wire-Bonds
In this paper a two-step transition between a CMOS chip and a WR-10 waveguide is investigated. The transition between a coplanar waveguide (CPW) on Duroid to W-band aluminum waveguide is studied and measured results show a bandwidth of 49% (67-110 GHz), an average insertion loss of 0.35 dB, and return loss below -10 dB throughout the entire band. Wire-bonding is then investigated as a connection between on-chip CMOS ground-signal-ground (GSG) pads to the Duroid CPW where the CMOS to printed circuit board transition measured results show an average 1-dB loss (including the CMOS GSG launcher) in the W-band. The full transition is also used to demonstrate packaging of a 100-GHz voltage-controlled oscillator in 90-nm CMOS where the RF signal was transmitted successfully with an average 2.5-dB loss (which includes a 2-cm Duroid CPW line) compared to on-chip probing measurement results.
Gotta Learn Fast: A New Benchmark for Generalization in RL
In this report, we present a new reinforcement learning (RL) benchmark based on the Sonic the Hedgehog™ video game franchise. This benchmark is intended to measure the performance of transfer learning and few-shot learning algorithms in the RL domain. We also present and evaluate some baseline algorithms on the new benchmark.
Chainspace: A Sharded Smart Contracts Platform
Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user-defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system is scalable: it shards state and the execution of transactions, and uses S-BAC, a distributed commit protocol, to guarantee consistency. Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties through Byzantine Fault Tolerance (BFT), and achieves high auditability and non-repudiation through 'blockchain' techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we evaluate an implementation of the system to support our claims about its scaling and other features; and we illustrate a number of privacy-friendly smart contracts for smart metering, polling and banking, and measure their performance.
Asthma therapy modulates priming-associated blood eosinophil responsiveness in allergic asthmatics.
Eosinophils play an important role in the pathogenesis of asthma. Several pro-inflammatory responses of eosinophils are primed in vivo in this disease. The aim of the present study was to investigate whether regular antiasthma treatment could modulate priming-sensitive cytotoxic mechanisms of human eosinophils. In a randomized, two-centre, double-blind parallel group study, the effects of 8 weeks of treatment with salmeterol xinafoate 50 microg b.i.d., beclomethasone dipropionate 400 microg b.i.d. or both on pulmonary function and on the activation of priming-sensitive cytotoxic mechanisms of eosinophils, i.e. degranulation of eosinophil cationic protein (ECP) in serum, and activation of isolated eosinophils in the context of induction of the respiratory burst and release of platelet-activating factor (PAF), were tested. These effects were evaluated in 40 allergic asthmatics before and 24 h after allergen inhalation challenge. Whereas baseline forced expiratory volume in one second (FEV1) improved in all treatment groups, only treatment with a combination of salmeterol and beclomethasone significantly inhibited the allergen-induced increase in serum ECP, and (primed/unprimed) PAF-release, suggesting inhibition of eosinophil priming after allergen challenge. In contrast to the combination therapy, monotherapy with beclomethasone had no influence on allergen-induced PAF-release, suggesting an additional anti-inflammatory effect of salmeterol during combination therapy. Monotherapy with beclomethasone inhibited the prechallenge serum-treated zymosan (STZ) (0.1 mg mL(-1))-induced respiratory burst and the allergen-induced increase in serum ECP levels, reflecting pre- and postchallenge anti-inflammatory effects.
During monotherapy with salmeterol, an allergen-induced increase in serum ECP concentration and STZ (0.1 mg x mL(-1))-induced respiratory burst was observed, suggesting that treatment with salmeterol alone had no effect on priming-sensitive eosinophil cytotoxic mechanisms. In conclusion, this study shows that standard asthma therapy leads to inhibition of eosinophil priming of cytotoxic mechanisms in vivo.
Efficient Learning of Pre-attentive Steering in a Driving School Framework
Autonomous driving is an extremely challenging problem, and existing driverless cars use non-visual sensing to compensate for the limitations of machine vision approaches. This paper presents a driving school framework for incrementally learning a fast and robust steering behaviour from visual gist only. The framework is based on an autonomous steering program interfacing in real time with a racing simulator: the teacher is a racing program having perfect insight into its position on the road, whereas the student learns to steer from visual gist only. Experiments show that (i) such a framework allows the visual driver to drive around the track successfully after a few iterations, demonstrating that visual gist is sufficient input to drive the car successfully; and (ii) the number of training rounds required to drive around a track reduces when the student has experienced other tracks, showing that the learnt model generalises well to unseen tracks.
Hepatitis B and Schistosoma co-infection in a non-endemic area
Schistosomiasis is related to the development of liver fibrosis and portal hypertension. Chronic co-infection with HBV and Schistosoma has been associated in endemic areas with a higher risk of more severe liver disease. However, no studies have assessed the real importance of this co-infection in non-endemic regions. This is a retrospective observational study of Sub-Saharan immigrants attended between October 2004 and February 2014. Patients with chronic HBV infection with and without evidence of schistosomal infection were compared. Epidemiological, analytical, and microbiological data were analysed. Likelihood of liver fibrosis based on the APRI and FIB-4 indexes was established. A total of 507 patients were included in the study, 170 (33.5%) of them harbouring evidence of schistosome infection. No differences were found in transaminase, GGT, and ALP levels. In the fibrosis tests, a higher proportion of patients with HBV and S. mansoni detection reached possible fibrosis scores (F > 2) compared to patients without schistosomiasis: 17.4 vs 14.2% and 4.3% vs 4.2% (using high-sensitivity and high-specificity cut-offs, respectively), although the differences were not statistically significant (p = 0.69, p = 0.96). For the possible cirrhosis (F4) score, similar results were observed: 4.3% of co-infected patients vs 2.1% of mono-infected ones, p = 0.46. According to these data, in non-endemic regions the degree of hepatic fibrosis in patients with chronic hepatitis B is not substantially modified by schistosome co-infection.
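The APRI and FIB-4 indexes used above are standard published formulas. The patient values in this sketch are hypothetical, chosen only to show the computation and the commonly cited cut-offs:

```python
import math

def apri(ast, ast_uln, platelets):
    """APRI = [(AST / upper limit of normal) / platelet count (10^9/L)] * 100."""
    return (ast / ast_uln) / platelets * 100.0

def fib4(age, ast, alt, platelets):
    """FIB-4 = (age * AST) / (platelet count (10^9/L) * sqrt(ALT))."""
    return age * ast / (platelets * math.sqrt(alt))

# Hypothetical patient: age 35, AST 80 IU/L (ULN 40 IU/L),
# ALT 64 IU/L, platelets 150 x 10^9/L.
a = apri(80, 40, 150)
f = fib4(35, 80, 64, 150)
assert abs(a - 4.0 / 3.0) < 1e-9   # APRI > 1.0 suggests significant fibrosis
assert abs(f - 7.0 / 3.0) < 1e-9   # FIB-4 between common cut-offs (1.45, 3.25)
```

High-sensitivity and high-specificity readings, as in the study, correspond to applying the lower and upper cut-offs of each index, respectively.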
A review of refrigerant maldistribution
In recent years, conservation of energy has become a challenging issue in air-conditioning applications. To address this issue, many researchers have recommended the use of a Parallel Flow Condenser (PFC) and a refrigerant with low Global Warming Potential and Ozone Depletion Potential, such as R32, in air conditioning systems. However, the PFC faces the critical challenge of flow maldistribution in the tubes. This literature review mainly examines the refrigerant maldistribution problem as investigated by previous researchers. It was found that many researchers did not properly analyse the influence of flow maldistribution profiles on the performance degradation of heat exchangers. In order to have a comprehensive analysis of tube-side maldistribution in parallel-flow microchannel heat exchangers, it is recommended that the influence of the higher statistical moments of the probability density function of the flow maldistribution profiles on performance degradation be quantified. Additionally, R32 maldistribution should be analysed and compared with R410A, which is currently the most commonly used refrigerant in air conditioning units. Moreover, in order to have a realistic simulation of the effect of refrigerant flow maldistribution profiles on the performance degradation of heat exchangers, the effect of superheat and sub-cooling must be analysed.
EMPLOYEE MOTIVATION AND PERFORMANCE Ultimate Companion Limited Douala-Cameroon
The subject matter of this research, employee motivation and performance, seeks to look at how best employees can be motivated in order to achieve high performance within a company or organization. Managers and entrepreneurs must ensure that companies or organizations have competent personnel capable of handling this task. This leads to the problem question of this research: why is money not a sufficient motivator for high performance? This establishes that money does matter for high performance, but there is a need to look at other aspects of motivation beyond money. Four theories were taken into consideration to give an explanation of the question raised in the problem formulation: Maslow's hierarchy of needs, Herzberg's two-factor theory, John Adair's fifty-fifty theory and Vroom's expectancy theory. Furthermore, the performance management process is examined as a tool to measure employee performance and company performance. This research equally looked at the various reward systems which could be used by a company. In addition, culture and organizational culture and their influence on employee behaviour within a company were also examined. An empirical study was done at Ultimate Companion Limited, which represents the case study of this research work. Interviews and questionnaires were conducted to sample employee and management views on motivation and how it can increase performance at the company. Finally, a comparison of findings with theories, a discussion which raises critical issues on motivation/performance, and a conclusion constitute the last part of the research. Subject headings (keywords): Motivation, Performance, Intrinsic, Extrinsic, Incentive, Tangible and Intangible, Reward
AutoHair: fully automatic hair modeling from a single image
We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.
A review of affective computing: From unimodal analysis to multimodal fusion
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers, to better understand this challenging and exciting research field.
Arithmetics of extensional fuzzy numbers - part II: Algebraic framework
In the first part of this contribution, we proposed extensional fuzzy numbers and a working arithmetic for them that may be abstracted to so-called many identities algebras (MI-algebras, for short). In this second part, we show that the proposed MI-algebras give a framework not only for the arithmetic of extensional fuzzy numbers, but also for other arithmetics of fuzzy numbers and even more general sets of real vectors used in mathematical morphology. This entitles us to develop a theory of MI-algebras to study general properties of structures for which the standard algebras are not appropriate. Some of the basic concepts and properties are presented here.
Cimetidine for chronic calcifying tendinitis of the shoulder.
BACKGROUND AND OBJECTIVES Calcium deposits of the shoulder may persist for many years with resulting pain and impairment of mechanical function. The effects of different treatments vary significantly and do not show consistent and reliable long-term results. Cimetidine decreases calcium levels and improves symptoms in patients with hyperparathyroidism. We evaluated cimetidine as a treatment for chronic calcifying tendinitis of the shoulder in patients who did not respond to conservative treatment. METHODS Cimetidine, 200 mg twice daily, was given orally for 3 months in 16 patients who did not respond to more than 6 months of conservative treatment. We recorded subjective, functional, and radiologic findings at 1 day before, at 2 weeks after, and at 2 and 3 months after the start of cimetidine. We also performed a follow-up study (4 to 24 months). RESULTS After treatment, peak pain score (visual analogue scale: 0 - 100) decreased significantly from 63 +/- 13 to 14 +/- 19 (mean +/- SD, P <.01) and 10 patients (63%) became pain free. Physical impairment was also significantly improved. Calcium deposits disappeared in 9 patients (56%), decreased in 4 patients (25%), and did not change in 3 patients (19%). Follow-up data showed that improvement of symptoms was sustained. No recurrence or enlargement of calcium deposits was observed. Plasma concentrations of calcium and parathyroid hormone did not change significantly. CONCLUSIONS Our results indicate that cimetidine is effective in treating chronic calcifying tendinitis of the shoulder; however, the mechanism by which cimetidine improves the symptoms is unknown.
Self-Adaptive Anytime Stream Clustering
Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering.
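As a minimal sketch of the age-weighting idea (not the actual ClusTree implementation), a micro-cluster summary can store the count, linear sum and squared sum of its points and decay them exponentially, so that recent data dominates the maintained statistics:

```python
import math

class DecayedCF:
    """Exponentially decayed clustering feature (N, LS, SS) for 1-D data,
    a sketch of the time-weighted summaries kept in stream-clustering
    index nodes. Weights halve every `half_life` time units:
    w(dt) = 2 ** (-dt / half_life).
    """
    def __init__(self, half_life=10.0):
        self.lam = math.log(2.0) / half_life
        self.n = self.ls = self.ss = 0.0
        self.t = 0.0  # time of the last update

    def _decay(self, t):
        # Scale all statistics by the decay factor since the last update.
        f = math.exp(-self.lam * (t - self.t))
        self.n *= f; self.ls *= f; self.ss *= f
        self.t = t

    def insert(self, x, t):
        self._decay(t)
        self.n += 1.0; self.ls += x; self.ss += x * x

    def mean(self):
        return self.ls / self.n

cf = DecayedCF(half_life=10.0)
cf.insert(0.0, t=0.0)
cf.insert(10.0, t=10.0)  # the earlier point now carries half the weight
assert abs(cf.mean() - 10.0 / 1.5) < 1e-9
```

Because each statistic is additive and decays by the same factor, summaries can be merged or pushed down the index lazily, which is what makes anytime maintenance possible.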
Precipitation Titrations Determination of Chloride by the Mohr Method
The first slight excess of Ag⁺ reacts with the CrO₄²⁻ indicator to form red Ag₂CrO₄, which signals the end point. The color goes from yellow to brownish-yellow. The change can be detected most precisely when the color in the titration flask is compared to a reference color. A mixture containing CrO₄²⁻ indicator in a suspension of CaCO₃, simulating AgCl precipitate, is used for this purpose. An indicator blank is prepared using a portion of this mixture. The blank provides a correction for the slight difference between the end point and the equivalence point, a systematic error.
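The blank correction enters the chloride calculation as a volume subtracted from the titrant reading. The numbers below are a hypothetical worked example, assuming the 1:1 Ag⁺:Cl⁻ stoichiometry of the Mohr titration:

```python
def chloride_molarity(v_titrant_mL, v_blank_mL, m_agno3, v_aliquot_mL):
    """Molar concentration of Cl- in the sample aliquot.

    The indicator blank volume is subtracted from the end-point titrant
    volume to correct the systematic end-point/equivalence-point error;
    Ag+ and Cl- react 1:1, so moles of AgNO3 equal moles of Cl-.
    """
    v_corr_L = (v_titrant_mL - v_blank_mL) / 1000.0
    moles_cl = v_corr_L * m_agno3
    return moles_cl / (v_aliquot_mL / 1000.0)

# Hypothetical run: 25.00 mL aliquot titrated with 0.1000 M AgNO3,
# 23.76 mL to the end point, 0.12 mL indicator blank.
c = chloride_molarity(23.76, 0.12, 0.1000, 25.00)
assert abs(c - 0.09456) < 1e-6
```

Without the blank correction the result would be biased high by the extra titrant consumed past the equivalence point.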
Alveolar bone development after decoronation of ankylosed teeth
Decoronation of ankylosed teeth in infraposition was introduced in 1984 by Malmgren and co-workers (1). This method is used all over the world today. It has been clinically shown that the procedure preserves the alveolar width and rebuilds lost vertical bone of the alveolar ridge in growing individuals. The biological explanation is that the decoronated root serves as a matrix for new bone development during resorption of the root and that the lost vertical alveolar bone is rebuilt during eruption of adjacent teeth. First a new periosteum is formed over the decoronated root, allowing vertical alveolar growth. Then the interdental fibers that have been severed by the decoronation procedure are reorganized between adjacent teeth. The continued eruption of these teeth mediates marginal bone apposition via the dental-periosteal fiber complex. The erupting teeth are linked with the periosteum covering the top of the alveolar socket and indirectly via the alveolar gingival fibers, which are inserted in the alveolar crest and in the lamina propria of the interdental papilla. Both structures can generate a traction force resulting in bone apposition on top of the alveolar crest. This theoretical biological explanation is based on known anatomical features, known eruption processes and clinical observations.
Unsupervised Domain Adaptation by Domain Invariant Projection
Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
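The distribution distance minimized in this line of work can be illustrated with a (biased) Maximum Mean Discrepancy estimate. The sketch below uses synthetic data and an RBF kernel and is not the paper's implementation:

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between two
    samples, under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))        # "source" sample
tgt_same = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution
tgt_shift = rng.normal(3.0, 1.0, size=(200, 2))  # shifted distribution

# MMD is near zero for matching distributions and large under shift.
assert mmd2(src, tgt_same) < mmd2(src, tgt_shift)
```

In the projection setting, one would search for an orthonormal matrix W minimizing `mmd2(Xs @ W, Xt @ W)`, so that source and target look alike in the learned low-dimensional latent space.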
RelEx - Relation extraction using dependency parse trees
MOTIVATION The discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge of individual relations, such as physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases. RESULTS We developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx to a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall. AVAILABILITY The natural language preprocessing tools used are free for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/).
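The rule-based idea can be conveyed with a toy example; this is a drastic simplification of RelEx's actual dependency-path rules, with a hypothetical relation-term list and input parse:

```python
# Toy dependency parse: token -> (head, dependency relation), e.g. for
# "RelA activates IkB": nsubj(activates, RelA), obj(activates, IkB).
RELATION_TERMS = {"activates", "inhibits", "binds"}  # hypothetical list

def extract_relations(deps, entities):
    """Apply one RelEx-style rule: if a relation term governs both a
    subject and an object that are entity mentions, emit a triple."""
    by_head = {}
    for tok, (head, rel) in deps.items():
        by_head.setdefault(head, {})[rel] = tok
    out = []
    for term in sorted(RELATION_TERMS):
        args = by_head.get(term, {})
        subj, obj = args.get("nsubj"), args.get("obj")
        if subj in entities and obj in entities:
            out.append((subj, term, obj))
    return out

deps = {"RelA": ("activates", "nsubj"), "IkB": ("activates", "obj")}
assert extract_relations(deps, {"RelA", "IkB"}) == [("RelA", "activates", "IkB")]
```

The real system applies several such rules to full parse trees (including paths through prepositions and conjunctions), which is what yields its reported precision/recall balance.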
Awareness , Interest , and Purchase : The Effects of User-and Marketer-Generated Content on Purchase Decision Processes
Companies use Facebook fan pages to promote their products or services. Recent research shows that user-generated content (UGC) and marketer-generated content (MGC) created on fan pages affect online sales. But it is still unclear how exactly they affect consumers during their purchase process. We analyze field data from a large German e-tailer to investigate the effects of UGC and MGC in a multi-stage model of purchase decision processes: awareness creation, interest stimulation, and final purchase decision. We find that MGC and UGC create awareness by attracting users to the fan page. Increased numbers of active users stimulate user interest, and more users visit the e-tailer's online shop. Neutral UGC increases the conversion rate of online shop visitors. Comparisons between one-, two-, and three-stage models show that neglecting one or two stages hides several important effects of MGC and UGC on consumers and ultimately leads to inaccurate predictions of key business figures.
Diverticulitis in transplant patients and patients on chronic corticosteroid therapy: a systematic review.
BACKGROUND The clinical course of diverticular disease in immunosuppressed patients is widely believed to be more severe than in the general population. In this study we systematically reviewed the literature regarding the epidemiology and clinical course of diverticulitis in immunosuppressed patients. Our goal was to develop recommendations regarding the care of this group of patients. METHODS Using PubMed and Web of Knowledge we systematically reviewed all studies published between 1970 and 2009 that analyzed the epidemiology, clinical manifestation, or outcomes of treatment of diverticulitis in immunosuppressed patients. Keywords of "transplantation," "corticosteroid," "HIV," "AIDS," and "chemotherapy" were used. RESULTS Twenty-five studies met our inclusion criteria. All of these studies focused on the impact of diverticulitis in patients with transplants or on chronic corticosteroid therapy. The reported incidence of acute diverticulitis in these patients was approximately 1% (variable follow-up periods). Among patients with known diverticular disease the incidence was 8%. Mortality from acute diverticulitis in these patients was 23% when treated surgically and 56% when treated medically. Overall mortality was 25%. CONCLUSIONS Our study summarizes evidence that patients with transplants or on chronic corticosteroid therapy (1) have a rate of acute diverticulitis that is higher than in the baseline population and (2) have a high mortality rate from acute diverticulitis. Further research is needed to define whether these risks constitute a mandate for screening and prophylactic sigmoid colectomy.
Effects of growth hormone on circulating cytokine network, and left ventricular contractile performance and geometry in patients with idiopathic dilated cardiomyopathy.
BACKGROUND Recent experimental and clinical data indicate that abnormal central and peripheral immune reactions contribute to the progression of chronic heart failure, and that immunomodulation may be an important therapeutic approach in this syndrome. AIMS We sought to study the effects of growth hormone (GH) administration on the circulating pro-inflammatory/anti-inflammatory cytokine balance, and to investigate whether these GH-induced immunomodulatory effects are associated with the improvement of left ventricular (LV) contractile performance in idiopathic dilated cardiomyopathy (DCM) patients. METHODS Plasma pro-inflammatory cytokines tumour necrosis factor-alpha (TNF-alpha), interleukin-6 (IL-6), granulocyte-macrophage colony-stimulating factor (GM-CSF) and its soluble receptor (sGM-CSFR), chemotactic cytokine macrophage chemoattractant protein-1 (MCP-1), soluble adhesion molecules intercellular adhesion molecule-1 (sICAM-1) and vascular cell adhesion molecule-1 (sVCAM-1), and, finally, anti-inflammatory cytokines interleukin-10 (IL-10) and transforming growth factor-beta2 (TGF-beta2) were measured (ELISA method) in 12 patients with DCM (NYHA class III; LV ejection fraction: 23.6+/-1.7%) before and after a 3-month subcutaneous administration of GH 4 IU every other day (randomized crossover design). Peak oxygen uptake (VO2 max), LV dimensions, LV mass index, end-systolic wall stress (ESWS), mean velocity of circumferential fibre shortening (Vcfc), and contractile reserve (change of the ratio Vcfc/ESWS after dobutamine administration) were also determined during the same period.
RESULTS Treatment with GH produced a significant reduction in plasma TNF-alpha (7.8+/-1.1 vs 5.5+/-0.9 pg/ml, P=0.013), IL-6 (5.7+/-0.5 vs 4.7+/-0.4 pg/ml, P=0.043), GM-CSF (27.3+/-1.7 vs 23.3+/-1.8 pg/ml, P=0.042), sGM-CSFR (4.0+/-0.4 vs 3.2+/-0.4 ng/ml, P=0.039), MCP-1 (199+/-5 vs 184+/-6 pg/ml, P=0.048), sICAM-1 (324+/-34 vs 274+/-27 ng/ml, P=0.008) and sVCAM-1 (1238+/-89 vs 1043+/-77 ng/ml, P=0.002) in DCM patients. A significant increase in the ratios IL-10/TNF-alpha (1.9+/-0.3 vs 3.5+/-0.9, P=0.049), IL-10/IL-6 (2.6+/-0.6 vs 3.2+/-0.5, P=0.044) and TGF-beta2/TNF-alpha (3.1+/-0.6 vs 4.4+/-0.6, P=0.05) was also found with GH therapy. A significant reduction in ESWS (841+/-62 vs 634+/-48 gr/cm(2), P=0.0026) and LV end-systolic volume index (LVESVI, 128+/-12 vs 102+/-12 ml, P=0.035) as well as a significant increase in posterior wall thickness (PWTH, 9.2+/-0.5 vs 10.3+/-0.6 mm, P=0.034), contractile reserve (0.00029+/-0.0001 vs 0.00054+/-0.0001 circ*cm(2)/gr*s, P=0.00028) and VO2max (15.3+/-0.7 vs 17.1+/-0.9 ml/kg/min, P=0.002) were observed after GH administration. Good correlations were found between the GH-induced increase in contractile reserve and the increases in VO2max (r=0.63, P=0.028), IL-10/TNF-alpha (r=0.69, P=0.011) and TGF-beta2/TNF-alpha (r=0.58, P=0.046) ratios, as well as the reduction in plasma TNF-alpha levels (r=-0.86, P=0.0004). CONCLUSIONS GH administration beneficially modulates the circulating cytokine network and soluble adhesion molecules in patients with DCM, whilst enhancing contractile reserve and diminishing LV volumes. These GH-induced anti-inflammatory effects may be associated with the improvement in LV contractile performance and exercise capacity as well as with the reversal of LV remodelling in patients with DCM.
Leaf disease detection and grading using computer vision technology & fuzzy logic
In agriculture, leaf diseases have become a serious problem, as they can cause significant diminution in both the quality and quantity of agricultural yields. Thus, automated recognition of diseases on leaves plays a crucial role in the agriculture sector. This paper presents a simple and computationally efficient method for leaf disease identification and grading using digital image processing and machine vision technology. The proposed system is divided into two phases. In the first phase, the plant is recognized on the basis of the features of its leaf; this includes pre-processing of leaf images and feature extraction, followed by Artificial Neural Network (ANN) based training and classification for recognition of the leaf. In the second phase, the disease present in the leaf is classified; this process includes K-means based segmentation of the defected area, feature extraction of the defected portion, and ANN-based classification of the disease. The disease grading is then done on the basis of the amount of disease present in the leaf.
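The second phase's K-means segmentation and area-based grading can be sketched roughly as follows. This is a toy NumPy illustration, not the paper's implementation: the "least-green cluster is diseased" rule and the grade cut-offs are assumptions made for the example.

```python
import numpy as np

def kmeans(pixels, k=2, iters=10):
    """Plain k-means on an (N, 3) array of pixel colours."""
    # farthest-point initialisation avoids duplicate starting centres
    centers = [pixels[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign every pixel to its nearest centre, then recompute centres
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

def disease_grade(image, k=2):
    """Segment the leaf into k colour clusters and grade by the fraction of
    pixels in the least-green cluster (assumed here to be the diseased one)."""
    pixels = image.reshape(-1, 3).astype(float)
    labels, centers = kmeans(pixels, k)
    diseased = centers[:, 1].argmin()        # cluster with lowest green channel
    frac = float(np.mean(labels == diseased))
    grade = "low" if frac < 0.1 else "medium" if frac < 0.3 else "high"
    return frac, grade

# toy 10x10 "leaf": mostly green, with a small brown (diseased) patch
img = np.tile(np.array([40, 200, 40]), (10, 10, 1))
img[:2, :2] = [120, 60, 20]                  # 4 of 100 pixels are brown
frac, grade = disease_grade(img)
print(round(frac, 2), grade)                 # -> 0.04 low
```

In a real pipeline the clustering would typically run on a colour space such as L*a*b* after background removal, and the grade thresholds would come from agronomic standards rather than the arbitrary cut-offs used here.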
Pattern Discovery from Stock Time Series Using Self-Organizing Maps
This work was supported by the RGC CERG project PolyU 5065/98E and the Departmental Grant H-ZJ84. Pattern discovery from time series is of fundamental importance. Particularly when domain-expert-derived patterns do not exist or are not complete, an algorithm to discover specific patterns or shapes automatically from the time series data is necessary. Such an algorithm is noteworthy in that it does not assume prior knowledge of the number of interesting structures, nor does it require an exhaustive explanation of the patterns being described. In this paper, a clustering approach is proposed for pattern discovery from time series. In view of its popularity and superior clustering performance, the self-organizing map (SOM) was adopted for pattern discovery in temporal data sequences. It is a special type of clustering algorithm that imposes a topological structure on the data. To prepare for the SOM algorithm, data sequences are segmented from the numerical time series using a continuous sliding window. Similar temporal patterns are then grouped together using SOM into clusters, which may subsequently be used to represent different structures of the data or temporal patterns. Attempts have been made to tackle the problem of representing patterns in a multi-resolution manner. With the increase in the number of data points in the patterns (the length of patterns), the time needed for the discovery process increases exponentially. To address this problem, we propose to compress the input patterns by a perceptually important point (PIP) identification algorithm. The idea is to replace the original data segment by its PIPs so that the dimensionality of the input pattern can be reduced. Encouraging results are observed and reported for the application of the proposed methods to the time series collected from the Hong Kong stock market.
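The PIP compression step can be sketched as follows, assuming the common vertical-distance variant of PIP identification: start from the segment's endpoints and repeatedly promote the point farthest from the chord joining its adjacent PIPs.

```python
import numpy as np

def pip_compress(series, n_pips):
    """Reduce a time series to its n_pips perceptually important points:
    the endpoints, plus the points of maximum vertical distance to the
    chord joining their adjacent PIPs, added greedily."""
    x = np.arange(len(series), dtype=float)
    y = np.asarray(series, dtype=float)
    pips = [0, len(series) - 1]
    while len(pips) < n_pips:
        best_d, best_i = -1.0, None
        for a, b in zip(pips, pips[1:]):
            for i in range(a + 1, b):
                # vertical distance from point i to the chord (a, b)
                y_line = y[a] + (y[b] - y[a]) * (x[i] - x[a]) / (x[b] - x[a])
                d = abs(y[i] - y_line)
                if d > best_d:
                    best_d, best_i = d, i
        if best_i is None:       # no interior points left to add
            break
        pips.append(best_i)
        pips.sort()
    return pips

# a V-shaped series: the kink at index 2 is the most important interior point
pips = pip_compress([0, -1, -2, 1, 4], 3)
print(pips)   # -> [0, 2, 4]
```

The compressed index list (or the values at those indices, resampled to a fixed length) would then serve as the lower-dimensional input vector to the SOM.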
Does smoking intervention influence adolescent substance use disorder treatment outcomes?
Although tobacco use is reported by the majority of substance use disordered (SUD) youth, little work has examined tobacco-focused interventions with this population. The present study is an initial investigation of the effect of a tobacco use intervention on adolescent SUD treatment outcomes. Participants were adolescents in SUD treatment taking part in a cigarette smoking intervention efficacy study, assessed at baseline and followed up at 3 and 6 months post-intervention. Analyses compared treatment and control groups on days using alcohol and drugs and the proportion abstinent from substance use at follow-up assessments. Adolescents in the treatment condition reported significantly fewer days of substance use and were somewhat more likely to be abstinent at 3-month follow-up. These findings suggest that tobacco-focused intervention may enhance SUD treatment outcomes. The present study provides further evidence for the value of addressing tobacco use in the context of treatment for adolescent SUDs.
Evolutionary Generative Adversarial Networks
Generative adversarial networks (GANs) have been effective for learning generative models for real-world data. However, existing GANs (GAN and its variants) tend to suffer from training problems such as instability and mode collapse. In this paper, we propose a novel GAN framework called evolutionary generative adversarial networks (E-GAN) for stable GAN training and improved generative performance. Unlike existing GANs, which employ a pre-defined adversarial objective function to alternately train a generator and a discriminator, we utilize different adversarial training objectives as mutation operations and evolve a population of generators to adapt to the environment (i.e., the discriminator). We also utilize an evaluation mechanism to measure the quality and diversity of generated samples, such that only well-performing generator(s) are preserved and used for further training. In this way, E-GAN overcomes the limitations of an individual adversarial training objective and always preserves the best offspring, contributing to the progress and success of GANs. Experiments on several datasets demonstrate that E-GAN achieves convincing generative performance and reduces the training problems inherent in existing GANs.
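The evolutionary loop (mutate each generator with several objectives, let the environment keep only the fittest offspring) can be illustrated on a toy problem. Here the "generator" is a single scalar parameter, and the three mutation operators and the fitness function are simple stand-ins for the paper's adversarial objectives and quality/diversity score, not the actual E-GAN losses.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 5.0                       # "real data" statistic the generators must match

def fitness(theta):
    # stand-in for E-GAN's quality+diversity evaluation: closer is fitter
    return -abs(theta - target)

# three mutation operators play the role of the different adversarial
# objectives (minimax, heuristic, least-squares) used in E-GAN
mutations = [
    lambda t: t + 0.5 * np.sign(target - t),   # coarse step toward the target
    lambda t: t + 0.1 * (target - t),          # proportional refinement step
    lambda t: t + rng.normal(0.0, 0.2),        # random exploration
]

population = [0.0, 1.0]            # initial generator parameters
for generation in range(50):
    # each parent spawns one child per mutation operator
    children = [m(p) for p in population for m in mutations]
    # the "environment" (discriminator) keeps only the fittest individuals
    population = sorted(children, key=fitness, reverse=True)[:2]

best = population[0]
print(round(best, 2))
```

Because the best parent always produces at least one refined child, the top individual's error shrinks every generation, mirroring how E-GAN's selection preserves the best offspring across updates.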
Inferring Discourse Relations in Context
We investigate various contextual effects on text interpretation, and account for them by providing contextual constraints in a logical theory of text interpretation. On the basis of the way these constraints interact with the other knowledge sources, we draw some general conclusions about the role of domain-specific information, top-down and bottom-up discourse information flow, and the usefulness of formalisation in discourse theory. Introduction: Time Switching and Amelioration. Two essential parts of discourse interpretation involve (i) determining the rhetorical role each sentence plays in the text; and (ii) determining the temporal relations between the events described. Preceding discourse context has significant effects on both of these aspects of interpretation. For example, text (1) in vacuo may be a non-iconic explanation; the pushing caused the falling and so explains why Max fell. But the same pair of sentences may receive an iconic, narrative interpretation in the discourse context provided by (2): John takes advantage of Max's vulnerability while he is lying on the ground, to push him over the edge of the cliff. (1) Max fell. John pushed him. (2) John and Max came to the cliff's edge. John applied a sharp blow to the back of Max's neck. Max fell. John pushed him. Max rolled over the edge of the cliff. Moreover, the text in (3) in vacuo is incoherent, but becomes coherent in (4)'s context. The support of the Science and Engineering Research Council through project number GR/G22077 is gratefully acknowledged. HCRC is supported by the Economic and Social Research Council. We thank two anonymous reviewers for their helpful comments.
A Combined Model- and Learning-Based Framework for Interaction-Aware Maneuver Prediction
This paper presents a novel online-capable interaction-aware intention and maneuver prediction framework for dynamic environments. The main contribution is the combination of model-based interaction-aware intention estimation with maneuver-based motion prediction based on supervised learning. The advantages of this framework are twofold. On one hand, expert knowledge in the form of heuristics is integrated, which simplifies the modeling of the interaction. On the other hand, the difficulties associated with the scalability and data sparsity of the algorithm due to the so-called curse of dimensionality can be reduced, as a reduced feature space is sufficient for supervised learning. The proposed algorithm can be used for highly automated driving or as a prediction module for advanced driver assistance systems without the need for intervehicle communication. At the start of the algorithm, the motion intention of each driver in a traffic scene is predicted in an iterative manner using the game-theoretic idea of stochastic multiagent simulation. This approach provides an interpretation of what other drivers intend to do and how they interact with surrounding traffic. By incorporating this information into a Bayesian network classifier, the developed framework achieves a significant improvement in terms of reliable prediction time and precision compared with other state-of-the-art approaches. By means of experimental results in real traffic on highways, the validity of the proposed concept and its online capability are demonstrated. Furthermore, its performance is quantitatively evaluated using appropriate statistical measures.
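The supervised classification stage can be approximated, for illustration only, by a naive Bayes classifier over hypothetical interaction features. The feature names, class structure, and the naive-independence assumption below are simplifications made for the example; the paper uses a richer Bayesian network over its own feature space.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive-Bayes classifier, a simple stand-in for the
    Bayesian network that fuses interaction features into a maneuver label."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(c) + sum over features of log N(x_f; mu_cf, var_cf)
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[(ll + np.log(self.prior)).argmax(axis=1)]

rng = np.random.default_rng(0)
# hypothetical features: (lateral offset in m, time gap to lead vehicle in s)
keep = rng.normal([0.0, 2.0], 0.3, (200, 2))      # lane-keeping examples
change = rng.normal([1.0, 0.8], 0.3, (200, 2))    # lane-change examples
X = np.vstack([keep, change])
y = np.array([0] * 200 + [1] * 200)

model = GaussianNB().fit(X, y)
acc = float(np.mean(model.predict(X) == y))
print(round(acc, 2))
```

In the paper's framework the class-conditional inputs would come from the interaction-aware intention estimates rather than raw kinematics, which is precisely what keeps the supervised feature space small.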
Are variance components of exposure heterogeneous between time periods and factories in the European carbon black industry?
Occupational exposure to chemical agents can vary enormously within and between workers, even when carrying out the same jobs. When repeated measurements are available, the variance components can be estimated using random- or mixed-effects models. Pooling the variance components across the fixed effects, in mixed-effects models, reduces the complexity of the models, especially when there are a large number of fixed effects. The analyses presented in this paper tested the assumptions of homogeneity in the variance components between factories and surveys for inhalable dust exposure in the European carbon black manufacturing industry. In total, 5296 measurements from 1771 workers were available, collected during two surveys carried out between 1991 and 1995. Workers were grouped into eight job categories, and for each of these separate mixed-effects models were developed, including factory, survey and in some cases the interaction term as the fixed effects. The likelihood ratio test was used to test the assumptions of homogeneity of the variance components. Statistically significant heterogeneity of the variance components was observed for two of the eight job categories, 'Fitter/Welder' and 'Warehouseman'. The heterogeneity was due mainly to differences in variance between the factories. When estimating the probability of overexposure for all the factories combined, there was little difference between the models with and without heterogeneous variance components for 'Fitters/Welders'. For the 'Warehousemen' the probability of overexposure in the last survey changed marginally from 4% in the pooled model to 6% in the heterogeneous model. Larger differences between the models were observed when estimating the probability of overexposure for individual factories, which was due to over- or under-estimation of the variance components in the pooled models. In conclusion, for most job categories pooling of the variance components appears to be justified in this database.
In addition, no large differences were found when determining the industry-wide probability of 'overexposure' when comparing the pooled with the heterogeneous models. However, when evaluating the factory-specific probability of 'overexposure', or when using the models to provide exposure estimates for epidemiological studies, heterogeneity in the variance components should be investigated.
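For a balanced one-way random-effects design, the within- and between-worker variance components discussed above can be estimated with the classical ANOVA method of moments. The sketch below uses simulated data with assumed parameter values, not the carbon black measurements, and omits the fixed effects (factory, survey) of the paper's mixed-effects models.

```python
import numpy as np

def variance_components(data):
    """ANOVA (method-of-moments) estimates of within- and between-worker
    variance for a balanced one-way random-effects model:
        x_ij = mu + b_i + e_ij,  b_i ~ N(0, sB2),  e_ij ~ N(0, sW2)."""
    data = np.asarray(data, dtype=float)   # shape (workers, repeats)
    k, n = data.shape
    worker_means = data.mean(axis=1)
    # within-worker mean square estimates sW2 directly
    msw = ((data - worker_means[:, None]) ** 2).sum() / (k * (n - 1))
    # between-worker mean square has expectation sW2 + n * sB2
    msb = n * ((worker_means - data.mean()) ** 2).sum() / (k - 1)
    sw2 = msw
    sb2 = max((msb - msw) / n, 0.0)        # truncate negative estimates at zero
    return sw2, sb2

# simulate 500 workers with 4 repeated (log-)exposure measurements each;
# true within-worker sd 0.8 and between-worker sd 0.5 are assumed values
rng = np.random.default_rng(1)
k, n = 500, 4
x = rng.normal(0.0, 0.5, (k, 1)) + rng.normal(0.0, 0.8, (k, n))
sw2, sb2 = variance_components(x)
print(round(sw2, 2), round(sb2, 2))
```

The recovered estimates should sit near the true values 0.64 and 0.25; testing whether such components differ across factories or surveys is what the likelihood ratio tests in the paper address.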
A Categorisation of Cloud Computing Business Models
This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government Funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. Using the Jericho Forum’s ‘Cloud Cube Model’ (CCM), the paper presents a summary of the eight business models. We discuss how the CCM fits into each business model, and then based on this discuss each business model’s strengths and weaknesses. We hope adopting an appropriate cloud computing business model will help organisations investing in this technology to stand firm in the economic downturn.
Pores resolving simulation of Darcy flows
A theoretical formulation and corresponding numerical solutions are presented for microscopic fluid flows in porous media with the domain sufficiently large to reproduce integral Darcy scale effects. Pore space geometry and topology influence flow through media, but the difficulty of observing the configurations of real pore spaces limits understanding of their effects. A rigorous direct numerical simulation (DNS) of percolating flows is a formidable task due to intricacies of internal boundaries of the pore space. Representing the grain size distribution by means of repelling body forces in the equations governing fluid motion greatly simplifies computational efforts. An accurate representation of pore-scale geometry requires that within the solid the repelling forces attenuate flow to stagnation in a short time compared to the characteristic time scale of the pore-scale flow. In the computational model this is achieved by adopting an implicit immersed-boundary method with the attenuation time scale smaller than the time step of an explicit fluid model. A series of numerical simulations of the flow through randomly generated media of different porosities show that computational experiments can be equivalent to physical experiments with the added advantage of nearly complete observability. Besides obtaining macroscopic measures of permeability and tortuosity, numerical experiments can shed light on the effect of the pore space structure on bulk properties of Darcy scale flows.
Raspberry Pi based interactive home automation system through E-mail
Home automation is becoming more and more popular day by day due to its numerous advantages. It can be achieved through local networking or by remote control. This paper aims at designing a basic home automation application on the Raspberry Pi by reading the subject of an e-mail; the algorithm has been developed in Python, the default programming environment provided by the Raspberry Pi. Results show the efficient implementation of the proposed algorithm for home automation. LEDs were used to indicate the switching action.
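A minimal sketch of the idea follows, assuming a hypothetical IMAP account and a fixed subject-to-command table. The host, credentials, and command names are illustrative, and the actual switching of the Pi's GPIO pins (e.g. to drive the indicator LEDs) is omitted.

```python
import imaplib
import email
from email.header import decode_header

# hypothetical mapping from e-mail subject to a (device, state) action
COMMANDS = {
    "LIGHT ON": ("light", True),
    "LIGHT OFF": ("light", False),
    "FAN ON": ("fan", True),
    "FAN OFF": ("fan", False),
}

def parse_subject(subject):
    """Map an e-mail subject line to a switching action, or None."""
    return COMMANDS.get(subject.strip().upper())

def latest_command(host, user, password):
    """Fetch the newest unseen message and parse its subject.
    host/user/password are placeholders for a real IMAP account."""
    with imaplib.IMAP4_SSL(host) as box:
        box.login(user, password)
        box.select("INBOX")
        _, ids = box.search(None, "UNSEEN")
        if not ids[0]:
            return None
        _, msg_data = box.fetch(ids[0].split()[-1], "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        subject, enc = decode_header(msg["Subject"])[0]
        if isinstance(subject, bytes):
            subject = subject.decode(enc or "utf-8")
        return parse_subject(subject)

print(parse_subject("Light On"))   # -> ('light', True)
```

On the device, `latest_command(...)` would run in a polling loop, and the returned `(device, state)` pair would drive a GPIO library call to toggle the corresponding pin.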
Hierarchical Peer-To-Peer Systems
Structured peer-to-peer (P2P) lookup services—such as Chord, CAN, Pastry and Tapestry—organize peers into a flat overlay network and offer distributed hash table (DHT) functionality. In these systems, data is associated with keys and each peer is responsible for a subset of the keys. We study hierarchical DHTs, in which peers are organized into groups, and each group has its autonomous intra-group overlay network and lookup service. The groups themselves are organized in a top-level overlay network. To find a peer that is responsible for a key, the top-level overlay first determines the group responsible for the key; the responsible group then uses its intra-group overlay to determine the specific peer that is responsible for the key. After providing a general framework for hierarchical P2P lookup, we consider the specific case of a two-tier hierarchy that uses Chord for the top level. Our analysis shows that by designating the most reliable peers in the groups as superpeers, the hierarchical design can significantly reduce the expected number of hops in Chord. We also propose a scalable design for managing the groups and the superpeers.
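The two-step lookup (top-level overlay names the responsible group, the group's intra-overlay names the responsible peer) can be sketched with plain consistent hashing standing in for Chord at both tiers. Group and peer names are illustrative, and superpeer selection and failure handling are omitted.

```python
import hashlib
from bisect import bisect_right

def h(s):
    """Hash a string onto the identifier circle."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hashing ring standing in for a Chord overlay."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def successor(self, key):
        # the node responsible for a key is the first node clockwise from it
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, h(key)) % len(self.ring)
        return self.ring[idx][1]

class HierarchicalDHT:
    """Two-tier lookup: the top-level ring names the responsible group,
    then that group's intra-group ring names the responsible peer."""
    def __init__(self, groups):                  # {group_name: [peer, ...]}
        self.top = Ring(list(groups))
        self.intra = {g: Ring(ps) for g, ps in groups.items()}

    def lookup(self, key):
        group = self.top.successor(key)          # step 1: find the group
        peer = self.intra[group].successor(key)  # step 2: find the peer
        return group, peer

dht = HierarchicalDHT({"eu": ["eu-a", "eu-b"], "us": ["us-a", "us-b"]})
print(dht.lookup("some-key"))
```

In the paper's design the top-level step would be answered by the group's superpeers, so the expected hop count of the flat Chord ring is replaced by a short top-level route plus a small intra-group lookup.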
Skew detection and correction in document images based on straight-line fitting
During document scanning, skew is inevitably introduced into the incoming document image. Since algorithms for layout analysis and character recognition are generally very sensitive to page skew, skew detection and correction in document images are critical steps before layout analysis. In this paper, a novel skew detection method based on straight-line fitting is proposed, and a concept of eigen-point is introduced. After analyzing the relations between neighboring eigen-points in every text line within a suitable sub-region, the eigen-points most probably lying on the baselines are selected as samples for straight-line fitting. The average of these baseline directions is computed, which corresponds to the skew angle of the whole document image. A fast skew correction method based on the scanning-line model is then also presented. Experiments prove that the proposed approaches are fast and accurate.
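The core of the skew estimate, fitting a straight line to baseline points and rotating by the recovered angle, can be sketched as follows. Eigen-point selection is not reproduced here; the input is assumed to already be the candidate baseline points.

```python
import numpy as np

def skew_angle(baseline_points):
    """Least-squares fit of a straight line to baseline points;
    returns the estimated skew angle in degrees."""
    x, y = np.asarray(baseline_points, dtype=float).T
    slope, _ = np.polyfit(x, y, 1)
    return np.degrees(np.arctan(slope))

def rotate(points, degrees):
    """Correct the skew by rotating points about the origin."""
    t = np.radians(degrees)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return np.asarray(points, dtype=float) @ R.T

# synthetic eigen-points sampled along a baseline skewed by 3 degrees
x = np.linspace(0, 100, 20)
pts = np.column_stack([x, np.tan(np.radians(3.0)) * x])
angle = skew_angle(pts)
print(round(angle, 1))            # -> 3.0
corrected = rotate(pts, -angle)   # baseline becomes horizontal
```

In a full system the per-line fitted directions would be averaged before rotating, and the rotation would be applied to the image raster (the paper's scanning-line model) rather than to point coordinates.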
The Vienna Definition Language
The Vienna Definition Language (VDL) is a programming language for defining programming languages. It allows us to describe precisely the execution of the set of all programs of a programming language. However, the Vienna Definition Language is important not only as one definition technique among many others but as an illustration of a new information-structure-oriented approach to the study of programming languages. The paper may be regarded as a case study in the information structure modeling of programming languages, as well as an introduction to a specific modeling technique. Part 1 includes a brief review and comparison of techniques of language definition, and then introduces the basic data structures and data structure manipulation operators that are used to define programming languages. Part 2 considers the definition of sets of data structures. Part 3 introduces the notation for specifying VDL instructions, and illustrates the definition of a simple language for arithmetic expression evaluation by giving the complete sequence of states (snapshots) that arise during the evaluation of an expression. Part 4 defines a simple block structure language in VDL and considers certain basic design issues for such languages in a more general context. Part 5 introduces an alternative definition of the block structure language defined in Part 4, proves the equivalence of the two definitions, and briefly reviews recent work on proving the equivalence of interpreters. An Appendix considers the definition of VDL in VDL. This paper may be read at a number of different levels. The reader who is interested in a quick introduction to the basic definition techniques of VDL need read only Section 1.3 and Part 3. The reader who is interested in a deeper understanding of block structure languages should emphasize Part 4, while the reader concerned with proofs of equivalence of interpreters will be interested in Part 5.
Credit Assignment in Multiple Goal Embodied Visuomotor Behavior
The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain's abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However this formulation contains a substantial issue when sensorimotor modules are used in combination: The credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
DIEP flap with implant: a further option in optimising breast reconstruction.
Recent advances in breast reconstruction allow for high expectations regarding long-term symmetry and aesthetic appearance. The DIEP flap is currently considered an ideal autologous reconstruction. However, there are situations in which the amount of tissue from a DIEP flap is not enough to achieve adequate symmetry. Indications and outcomes for a combined use of DIEP flap and implants are discussed in order to describe and examine a further scenario in optimising breast reconstruction. Between January 2004 and January 2006, all patients who underwent combined DIEP/implant breast reconstruction were enrolled and followed prospectively. When clinical assessment demonstrated an inadequate amount of tissue in the abdominal region to achieve a suitable unilateral or bilateral reconstruction with DIEP flaps, the patients were counselled about the opportunity of primary augmentation of the DIEP flaps. In cases where DIEP breast reconstruction had been done previously and there was a considerable asymmetry, delayed flap augmentation was considered. Patients' age, indication for surgery, preoperative and postoperative radiotherapy (RT), operative procedure, implant size, location and timing of insertion, complications, outcomes, and follow-up have been gathered. In all cases, textured round silicone gel implants have been used. After 12 months, four-point scales were used to analyse patients' satisfaction and aesthetic outcome. During the study period, 156 patients underwent breast reconstruction with 174 DIEP flaps. Fourteen patients (8.9%) had breast reconstruction with 19 DIEP flaps and 18 implants. The mean follow-up was 20.6 months (range 12-32 months). Fourteen implants were placed primarily at the time of DIEP reconstruction. The average implant weight was 167.2 g, with a range between 100 and 230 g. The implant/flap weight ratio was about 1:5, corresponding to 20%.
In six flaps, the patients had RT before the reconstruction, whilst in three cases of delayed DIEP flap augmentation the patients had RT after the DIEP post-mastectomy reconstruction. One infection and one haematoma, both followed by partial flap necrosis, occurred. At 12 months following the completion of reconstruction, aesthetic scores were all between good and excellent. Surgical indications and outcomes available from this series demonstrate that primary and delayed DIEP/implant augmentation can be a safe and effective option in optimising breast reconstruction with autologous tissue.
Analyzing Short-Term Noise Dependencies of Spike-Counts in Macaque Prefrontal Cortex Using Copulas and the Flashlight Transformation
Simultaneous spike-counts of neural populations are typically modeled by a Gaussian distribution. On short time scales, however, this distribution is too restrictive to describe and analyze multivariate distributions of discrete spike-counts. We present an alternative that is based on copulas and can account for arbitrary marginal distributions, including Poisson and negative binomial distributions as well as second and higher-order interactions. We describe maximum likelihood-based procedures for fitting copula-based models to spike-count data, and we derive a so-called flashlight transformation which makes it possible to move the tail dependence of an arbitrary copula into an arbitrary orthant of the multivariate probability distribution. Mixtures of copulas that combine different dependence structures and thereby model different driving processes simultaneously are also introduced. First, we apply copula-based models to populations of integrate-and-fire neurons receiving partially correlated input and show that the best fitting copulas provide information about the functional connectivity of coupled neurons which can be extracted using the flashlight transformation. We then apply the new method to data which were recorded from macaque prefrontal cortex using a multi-tetrode array. We find that copula-based distributions with negative binomial marginals provide an appropriate stochastic model for the multivariate spike-count distributions rather than the multivariate Poisson latent variables distribution and the often used multivariate normal distribution. The dependence structure of these distributions provides evidence for common inhibitory input to all recorded stimulus encoding neurons. Finally, we show that copula-based models can be successfully used to evaluate neural codes, e.g., to characterize stimulus-dependent spike-count distributions with information measures. 
This demonstrates that copula-based models are not only a versatile class of models for multivariate distributions of spike-counts, but that those models can be exploited to understand functional dependencies.
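The copula construction, coupling count marginals through a separately chosen dependence structure, can be sketched with a Gaussian copula and negative binomial marginals. This is a simpler dependence structure than the flashlight-transformed copulas and mixtures discussed in the paper, and the parameters below are illustrative.

```python
import numpy as np
from scipy.stats import norm, nbinom

def gaussian_copula_counts(rho, r, p, size, seed=0):
    """Sample bivariate spike-counts with negative-binomial marginals
    coupled through a Gaussian copula with latent correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size)
    u = norm.cdf(z)                           # uniform marginals: the copula
    return nbinom.ppf(u, r, p).astype(int)    # impose NB(r, p) marginals

counts = gaussian_copula_counts(rho=0.7, r=5, p=0.5, size=20000)
corr = float(np.corrcoef(counts.T)[0, 1])
print(round(corr, 2))
```

The observed count correlation is somewhat attenuated relative to the latent `rho` because the discrete inverse-CDF step is applied to each margin; fitting proceeds in the other direction, by maximizing the copula likelihood of observed count pairs.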
FaceTube: predicting personality from facial expressions of emotion in online conversational video
The advances in automatic facial expression recognition make it possible to mine and characterize large amounts of data, opening a wide research domain on behavioral understanding. In this paper, we leverage a state-of-the-art facial expression recognition technology to characterize users of a popular type of online social video, conversational vlogs. First, we propose the use of several activity cues to characterize vloggers based on frame-by-frame estimates of facial expressions of emotion. Then, we present results for the task of automatically predicting vloggers' personality impressions using facial expressions and the Big-Five traits. Our results are promising, especially for the case of the Extraversion impression, and in addition our work poses interesting questions regarding the representation of multiple natural facial expressions occurring in conversational video.
A TorPath to TorCoin: Proof-of-Bandwidth Altcoins for Compensating Relays
The Tor network relies on volunteer relay operators for relay bandwidth, which may limit its growth and scaling potential. We propose an incentive scheme for Tor relying on two novel concepts. We introduce TorCoin, an “altcoin” that uses the Bitcoin protocol to reward relays for contributing bandwidth. Relays “mine” TorCoins, then sell them for cash on any existing altcoin exchange. To verify that a given TorCoin represents actual bandwidth transferred, we introduce TorPath, a decentralized protocol for forming Tor circuits such that each circuit is privately-addressable but publicly verifiable. Each circuit’s participants may then collectively mine a limited number of TorCoins, in proportion to the end-to-end transmission goodput they measure on that circuit.
LIMSI@WMT'16: Machine Translation of News
This paper describes LIMSI’s submissions to the shared WMT’16 task “Translation of News”. We report results for Romanian-English in both directions, for English to Russian, as well as preliminary experiments on reordering to translate from English into German. Our submissions use mainly NCODE and MOSES along with continuous space models in a post-processing step. The main novelties of this year’s participation are the following: for the translation into Russian and Romanian, we have attempted to extend the output of the decoder with morphological variations and to use a CRF model to rescore this new search space; as for the translation into German, we have been experimenting with source-side pre-ordering based on a dependency structure allowing permutations in order to reproduce the target word order.
Intentions in and relations among design drawings
We study the intentions in, and the relations among, drawings made in designing, with the goal of building computational environments that better support designing than those in current use. By diagram we mean a drawing that uses geometric elements to abstractly represent natural and artificial phenomena such as sound and light; building components such as walls and windows; and human behavior such as sight and circulation, as well as territorial boundaries of spaces. In contrast, a sketch is mainly about spatial arrangements of physical elements. Despite these general differences, we do not draw clear-cut distinctions between diagrams and sketches, as a particular drawing may combine the two representations. We describe here two distinct studies of architectural design drawings.
Caffeine administration at night during extended wakefulness effectively mitigates performance impairment but not subjective assessments of fatigue and sleepiness
The current study investigated the effects of repeated caffeine administration on performance and subjective reports of sleepiness and fatigue during 50 h of extended wakefulness. Twenty-four non-smokers aged 22.5 ± 2.9 y (mean ± SD) remained awake for two nights (50 h) in a controlled laboratory environment. During this period, 200 mg of caffeine or placebo gum was administered at 01:00, 03:00, 05:00 and 07:00 on both nights (total of 800 mg/night). Neurobehavioral performance and subjective reports were assessed throughout the wake period. Caffeine improved performance compared to placebo but did not affect overall ratings of subjective sleepiness and fatigue. Performance and sleepiness worsened with increasing time awake in both conditions. However, caffeine slowed performance impairments such that after 50 h of wakefulness, performance was better following caffeine administration than placebo. Caffeine also slowed the increase in subjective sleepiness and performance ratings, but only during the first night of wakefulness. After two nights of sleep deprivation, there was no difference in sleepiness ratings between the two conditions. These results demonstrate that strategic administration of caffeine effectively mitigates the performance impairments associated with 50 h of wakefulness but does not improve overall subjective assessments of sleepiness, fatigue and performance. The results indicate that while performance impairment is alleviated, individuals may continue to report feelings of sleepiness. Individuals who use caffeine as a countermeasure in sustained operations may feel as though caffeine is not effective despite impairments in objective performance being largely mitigated.
Double Synchronous Reference Frame PLL for Power Converters Control
This paper deals with one of the most important issues in the control of grid-connected converters: the detection of the positive-sequence fundamental component of the utility voltage. The study carried out in this paper leads to a fast, precise, and robust positive-sequence voltage detector offering good behavior even under unbalanced and distorted grid conditions. The proposed detector utilizes a new "double synchronous reference frame PLL" (DSRF-PLL), which completely eliminates the errors that conventional synchronous reference frame PLL systems (SRF-PLL) exhibit when operating under unbalanced utility voltages. In the study performed in this paper, the positive- and negative-sequence components of the unbalanced voltage vector are properly characterized. When this unbalanced vector is expressed on the DSRF, analysis of the signals on the DSRF axes permits the design of a decoupling network that isolates the positive- and negative-sequence components. This decoupling network gives rise to a new PLL structure that detects the positive-sequence voltage component quickly and accurately. The conclusions of the analytical study are verified by simulation and experiment.
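The positive/negative-sequence separation that the DSRF-PLL performs in the time domain has a classical phasor counterpart: the Fortescue (symmetrical components) decomposition. The following minimal sketch is our own illustration, not taken from the paper; it shows why a balanced three-phase set contains no negative-sequence component:

```python
import cmath

# Fortescue decomposition of a three-phase phasor set into positive- and
# negative-sequence components (a standard textbook result; the paper's
# DSRF-PLL performs the analogous separation in the time domain).
a = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator

def sequence_components(va, vb, vc):
    v_pos = (va + a * vb + a**2 * vc) / 3
    v_neg = (va + a**2 * vb + a * vc) / 3
    return v_pos, v_neg

# Balanced set (phasors at 0, -120, +120 degrees): negative sequence vanishes
va, vb, vc = 1.0, a**2, a
v_pos, v_neg = sequence_components(va, vb, vc)
print(abs(v_pos), abs(v_neg))  # ≈ 1.0 and ≈ 0.0
```

Feeding in an unbalanced set instead yields a nonzero negative-sequence term, which is exactly the disturbance the DSRF-PLL's decoupling network must cancel.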
Leadership Style and Staff Retention in Organisations
Organisations in all sectors operate in a highly competitive environment that requires them to retain their core employees in order to gain and sustain competitive advantage. Because of globalisation and new methods of management, organisations have experienced competition both locally and globally in terms of market and staff. The role of leaders in employee retention is critical, since their leadership styles impact directly on how employees feel about the organisation. This paper sought to establish the influence of leadership style on staff retention in organisations. The study was purely based on literature review. From the review of several empirical studies it was established that leadership style significantly influences staff intention to leave, and hence there is a need to embrace a leadership style that promotes staff retention.
Status of selected physicochemical properties of soils under different land use systems of Western Oromia, Ethiopia
Land use change, particularly from natural ecosystems to agricultural land in general and to crop cultivation under poor management practices in particular, is among the major causes of declining soil fertility, followed by land degradation and low agricultural productivity. Obtaining scientific information on this is vital for planning management strategies; this study assessed the effects of land use on the soil physicochemical properties describing soil fertility under three land use types (natural forest, grazing and cultivated land) in Guto Gida District of Oromia Region, Western Ethiopia. The natural forest land was used as a control to assess the status of soil properties resulting from the shift of natural forest to other land uses. Disturbed and undisturbed surface soil samples (0-20 cm) were collected from each land use type and analyzed for their physicochemical properties. The study revealed differences among the land use types in soil water content, pH, cation exchange capacity (CEC), organic carbon (OC), total nitrogen (TN), available phosphorus (P), and exchangeable bases. Correlation analysis also showed a positive but non-significant relationship of soil pH with exchangeable Mg2+ and Ca2+ ions, but a significant one with extractable Fe3+ and Mn2+ ions and percent Al saturation (PAS) of the soils among the land uses. Relative to forest land, while the percent OC contents in cultivated and grazing lands were depleted by 54.62 and 49.89%, respectively, the percent Al saturation increased by 65.62% in cultivated land and by 28.57% in grazing land. Land use changes also caused a decline in CEC, PBS and exchangeable bases, and increased BD and clay content, exhibiting poor soil physical conditions and deterioration of soil fertility. *Corresponding Author: Achalu Chimdi [email protected] Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 2, No. 3, p. 57-71, 2012 http://www.innspub.net
Introduction The management of tropical soils is crucial to address the issues of food security, soil degradation and environmental quality, including the global carbon cycle. Sustaining soil and environmental quality is the most effective method of ensuring a sufficient food supply to support life (Soares et al., 2005). To this end, soil scientists developed the concept of soil quality to describe the fitness of soils to perform particular ecosystem functions and management. Maintaining soil quality mainly depends on knowledge of the physicochemical properties of a given soil. The sustainability of agricultural ecosystems also depends to a great extent on the maintenance of soil physicochemical properties; its feasibility is based on knowledge of the effects of management practices on soil properties and how they affect soil-crop-water relations (Heluf and Wakene, 2006). Therefore, characterization and/or evaluation of soil properties is a master key for describing and understanding the status and quality of the major nutrients in soils (Geissen et al., 2009). Assessments of soil physicochemical properties are used to understand the potential nutrient status of soils under different land uses (Wondowosen and Sheleme, 2011). This knowledge can ascertain whether specified land use types are useful for a given production system and can be used to meet plants' requirements for rapid growth and better crop production (Shishir and Sah, 2003). Over the past several decades, the conversion of native forest to agricultural land use has accelerated and featured in the development of Ethiopian landscapes, and has apparently contributed to the widespread occurrence of degraded land across most parts of the country. However, different land use practices have a varied impact on soil degradation and on both the physical and chemical properties of soil.
A study by Wakene and Heluf (2003) examined the impacts of different land uses on soil quality and indicated that the rate of soil quality degradation depends on land use systems, soil types, topography, and climatic conditions. Assessing land-use-induced changes in soil properties is essential for addressing the issues of agro-ecosystem transformation and sustainable land productivity. The selection of suitable indicators with well-established ecological functions and high sensitivity to disturbance is of paramount importance. In this regard, soil organic carbon (SOC) is important for the overall carbon reservoir of the biosphere, plays a preponderant role in the global biogeochemical cycles of major nutrients, and has been used extensively to monitor land-cover and land-use change patterns (Koutika et al., 2002; Sisti et al., 2004). Because they are sensitive to land-use changes, soil organic C and N have been proposed by some workers as indicators for assessing the effect of land use management (Alvarez and Alvarez, 2000). Thus, SOC is a key source of soil nutrients for plant growth and soil structural stability, as well as for carbon stock levels (Sisti et al., 2004). However, its dynamics and composition are influenced by land-use changes and by agricultural and management practices (Stevenson, 1994; Barthes et al., 1999). Up-to-date knowledge of the status of soil physical and chemical properties under different land use systems plays a vital role in enhancing production and productivity of the agricultural sector on a sustainable basis. However, practically oriented basic information on the status and management of soil physicochemical properties, as well as their effect on soil quality, needed to give recommendations for optimal and sustainable utilization of land resources, remains poorly understood.
Therefore, this study was conducted with the specific objectives of assessing and exploring the status of soil physicochemical characteristics under three different land use systems in a representative area of the Western Oromia Region. The results of this study are expected to add value to the up-to-date scientific documentation of the status of soil fertility and soil quality of different land uses in the study area and in similar agro-ecological environments in the country. Materials and methods The study area Geographically, Guto Gida District is located in the Oromia Regional State, in the western highlands of Ethiopia (Fig. 1), lying between 08°59′ and 09°06′ N latitude and 37°51′ and 37°09′ E longitude at an altitude of 1650 meters above sea level (masl). The study site is situated at a road distance of 310 km from the capital, Addis Ababa. According to the Ethiopian agro-climatic zonation (MOA, 1998), the study area falls in the highland (Baddaa) and mid-altitude (Badda Darree) zones. Ten years (1996-2007) of climatic data from Nekemte Meteorological Station record an average annual rainfall of 1780 mm with a unimodal rainfall pattern; the annual mean minimum and maximum monthly temperatures lie between 13.75 and 27.65 °C (Fig. 2). According to the FAO (1990) classification, the soil class of the study area is Nitosols, and the area is topographically characterized by mountainous and gently sloping landscape. Subsistence agriculture is the main livelihood of the community, and a crop-livestock mixed farming system is predominant. The major crops commonly grown in the study area are coffee (Coffea arabica), teff (Eragrostis tef), barley (Hordeum vulgare), maize (Zea mays), potato (Solanum tuberosum) and hot pepper (Capsicum frutescens). Crop production is based on rainfed agriculture, and crops are usually harvested once a year.
Data source and analysis of soil samples In order to obtain general information about the landforms, land uses, topography and vegetation cover, a preliminary survey and field observation using the topographic map (1:50,000) of the study area were carried out during 2010. Accordingly, three major representative land uses (natural forest, grazing and cultivated lands) were selected based on their history and occurrence. The natural vegetation of the study area is characterized by indigenous natural forest and canopies, whereas rainfed crop cultivation bounded by scattered settlements on the cultivated land, and communal and private grazing land, were the characteristic features of the other land use types. Composite top soil (0-20 cm) samples from representative sites of each land use, in three replicates, were collected, air dried, ground and passed through a 2 mm sieve for analysis. Analysis of the soil samples was carried out at the chemistry laboratory of Ambo University and the Holleta Soil Research Laboratory Center following their standard laboratory procedures. Soil particle size distribution was analyzed by the Bouyoucos hydrometer method as described by Day (1965). Soil water holding capacity (WHC) values were measured at -1/3 bar for field capacity (FC) and -15 bar for permanent wilting point (PWP) using the pressure plate apparatus method (Klute and Dirksen, 1986). Specific surface area was determined using the ethylene glycol equilibrium method as described by Dipark and Sarkar (2003). Soil bulk density (Db) was measured from undisturbed soil samples collected with a core sampler; the pre-weighed soil cores were weighed at field moisture and then dried to constant weight in an oven at 105 °C, as per the procedures described by (1965). Particle density (Dp) was determined by the pycnometer method (Devis and Freitans, 1984).
Total porosity was estimated from the bulk and particle densities as: Total porosity (%) = (1 - Db/Dp) x 100, where Db is the bulk density (g cm-3) and Dp is the particle density (g cm-3). The pH of the soil was measured potentiometrically with a digital pH meter in the supernatant suspension at a 1:2.5 soil:liquid ratio (Baruah and Barthakur, 1997). Organic carbon (OC) content was determined by the dichromate oxidation method (Walkley and Black, 1934). Total N was determined using the micro-Kjeldahl digestion, disti
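The porosity relation above can be checked with a short worked example; the density values below are illustrative, not taken from the study:

```python
def total_porosity(bulk_density, particle_density):
    """Total porosity (%) = (1 - Db/Dp) * 100."""
    return (1.0 - bulk_density / particle_density) * 100.0

# Illustrative values: Db = 1.3 g/cm^3, Dp = 2.65 g/cm^3 (a common
# assumption for mineral soils)
print(round(total_porosity(1.3, 2.65), 1))  # → 50.9
```

A higher bulk density at the same particle density lowers the porosity, which is why the compaction reported for cultivated land implies poorer soil physical conditions.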
Knowledge management infrastructure and processes on effectiveness of nursing care
The purpose of this study is to investigate the influence of knowledge management infrastructure and process on nursing care effectiveness in selected teaching hospitals in Southwest Nigeria. The organization of nursing knowledge resources is critical to health care organizations for providing safe and high quality care for patients. Therefore, it is necessary to study the role of knowledge management in clinical nursing practice and the outcomes on the effectiveness of nursing care. Despite the critical role of nursing care in defining high-performing health care delivery, knowledge management in this area is still at an early stage of development. Based on organizational capability theory and employing concurrent mixed-method research design, this study seeks to identify the components of knowledge infrastructure and processes that influence the efficiency and effectiveness of nursing care in clinical floor nursing of selected teaching hospitals in Nigeria, a developing country in Africa. The output from the research is expected to contribute to the domain of literature/knowledge; provide awareness of the knowledge management practices in nursing care in Nigeria; assist nursing administrators, health policy makers and hospital management to exploit and make effective use of knowledge-based resources to enhance nursing care which can improve the productivity of health care organizations.
User-Centered Design: An Integrated Approach
This book results from a decade of presenting the user-centered design (UCD) methodology for hundreds of companies (p. xxiii) and appears to be the book complement to the professional development short course. Its purpose is to encourage software developers to focus on the total user experience of software products during the whole of the development cycle. The notion of the “total user experience” is valuable because it focuses attention on the whole product-use cycle, from initial awareness through productive use.
Red Blood Cell Distribution Width is Independently Correlated With Diurnal QTc Variation in Patients With Coronary Heart Disease
To investigate the relationship between red blood cell distribution width (RDW) and diurnal corrected QT (QTc) variation in patients with coronary heart disease. This retrospective study included 203 patients who underwent coronary angiography between February 2013 and June 2014. RDW values and dynamic electrocardiography (Holter) results were collected to investigate the relationship between RDW and diurnal QTc variation. Patients were separated into three groups (A, B, and C) by binning their RDW values in an ascending order. RDW values, coronary artery scores and diurnal QTc variations were significantly different among these groups (P < 0.05). While coronary artery scores gradually rose with increased RDW, diurnal QTc variation decreased. Pearson's correlation analysis was applied to control for confounding factors, and multiple correlation analysis showed that coronary artery score was positively correlated with RDW (r = 0.130, P = 0.020), while it was not correlated with the diurnal QTc variation (r = -0.226, P = 0.681). RDW was negatively correlated with diurnal QTc variation (r = -0.197, P = 0.035). RDW is independently associated with diurnal QTc variation in patients with coronary heart disease.
Sky Detection in Hazy Image
Sky detection plays an essential role in various computer vision applications. Most existing sky detection approaches, being trained on ideal datasets, may lose efficacy when facing unfavorable conditions such as the effects of weather and lighting. In this paper, a novel algorithm for sky detection in hazy images is proposed from the perspective of probing the density of haze. We address the problem with an image segmentation and a region-level classification. To characterize the sky of hazy scenes, we introduce several haze-relevant features that reflect the perceptual haze density and the scene depth. Based on these features, the sky is separated by two imbalanced SVM classifiers and a similarity measurement. Moreover, a sky dataset (named HazySky) with 500 annotated hazy images is built for model training and performance evaluation. To evaluate the performance of our method, we conducted extensive experiments on both our HazySky dataset and the SkyFinder dataset. The results demonstrate that our method performs better in detection accuracy than previous methods, not only in hazy scenes but also under other weather conditions.
Question Answering in Webclopedia
To create the QA Typology, we analyzed 17,384 questions and their answers (downloaded from answers.com); see (Gerber, 2001). The Typology contains 94 nodes, of which 47 are leaf nodes; a section of it appears in Figure 2, with node names including SHAPE, ADJECTIVE, COLOR, DISEASE, DEFINITION, USE, EXPRESSION-ORIGIN, HISTORY, WHY-FAMOUS, BIO, CAUSE-EFFECT, METHOD-MEANS, REASON, EVALUATION, PRO-CON, CONTRAST, RATING and COUNSEL-ADVICE. Each Typology node has been annotated with examples and typical patterns of expression of both Question and Answer, as indicated in Figure 3 for Proper-Person. [Figure 3 pairs question examples with question templates, e.g. "Who was Johnny Mathis' high school track coach?" with "who be <entity>'s <role>" and "Who is the CEO of General Electric?" with "who be <role> of <entity>"; and pairs actual answers with answer templates, e.g. "Lou Vasquez, track coach of ... Johnny Mathis" with "<person>, <role> of <entity>" and "Mr. Jack Welch, GE chairman" with "<role-title> <person>".]
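Templates like "who be <entity>'s <role>" lend themselves to simple surface pattern matching. The sketch below is our own illustration using a regular expression, not Webclopedia's actual implementation:

```python
import re

# Hypothetical regex rendering of the "who be <entity>'s <role>" template;
# the group names are ours, not part of Webclopedia.
TEMPLATE = re.compile(r"Who (?:was|is) (?P<entity>.+?)'s (?P<role>.+?)\?")

m = TEMPLATE.match("Who was Johnny Mathis's high school track coach?")
print(m.group("entity"), "|", m.group("role"))
# Johnny Mathis | high school track coach
```

In a full system, each Typology node would carry a family of such patterns for both the question side and the answer side, so that candidate answers can be matched against the expected answer templates.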
A high efficiency bridgeless flyback PFC converter for adapter application
This paper proposes a high-efficiency, low-cost AC/DC converter for adapter applications. In order to achieve high efficiency and low cost for an adapter with universal AC input, a single-stage bridgeless flyback PFC converter with a peak-current clamping technique is proposed. Compared with a conventional flyback PFC converter, the conduction loss is reduced thanks to the bridgeless structure, and the transformer size can also be significantly reduced due to the lower peak current, which results in lower cost and higher power density. Detailed operation principles and design considerations are illustrated. Experimental results from a 90 W prototype with universal input and 20 V/4.5 A output are presented to verify the operation and performance of the proposed converter. The minimum efficiency at full load is above 91% over the entire input range.
Psychopathy and the DSM-IV criteria for antisocial personality disorder.
The Axis II Work Group of the Task Force on DSM-IV has expressed concern that antisocial personality disorder (APD) criteria are too long and cumbersome and that they focus on antisocial behaviors rather than personality traits central to traditional conceptions of psychopathy and to international criteria. We describe an alternative to the approach taken in the rev. 3rd ed. of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R; American Psychiatric Association, 1987), namely, the revised Psychopathy Checklist. We also discuss the multisite APD field trials designed to evaluate and compare four criteria sets: the DSM-III-R criteria, a shortened list of these criteria, the criteria for dyssocial personality disorder from the 10th ed. of the International Classification of Diseases (World Health Organization, 1990), and a 10-item criteria set for psychopathic personality disorder derived from the revised Psychopathy Checklist.
Using household survey data to inform policy decisions regarding the delivery of evidence-based parenting interventions.
BACKGROUND This study used household survey data on the prevalence of child, parent and family variables to establish potential targets for a population-level intervention to strengthen parenting skills in the community. The goals of the intervention include decreasing child conduct problems, increasing parental self-efficacy, use of positive parenting strategies, decreasing coercive parenting and increasing help-seeking, social support and participation in positive parenting programmes. METHODS A total of 4010 parents with a child under the age of 12 years completed a statewide telephone survey on parenting. RESULTS One in three parents reported that their child had a behavioural or emotional problem in the previous 6 months. Furthermore, 9% of children aged 2-12 years meet criteria for oppositional defiant disorder. Parents who reported their child's behaviour to be difficult were more likely to perceive parenting as a negative experience (i.e. demanding, stressful and depressing). Parents with greatest difficulties were mothers without partners and who had low levels of confidence in their parenting roles. About 20% of parents reported being stressed and 5% reported being depressed in the 2 weeks prior to the survey. Parents with personal adjustment problems had lower levels of parenting confidence and their child was more difficult to manage. Only one in four parents had participated in a parent education programme. CONCLUSIONS Implications for the setting of population-level goals and targets for strengthening parenting skills are discussed.
Gradient-based adaptive interpolation in super-resolution image restoration
This paper presents a super-resolution method based on gradient-based adaptive interpolation. In this method, in addition to the distance between the interpolated pixel and each neighboring valid pixel, the interpolation coefficients take the local gradient of the original image into account: the smaller the local gradient at a pixel, the more influence it has on the interpolated pixel. The interpolated high-resolution image is finally deblurred by applying a Wiener filter. Experimental results show that the proposed method not only substantially improves the subjective and objective quality of restored images, especially by enhancing edges, but is also robust to registration error and has low computational complexity.
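As a rough illustration of the idea, the sketch below (our own simplification, not the paper's exact coefficient formula) weights each neighbor by inverse distance and inverse local gradient magnitude, so pixels in flat regions dominate the interpolated value while edge pixels contribute less:

```python
import numpy as np

def adaptive_weights(distances, gradients, eps=1e-6):
    # Weight falls with both distance and local gradient magnitude;
    # eps guards against division by zero. This formula is illustrative only.
    w = 1.0 / (distances * (gradients + eps) + eps)
    return w / w.sum()

def interpolate(values, distances, gradients):
    return float(np.dot(adaptive_weights(distances, gradients), values))

vals = np.array([10.0, 200.0])        # neighbor intensities
dist = np.array([1.0, 1.0])           # equal distances to the new pixel
grad = np.array([0.1, 5.0])           # second neighbor lies on an edge
print(interpolate(vals, dist, grad))  # much closer to 10 than to 200
```

With equal weights the result would be the plain average (105); down-weighting the edge pixel keeps the interpolated value near the flat-region intensity, which is how the scheme avoids blurring edges.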
Untangling attention bias modification from emotion: A double-blind randomized experiment with individuals with social anxiety disorder.
BACKGROUND Uncertainty abounds regarding the putative mechanisms of attention bias modification (ABM). Although early studies showed that ABM reduced anxiety proneness more than control procedures lacking a contingency between cues and probes, recent work suggests that the latter performed just as well as the former did. In this experiment, we investigated a non-emotional mechanism that may play a role in ABM. METHODS We randomly assigned 62 individuals with a DSM-IV diagnosis of social anxiety disorder to a single-session of a non-emotional contingency training, non-emotional no-contingency training, or control condition controlling for potential practice effects. Working memory capacity and anxiety reactivity to a speech challenge were assessed before and after training. RESULTS Consistent with the hypothesis of a practice effect, the three groups likewise reported indistinguishably significant improvement in self-report and behavioral measures of speech anxiety as well as in working memory. Repeating the speech task twice may have had anxiolytic benefits. LIMITATIONS The temporal separation between baseline and post-training assessment as well as the scope of the training sessions could be extended. CONCLUSIONS The current findings are at odds with the hypothesis that the presence of visuospatial contingency between non-emotional cues and probes produces anxiolytic benefits. They also show the importance of including a credible additional condition controlling for practice effects.
Semantic Web Technologies for Business Intelligence
This chapter describes the convergence of two of the most influential technologies in the last decade, namely business intelligence (BI) and the Semantic Web (SW). Business intelligence is used by almost any enterprise to derive important business-critical knowledge from both internal and (increasingly) external data. When using external data, most often found on the Web, the most important issue is knowing the precise semantics of the data. Without this, the results cannot be trusted. Here, Semantic Web technologies come to the rescue, as they allow semantics ranging from very simple to very complex to be specified for any web-available resource. SW technologies do not only support capturing the “passive” semantics, but also support active inference and reasoning on the data. The chapter first presents a motivating running example, followed by an introduction to the relevant SW foundation concepts. The chapter then goes on to survey the use of SW technologies for data integration, including semantic
Mammalian hibernation: cellular and molecular responses to depressed metabolism and low temperature.
Mammalian hibernators undergo a remarkable phenotypic switch that involves profound changes in physiology, morphology, and behavior in response to periods of unfavorable environmental conditions. The ability to hibernate is found throughout the class Mammalia and appears to involve differential expression of genes common to all mammals, rather than the induction of novel gene products unique to the hibernating state. The hibernation season is characterized by extended bouts of torpor, during which minimal body temperature (Tb) can fall as low as -2.9 degrees C and metabolism can be reduced to 1% of euthermic rates. Many global biochemical and physiological processes exploit low temperatures to lower reaction rates but retain the ability to resume full activity upon rewarming. Other critical functions must continue at physiologically relevant levels during torpor and be precisely regulated even at Tb values near 0 degrees C. Research using new tools of molecular and cellular biology is beginning to reveal how hibernators survive repeated cycles of torpor and arousal during the hibernation season. Comprehensive approaches that exploit advances in genomic and proteomic technologies are needed to further define the differentially expressed genes that distinguish the summer euthermic from winter hibernating states. Detailed understanding of hibernation from the molecular to organismal levels should enable the translation of this information to the development of a variety of hypothermic and hypometabolic strategies to improve outcomes for human and animal health.
Building and sustaining profitable customer loyalty for the 21 st century
The concept of customer loyalty is conspicuous by its ubiquity. It is therefore no surprise that it is one of the most widely studied areas among researchers and one of the most widely implemented marketing initiatives among practitioners. This article draws upon past research to review important findings related to customer behavior and attitude in the context of customer loyalty. Further, research linking loyalty to profitability and to forward-looking metrics such as customer lifetime value is reviewed to propose a conceptual framework for building and sustaining loyalty and profitability simultaneously at the individual customer level. A two-tiered rewards structure is presented as a means for marketers to operationalize the framework. The conceptual framework is intended to serve as a platform for understanding the evolving dominant logic of loyalty programs for building and sustaining loyalty in the twenty-first century, as well as to induce further research in that direction.
Anticipation and next action forecasting in video: an end-to-end model with memory
Action anticipation and forecasting in videos do not require a hat-trick, as far as there are signs in the context to foresee how actions are going to be deployed. Capturing these signs is hard because the context includes the past. We propose an end-to-end network for action anticipation and forecasting with memory, to both anticipate the current action and foresee the next one. Experiments on action sequence datasets show excellent results indicating that training on histories with a dynamic memory can significantly improve forecasting performance.
Investigating the Non-Linear Relationships in the Expectancy Theory: The Case of Crowdsourcing Marketplace
The crowdsourcing marketplace, a new platform through which companies or individuals source ideas or work from the public, has become popular in the contemporary world. A key issue for the sustainability of this type of marketplace is the effort that problem solvers expend on online tasks. However, the predictors of effort investment in the crowdsourcing context are rarely investigated. In this study, based on the expectancy theory, which suggests roles for reward valence, trust and self-efficacy, we develop a research model of the factors influencing effort. Further, a non-linear relationship between self-efficacy and effort is proposed. Based on a field survey, we found that: (1) reward valence and trust positively influence effort; (2) when task complexity is high, there is a convex relationship between self-efficacy and effort; and (3) when task complexity is low, there is a concave relationship between self-efficacy and effort. Theoretical and practical implications are also discussed.
An inexpensive Arduino-based LED stimulator system for vision research
Light emitting diodes (LEDs) are increasingly used as light sources in life sciences applications such as vision research, fluorescence microscopy and brain-computer interfacing. Here we present an inexpensive but effective visual stimulator based on LEDs and the open-source Arduino microcontroller prototyping platform. The main design goal of our system was to use off-the-shelf and open-source components as much as possible, and to reduce design complexity so that the system can be used by end-users without advanced electronics skills. The core of the system is a USB-connected Arduino microcontroller platform, originally designed with an emphasis on ease of use for creating interactive physical computing environments. The pulse-width modulation (PWM) output of the Arduino was used to drive the LEDs, allowing linear control of light intensity. The visual stimulator was demonstrated in applications such as murine pupillometry, rodent models for cognitive research, and heterochromatic flicker photometry in human psychophysics. These examples illustrate some of the possible applications that can be easily implemented and that are advantageous for students, educational purposes and universities with limited resources. The LED stimulator system was developed as an open-source project. The software interface was developed in Python, with simplified examples provided for Matlab and LabVIEW. Source code and hardware information are distributed under the GNU General Public Licence (GPL, version 3).
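As a rough illustration of the linear PWM intensity control described above, the sketch below maps a relative intensity in [0, 1] to a duty-cycle value and sends it to the microcontroller. The one-byte serial protocol, the port object and the function names are illustrative assumptions for this sketch, not the interface of the published system.

```python
# Host-side sketch of LED intensity control for a serial-connected Arduino.
# The one-byte duty-cycle protocol is a hypothetical example, not the
# protocol used by the published stimulator.

def intensity_to_duty(intensity: float, resolution_bits: int = 8) -> int:
    """Map a relative intensity in [0, 1] to a PWM duty-cycle value.

    Because PWM modulates the fraction of time the LED is on, mean light
    output scales (approximately) linearly with the duty cycle.
    """
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    max_duty = (1 << resolution_bits) - 1  # 255 for Arduino's 8-bit analogWrite
    return round(intensity * max_duty)

def set_led(port, intensity: float) -> None:
    """Send one duty-cycle byte to the microcontroller (hypothetical protocol)."""
    port.write(bytes([intensity_to_duty(intensity)]))
```

On the microcontroller side, the received byte would simply be passed to `analogWrite(pin, duty)`, which is what makes the 0–255 duty range the natural unit here.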
Maximizing Cloud Providers' Revenues via Energy Aware Allocation Policies
Cloud providers, like Amazon, offer their data centers' computational and storage capacities for lease to paying customers. The high electricity consumption associated with running a data center not only adds to its carbon footprint but also increases the cost of running the data center itself. This paper addresses the problem of maximizing the revenues of Cloud providers by trimming down their electricity costs. As a solution, allocation policies based on dynamically powering servers on and off are introduced and evaluated. The policies aim to satisfy the conflicting goals of maximizing the users' experience while minimizing the amount of electricity consumed. The results of numerical experiments and simulations are described, showing that the proposed scheme performs well under different traffic conditions.
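As a toy illustration of the kind of on/off allocation policy described above (not the policies evaluated in the paper), the sketch below sizes the active server pool with simple hysteresis thresholds: a server is powered up when utilization is high, and powered down only when utilization falls well below that, to avoid thrashing. All thresholds and names are assumptions for this sketch.

```python
def plan_servers(demand, capacity, up=0.8, down=0.3):
    """Toy on/off allocation policy with hysteresis.

    demand   -- sequence of load values, one per time step
    capacity -- load one active server can handle
    up/down  -- utilization thresholds for powering servers on/off
    Returns the number of active servers at each step.
    """
    active = 1
    plan = []
    for d in demand:
        util = d / (active * capacity)
        if util > up:
            active += 1          # power a server on under high utilization
        elif util < down and active > 1:
            active -= 1          # power a server off under low utilization
        plan.append(active)
    return plan
```

The gap between `up` and `down` is the hysteresis band: without it, a load hovering near a single threshold would toggle servers on and off every step, wasting the energy the policy is meant to save.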
Preferences, Homophily, and Social Learning
We study a sequential model of Bayesian social learning in networks in which agents have heterogeneous preferences, and neighbors tend to have similar preferences—a phenomenon known as homophily. We find that the density of network connections determines the impact of preference diversity and homophily on learning. When connections are sparse, diverse preferences are harmful to learning, and homophily may lead to substantial improvements. In contrast, in a dense network, preference diversity is beneficial. Intuitively, diverse ties introduce more independence between observations while providing less information individually. Homophilous connections individually carry more useful information, but multiple observations become redundant.
Sensory ataxia and muscle spindle agenesis in mice lacking the transcription factor Egr3
Muscle spindles are skeletal muscle sensory organs that provide axial and limb position information (proprioception) to the central nervous system. Spindles consist of encapsulated muscle fibers (intrafusal fibers) that are innervated by specialized motor and sensory axons. Although the molecular mechanisms involved in spindle ontogeny are poorly understood, the innervation of a subset of developing myotubes (type I) by peripheral sensory afferents (group Ia) is a critical event for inducing intrafusal fiber differentiation and subsequent spindle formation. The Egr family of zinc-finger transcription factors, whose members include Egr1 (NGFI-A), Egr2 (Krox-20), Egr3 and Egr4 (NGFI-C), are thought to regulate critical genetic programs involved in cellular growth and differentiation (refs 4, 5, 6, 7, 8 and W.G.T. et al., manuscript submitted). Mice deficient in Egr3 were generated by gene targeting and had gait ataxia, increased frequency of perinatal mortality, scoliosis, resting tremors and ptosis. Although extrafusal skeletal muscle fibers appeared normal, Egr3-deficient animals lacked muscle spindles, a finding that is consistent with their profound gait ataxia. Egr3 was highly expressed in developing muscle spindles, but not in Ia afferent neurons or their terminals during developmental periods that coincided with the induction of spindle morphogenesis by sensory afferent axons. These results indicate that type I myotubes are dependent upon Egr3-mediated transcription for proper spindle development.
Word Length Analysis of Jane Austen's Letters
This paper deals with word length in twenty of Jane Austen's letters and is part of a research project performed in Göttingen. Word length in English has so far only been studied in contemporary texts (Hasse & Weinbrenner, 1995; Riedemann, 1994) and in the English dictionary (Rothschild, 1986). It has been ascertained that word length in texts abides by a law having the form of the mixed Poisson distribution, an assumption which in a language like English can easily be justified. However, in special texts other regularities can arise. Individual or genre-like factors can induce a systematic deviation in one or more frequency classes; we say that the phenomenon is on the way to another attractor. The first remedy in such cases is a local modification of the given frequency classes; the last remedy is the search for another model.
THE DATA. Letters were examined because it can be assumed that they are written down without interruption, so that revised versions or the conscious use of stylistic means are the exception. The assumed natural rhythm governing word length in writing is thus believed to have remained largely uninfluenced and constant. The length of the selected letters is between 126 and 494 words. They date from 1796 to 1817 and are partly businesslike and partly private. The letters to Jane Austen's sister Cassandra, above all, are written in an 'informal' style. In general, however, the letters are on a high stylistic level, which is not only characteristic of the use of language at that time but also a main feature of Jane Austen's personal style. Thus contractions such as don't, can't, wouldn't etc. do not occur. Only the running text, without address, date, or place, has been considered.
ANALYSING THE DATA. General Criteria. Length is determined by the number of syllables in each word, where "word" is defined as an orthographic unit. The number of syllables in a word depends on the number of vowels or diphthongs. Diphthongs and triphthongs can also be differentiated; both of these count as one syllable. This paper only deals with diphthongs. The number of syllables of an abbreviation is counted according to its fully spoken form. Thus addresses and titles such as 'Mrs', 'Mr', 'Md' and 'Capt' consist of two syllables, and 'Lieut' consists of three. The same holds for figures and for the abbreviations of months. 'MS' is the common short form of 'Manuscript'; 'comps' (compliments), 'G.Mama' (Grandmama), 'morn' (morning), 'c' (could), 'w' (would) and 'rec' (received) seem to be the writer's idiosyncratic abbreviations. In all cases length is determined by the spoken form. The analysis is based on the 'received pronunciation' of British English.
Findings. As ascertained with the software tool 'AltmannFitter' (1994), the best model was found to be the positive Singh-Poisson distribution (the inflated zero-truncated Poisson distribution), which has the following formula:
P_1 = 1 − α + α a e^(−a) / (1 − e^(−a)),
P_x = α a^x e^(−a) / (x! (1 − e^(−a))),  x = 2, 3, ...
Distributions modified in this way indicate that the author tends to leave the basic model (in the case of English, the Poisson distribution) by a local modification of the shortest class (here x = 1). For example, in Table 3 (Letter 16, Austen, 1798, to Cassandra Austen) the observed frequencies f_x = 188, 57, 15, 4, 1 for x = 1, ..., 5 are matched by the expected values NP_x = 187.79, 56.53, 16.38, 3.56, 0.74.
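The fit reported in Table 3 can be checked numerically. The sketch below implements the positive Singh-Poisson distribution named above; the parameter values a ≈ 0.869 and α ≈ 0.782 are back-derived here from the expected counts in the table (they are not reported explicitly in the text) and are therefore approximations.

```python
from math import exp, factorial

def positive_singh_poisson(x: int, a: float, alpha: float) -> float:
    """P(X = x) for the positive (zero-truncated) Singh-Poisson distribution.

    P_1 = 1 - alpha + alpha * a * e^-a / (1 - e^-a)
    P_x = alpha * a^x * e^-a / (x! * (1 - e^-a)),  x = 2, 3, ...
    """
    if x < 1:
        raise ValueError("support starts at x = 1")
    truncated = (a ** x) * exp(-a) / (factorial(x) * (1.0 - exp(-a)))
    if x == 1:
        return 1.0 - alpha + alpha * truncated
    return alpha * truncated

# Parameters back-derived from Table 3 (approximate); N = 265 observed words.
a, alpha, n = 0.869, 0.782, 265
expected = [n * positive_singh_poisson(x, a, alpha) for x in range(1, 6)]
```

With these values the expected counts for x = 1 and x = 2 come out close to the tabulated 187.79 and 56.53, and the probabilities sum to one, as the inflation of the x = 1 class only redistributes mass within the truncated distribution.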
New horizons for the methodology and physiology of training periodization.
The theory of training was established about five decades ago, when knowledge of athletes' preparation was far from complete and the biological background was based on a relatively small number of objective research findings. At that time, traditional 'training periodization', a division of the entire seasonal programme into smaller periods and training units, was proposed and elucidated. Since then, international sport and sport science have experienced tremendous changes, while traditional training periodization has remained at more or less the same level as in the initial publications. As one of the most practically oriented components of the theory, training periodization is intended to offer coaches basic guidelines for structuring and planning training. However, during recent decades contradictions between the traditional model of periodization and the demands of high-performance sport practice have inevitably developed. The main limitations of traditional periodization stemmed from: (i) conflicting physiological responses produced by 'mixed' training directed at many athletic abilities; (ii) excessive fatigue elicited by prolonged periods of multi-targeted training; (iii) insufficient training stimulation induced by workloads of medium and low concentration typical of 'mixed' training; and (iv) the inability to provide multi-peak performances over the season. Attempts to overcome these limitations led to the development of alternative periodization concepts. The recently developed block periodization model offers an alternative, revamped approach for planning the training of high-performance athletes. Its general idea proposes the sequencing of specialized training cycles, i.e. blocks, which contain highly concentrated workloads directed at a minimal number of targeted abilities.
Unlike the traditional model, in which the simultaneous development of many athletic abilities predominates, block-periodized training presupposes the consecutive development of reasonably selected target abilities. The content of block-periodized training is set down in its general principles, a taxonomy of mesocycle blocks, and guidelines for compiling an annual plan.
TwitterReporter: Breaking News Detection and Visualization through the Geo-Tagged Twitter Network
The Twitter social network provides a constant stream of concise data, useful within both geospatial and temporal domains. Previous studies have attempted to mine and find recent news topics within the data set. Although the efforts were a step in the right direction, we believe more is needed to accomplish a useful and interesting task: using live Twitter data to automatically identify breaking news events in near real-time. In this paper, we present methods to collect data, identify breaking news topics, and display results in a geo-temporal visualization. Our goal is to provide a solution that discovers breaking news faster than traditional reporting mediums.
A Dynamic Pricing Model for Unifying Programmatic Guarantee and Real-Time Bidding in Display Advertising
There are two major ways of selling impressions in display advertising: they are either sold in spot through auction mechanisms or in advance via guaranteed contracts. The former has achieved significant automation via real-time bidding (RTB); the latter, however, is still mainly done over the counter through direct sales. This paper proposes a mathematical model that allocates and prices future impressions between real-time auctions and guaranteed contracts. Under conventional economic assumptions, our model shows that the two channels can be seamlessly combined programmatically and that the publisher's revenue can be maximized via price discrimination and optimal allocation. We assume advertisers are risk-averse and are willing to purchase guaranteed impressions if the total costs are less than their private values. We also assume that an advertiser's purchase behavior can be affected by both the guaranteed price and the time interval between the purchase time and the impression delivery date. Our solution suggests an optimal percentage of future impressions to sell in advance and provides an explicit formula for the prices at which to sell them. We find that the optimal guaranteed prices are dynamic and non-decreasing over time. We evaluate our method on RTB datasets and find that the model adopts different allocation and pricing strategies according to the level of competition. The experiments show that, in a less competitive market, lower guaranteed-contract prices encourage purchase in advance, and the revenue gain is mainly contributed by the increased competition in future RTB; in a highly competitive market, advertisers are more willing to purchase guaranteed contracts and thus higher prices are expected, with the revenue gain largely contributed by the guaranteed selling.
Preparing to work in the virtual organization
Forming virtual organizations (VOs) is a new workplace strategy, and information, technology, and knowledge workers need to be prepared to function well in such inter-organizational teams. University information studies programs can simulate VOs in courses and teach the skill sets needed in VO work: critical thinking, analytical methods, ethical problem solving, stakeholder analysis, and policy writing are among the needed skills and abilities. Simulated virtual teams allow participants to learn to trust team members and to understand how communication and product development can work effectively in a virtual workspace. It is hoped that some of these methods could also be employed in corporate training programs. In an innovative course, inter-university VOs were created to develop information products. Groups at four geographically dispersed universities cooperated in the project; at its conclusion, students answered a self-administered survey about their experience. Each team's success or difficulties were apparently closely related to issues of trust in the team process. Access to, and ease of, communication tools also played a role in the participants' perceptions of the learning experience and teamwork.
Simulation of merge junctions in a dynamically entrained automated guideway transit system
Merge junctions and intersections are the principal capacity limiters and sources of delay in automated guideway transit (AGT) networks. The capacity and delay performance of the merge junctions must be thoroughly understood before an AGT network can be designed. This paper describes the modeling and event-structured Monte Carlo simulation of a single merge junction, having two input lanes and one output lane, operating in a quasi-synchronous network. The simulation described here differs from previous merging models by permitting slots to be of variable length to accommodate trains of different lengths, and allowing unlimited maneuvering of trains (if needed). It has the unique ability to represent the operation of a special, newly devised concatenating merge, which permits trains arriving on separate input lanes to combine themselves into a longer train at the merge. The simulation logic which is used to represent special features of merge performance (flow compressibility, upstream propagation of disturbances, etc.) is explained, and the implementation using GASP IIA is described. The statistical considerations underlying the experimental design are discussed and some sample diagnostic outputs displayed.
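A drastically simplified, slot-based sketch of such a merge simulation is given below. Unlike the model described in the paper, it ignores variable-length trains, vehicle maneuvering and the concatenating merge, and serves only to illustrate the basic event structure: independent arrivals on two input lanes, at most one release per slot on the output lane, and queueing delay as the performance measure. All parameter names are assumptions for this sketch.

```python
import random
from collections import deque

def simulate_merge(p_a: float, p_b: float, slots: int, seed: int = 1):
    """Toy slot-based Monte Carlo model of a two-input, one-output merge.

    Each time slot, a vehicle arrives on lane A with probability p_a and on
    lane B with probability p_b; the merge releases at most one vehicle per
    slot (longest-waiting first). Returns (throughput, mean delay in slots).
    """
    rng = random.Random(seed)
    queue = deque()              # arrival slot of each waiting vehicle
    served, total_delay = 0, 0
    for t in range(slots):
        if rng.random() < p_a:   # arrival event on lane A
            queue.append(t)
        if rng.random() < p_b:   # arrival event on lane B
            queue.append(t)
        if queue:                # release event on the output lane
            total_delay += t - queue.popleft()
            served += 1
    return served, (total_delay / served if served else 0.0)
```

Even this toy version exhibits the capacity-limiting behavior the paper studies: once p_a + p_b approaches one vehicle per slot, the queue, and hence the mean delay, grows sharply.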