Pruning in Logic Programming
The logic programming community has a love-hate relationship with operators for pruning the search space of logic programs such as cut, commit, once, conditionals and variations on these. Pruning operators typically are not declarative, result in incompleteness and/or unsoundness, decrease the readability and flexibility of code, and make program analysis and transformation more difficult. Despite this, nearly all non-trivial Prolog programs contain cuts, nearly all more recent logic programming languages have similar pruning operators and many languages insist on pruning operators in every clause. In practice, logic programming is less logical than functional programming. Why is this so? Do we really need pruning operators? Can we have sufficiently powerful pruning operators which do not destroy the declarative semantics of programs? How are pruning operators related to logic, modes, functions and lazy evaluation? This paper attempts to answer some of these questions.
Collaborative writing: the effects of metacognitive prompting and structured peer interaction.
BACKGROUND The structured system for peer assisted learning in writing named Paired Writing (Topping, 1995) incorporates both metacognitive prompting and scaffolding for the interactive process. AIM This study sought to evaluate the relative contribution of these two components to student gain in quality of writing and attitudes to writing, while controlling for amount of writing practice and teacher effects. SAMPLE Participants were 28 ten- and eleven-year-old students forming a problematic mixed ability class. METHODS All received training in Paired Writing and its inherent metacognitive prompting. Students matched by gender and pre-test writing scores were assigned randomly to Interaction or No Interaction conditions. In the Interaction condition, the more able writers became 'tutors' for the less able. In the No Interaction condition, the more able writers acted as controls for the tutors and the less able as controls for the tutees. Over six weeks, the paired writers produced five pieces of personal writing collaboratively, while children in the No Interaction condition did so alone. RESULTS On pre- and post-project analyses of the quality of individual writing, all groups showed statistically significant improvements in writing. However, the pre-post gains of the children who wrote interactively were significantly greater than those of the lone writers. There was some evidence that the paired writers also had more positive self-esteem as writers. CONCLUSION The operation and durability of the Paired Writing system are discussed.
Recommender systems in mobile apps for health: a systematic review
This paper aims to identify and analyze recommender systems developed for the health area that are available in mobile applications. To this end, the literature was reviewed systematically using the ACM, IEEE, Springer and Science Direct databases. 1006 studies were found, eight of which met the eligibility criteria. Of the chosen studies, only one was not applied to the areas of nutrition and physical activity. The recommender techniques adopted were Collaborative Filtering and Content-based Filtering. The main mobile device identified was the smartphone and the operating system was Android. After analyzing the results, it became evident that, although recommendation systems are widely used in e-commerce, they do not have many mobile healthcare applications, and the existing ones are recent.
Exploiting Argument Information to Improve Event Detection via Supervised Attention Mechanisms
This paper tackles the task of event detection (ED), which involves identifying and categorizing events. We argue that arguments provide significant clues to this task, but they are either completely ignored or exploited only in an indirect manner in existing detection approaches. In this work, we propose to exploit argument information explicitly for ED via supervised attention mechanisms. Specifically, we systematically investigate the proposed model under the supervision of different attention strategies. Experimental results show that our approach advances the state of the art and achieves the best F1 score on the ACE 2005 dataset.
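The abstract does not spell out the model, but the core idea of supervising attention can be illustrated with a toy example. The sketch below is an assumption-laden illustration (the dimensions, the gold attention built from assumed argument positions, the KL-based attention loss and the plain gradient-descent loop are all invented for clarity) and is not the paper's architecture:

```python
# Toy "supervised attention": attention weights over context tokens are trained
# both to help the event-type classifier and to match a gold attention vector
# that marks (hypothetical) argument-word positions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len, hidden, n_types = 10, 16, 5
H = torch.randn(seq_len, hidden)                 # contextual token representations
gold_att = torch.zeros(seq_len)
gold_att[[2, 7]] = 0.5                           # assumed argument positions
gold_type = torch.tensor(3)                      # assumed gold event type

q = torch.randn(hidden, requires_grad=True)      # attention query (e.g., trigger repr.)
W = torch.randn(hidden, n_types, requires_grad=True)

for step in range(100):
    att = F.softmax(H @ q, dim=0)                # attention over tokens
    ctx = att @ H                                # attended context vector
    logits = ctx @ W
    loss = F.cross_entropy(logits.unsqueeze(0), gold_type.unsqueeze(0)) \
        + F.kl_div(att.log(), gold_att, reduction="sum")   # supervise the attention itself
    loss.backward()
    with torch.no_grad():
        for p in (q, W):
            p -= 0.1 * p.grad
            p.grad.zero_()
```

The point is simply that the attention weights receive their own supervision signal, derived from argument information, in addition to the downstream classification loss.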
Dropout: MOOC participants' perspective
Massive Open Online Courses (MOOCs) open up learning opportunities to a large number of people. A small percentage (around 10%) of the large numbers of participants enrolling in MOOCs manage to finish the course by completing all parts. The term 'dropout' is commonly used to refer to 'all who failed to complete' a course, and is used in relation to MOOCs. Due to the nature of MOOCs, with students not paying enrolment and tuition fees, there is no direct financial cost incurred by a student. Therefore it is debatable whether the traditional definition of dropout in higher education can be directly applied to MOOCs. This paper reports ongoing exploratory work on MOOC participants' perspectives based on six qualitative interviews. The findings show that MOOC participants are challenging the widely held view of dropout, suggesting that it is more about failing to achieve their personal aims. parisons (Balch, 2013). Considering the number of students in UK higher education who leave after one year of study (full time 7.4%; part time 35.1%; open universities 44.7%), Tait (Commonwealth of Learning, 2013) suggests that it could be qualification-related. For example, 45% of Open University students in the UK have one A Level qualification or less, and the open universities admit mature students, students with lower qualifications, and students from rural areas. Therefore he argues that dropouts "represent risks and challenges of openness and inclusion". There is a debate whether dropout rates and progression should be causes of concern in MOOCs (Gee, 2012; Yuan & Powell, 2013). In a traditional university, when a student fails to complete a course that they have enrolled in, paying high fees, it is bad for all parties involved: the student (possibly even affecting their families), the lecturers and the university. For example, the Higher Education Funding Council for England keeps a close eye on the number of full-time PhD students completing within the allowed 4 years as a benchmark (HEFCE, 2013), and a student failing to do so may reflect adversely on the university's research profile. Yuan and Powell (2013) argue that whether these rates matter depends on the perceived purpose. They go on to say that if the main aim of offering a MOOC is to provide the opportunity to learn from high-quality courses (offered by world-class universities and world experts in their subjects) without incurring a charge, these rates should not be of primary concern. MOOCs inevitably attract many more enrolments than a fee-paying course would because it is easy and free to register on a MOOC; sometimes it may be all too easy, and a student may register for a course by accident; there may not be an un-enrol button (author's personal experience). Some participants who enrol on a MOOC may never return. Defining dropout: Tinto (1975) argues that inadequate attention given to defining dropout in higher education has led researchers "to lump together, under the rubric of dropout, forms of leaving behaviour that are very different in character" (p. 89). He claims that research on dropout has failed to distinguish between various forms, for example dropout resulting from academic failure and voluntary withdrawal. This often seems to be the case with MOOCs; it is not clear what dropout means apart from 'all who failed to complete'.
MOOC participants could have joined the course to follow a specific topic, and completion of this may have triggered them to voluntarily withdraw from the course. Categorising these participants as dropouts in MOOCs may give rise to misleading implications. There is also a concern whether the traditional definition of dropout could be directly applied to MOOCs (Liyanagunawardena, 2013). For example, paying enrolment and tuition fees in a traditional course makes a student commit themselves to participating in the programme. In a MOOC, on the other hand, because both registration and enrolment are free, there is no binding commitment from a student. A definition used in distance education and/or eLearning could be a better candidate for defining dropout in a MOOC. In the context of eLearning, Levy (2007) defines "dropout students (or non-completers) as students that voluntarily withdraw from e-learning courses while acquiring financial penalties" (p. 188) for his study. However, application of this definition to MOOCs is hindered by the use of financial penalties in the definition, because MOOCs generally do not require an upfront payment from registrants. Unlike most traditional courses and/or eLearning courses that freeze registration at the start of the course, MOOCs generally allow registration while the course is being offered (1). Effectively, then, a learner can join a MOOC in its final week, which would still count as a registration, but this may not provide sufficient time for completion. There is also the possibility that some learners may enrol on a course to follow only a specific topic of their interest. Some participants may enrol to 'audit' MOOCs (Chung, 2013) while others may be 'lurkers', 'drop-ins', active or passive participants (Hill, 2013). Koller et al. (2013) show that "the ease of non-completion in MOOCs can be viewed as an opportunity for risk-free exploration"; a similar analogy would be a free taster or test drive. This makes it difficult to measure the dropout rate in a MOOC by only considering the enrolled number and 'completed' number. Furthermore, Koller et al. (2013) show that in general a typical Coursera MOOC (in 2012) attracted 40,000 to 60,000 enrolments but only 50-60% of these students actually returned for the first lecture. Out of these huge enrolment numbers only about 5% of students earned an official statement of accomplishment. In contrast, of the students who registered for the 'Signature Track' scheme, paying US$30-100, with the intention of obtaining an identity-verified and university-branded certification, the completion rates are much higher. This seems to suggest that learners' intention for the course, for example whether to use it as a taster class, drop in and drop out for interesting topics, or earn a verified certification, has had a profound effect on their 'engagement' in the course (2). Due to the nature of MOOCs discussed above, it is reasonable to question whether defining 'completion', 'dropout' and 'success' in a similar way to their equivalents in traditional measurement, or indeed their eLearning counterparts, is acceptable or appropriate. In fact, Koller et al. (2013) argue that "retention in MOOCs should be evaluated within the context of learner intent" (p. 62). However, the word 'dropout' seems to be used very loosely when referring to MOOCs. In the realm of MOOCs, theorising about dropout processes can only be possible once a proper definition for the term is identified and accepted among scholars.
The researchers believe that in identifying the meaning of dropout in the context of a MOOC, it is important to understand the participants' perspective because of the voluntary nature of participation. However, there has been no research to date exploring MOOC participants' views on what success, completion and dropout mean to them in the context of a MOOC. This paper presents an overview of an ongoing research project exploring MOOC participants' perspectives on the issue of dropout. The research team hopes to develop this exploratory view to understand the true nature of MOOC dropout. Research Methodology: This qualitative research project is investigating MOOC participants' perspectives using an ethnographic approach, where the researchers themselves are MOOC participants and are exploring other MOOC participants' perspectives on 'dropout', 'completion' and 'success'. Semi-structured interviews are used as the data collection instruments in this research. Structured interviews pose a pre-established set of questions in a sequence allowing little or no variation, expecting the interviewer to be neutral. In contrast, semi-structured interviews, which are guided by a set of questions but nevertheless place much interest on the participants' views and where the overall direction of the interviews is influenced by the interviewees' responses, were favoured in this research because of the constructivist standpoint of the researchers. Each face-to-face interview (30-35 minutes) was audio recorded with permission and later transcribed in full. The interview transcription was shared with the participant via email where clarifications were required. This respondent verification is hoped to have increased the quality of the data used in the analysis. This paper presents some initial findings of ongoing research and focuses on participants' perspectives of 'dropout' in a MOOC. Population: The population for the research is MOOC participants, who have registered and/or participated in one or more MOOCs.
Familial amyotrophic lateral sclerosis is associated with a mutation in D-amino acid oxidase.
We report a unique mutation in the D-amino acid oxidase gene (R199W DAO) associated with classical adult-onset familial amyotrophic lateral sclerosis (FALS) in a three-generational FALS kindred, identified after candidate gene screening in a 14.52 cM region on chromosome 12q22-23 linked to disease. Neuronal cell lines expressing R199W DAO showed decreased viability and increased ubiquitinated aggregates compared with cells expressing the wild-type protein. Similarly, lentiviral-mediated expression of R199W DAO in primary motor neuron cultures caused increased TUNEL labeling. This effect was also observed when motor neurons were cocultured on transduced astrocytes expressing R199W, indicating that the motor neuron cell death induced by this mutation is mediated by both cell-autonomous and non-cell-autonomous processes. DAO controls the level of D-serine, which accumulates in the spinal cord in cases of sporadic ALS and in a mouse model of ALS, indicating that this abnormality may represent a fundamental component of ALS pathogenesis.
Paradoxical hypertrichosis and terminal hair change after intense pulsed light hair removal therapy.
BACKGROUND Although complications such as blister formation, erosion, and post-inflammatory hypo- and hyperpigmentation are well-known side effects of intense pulsed light (IPL) photoepilation, little is known about the paradoxical hypertrichosis after therapy. OBJECTIVE To report the paradoxically increased hair density and coarseness after IPL photoepilation. METHODS Within a period of 23 months, a total of 991 hirsute female patients were treated with IPL for photoepilation. The IPL system used was the Vasculight-SR, a multifunctional laser and IPL system (Lumenis Inc., Santa Clara, CA, USA). The cut-off filters frequently used were 695, 755 and 645 nm. RESULTS Paradoxical hypertrichosis and terminal hair change were detected after a few sessions of IPL therapy among 51 out of 991 patients. Our serial digital photographs, schematic diagrams, and hair counts before and after treatment confirmed the patients' claims. The other more commonly seen complications were epidermal burning with blisters, erosion, and crust formation followed by post-inflammatory hypo- and/or hyperpigmentation. CONCLUSION Paradoxical hypertrichosis and terminal hair change is a common complication of IPL photoepilation.
Patient-specific computed tomography based instrumentation in total knee arthroplasty: a prospective randomized controlled study
The aim of this study was to compare the radiological results of total knee arthroplasties (TKAs) performed with patient-specific computed tomography (CT)-based instrumentation and with the conventional technique. The main study hypothesis was that CT-based patient-specific instrumentation (PSI) increases the accuracy of TKA. A prospective, randomized controlled trial was carried out between January and December 2011. A group of 112 patients who met the inclusion and exclusion criteria were enrolled in this study and randomly assigned to an experimental or control group. The experimental group comprised 52 patients operated on with the aid of the Signature™ CT-based implant positioning system. The control group consisted of 60 patients operated on using conventional instrumentation. The radiographic evaluation of implant positioning and overall coronal alignment was performed 12 months after surgery using standing anteroposterior radiographs of the entire lower limb and standard lateral radiographs. Of the 112 patients initially enrolled in the study, 95 were included in the subsequent analyses. There were no statistically significant differences between groups with respect to coronal and sagittal component positioning and overall coronal alignment, except for frontal tibial component positioning. For this parameter, better results were obtained in the control group, with borderline statistical significance. Our study did not reveal superiority of the CT-based PSI system over conventional instrumentation. Further high-quality investigations of patient-specific systems are indispensable to assess their utility for TKA. In our opinion, the surgeon applying PSI technology is required to have advanced knowledge and considerable experience with the conventional method.
Drift and Cost Comparison of Different Structural Systems for Tall Buildings
The race towards new heights and architecture has not been without challenges. Tall structures have continued to climb higher and higher, facing unusual loading effects and very high loading values due to dominating lateral loads. The design criteria for tall buildings are strength, serviceability, stability and human comfort. But the factors that govern the design of tall and slender buildings most of the time are serviceability and human comfort under lateral loads. As a result, lateral stiffness is a major consideration in the design of tall buildings. The first parameter used to estimate the lateral stiffness of a tall building is the drift index. Different lateral load resisting structural subsystems can be used to impart stiffness and reduce drift in the building. Lateral load resisting subsystems can take many forms depending upon the orientation, integration and addition of the various structural components. In this research, sixteen different lateral load resisting structural subsystems are used to design a tall building, and the most economical structural system is selected from among these. For this purpose, a 105-storey, square, prismatic steel building, uniform through its height, is selected, analyzed and designed for gravity and wind loads. Analysis and design of the selected lateral load resisting structural subsystems reveal that, for the building configuration selected, the structural system containing the composite super columns with portals subsystem is the most efficient.
A decentralized algorithm for spectral analysis
In many large network settings, such as computer networks, social networks, or hyperlinked text documents, much information can be obtained from the network's spectral properties. However, traditional centralized approaches for computing eigenvectors struggle with at least two obstacles: the data may be difficult to obtain (both due to technical reasons and because of privacy concerns), and the sheer size of the networks makes the computation expensive. A decentralized, distributed algorithm addresses both of these obstacles: it utilizes the computational power of all nodes in the network and their ability to communicate, thus speeding up the computation with the network size. And as each node knows its incident edges, the data collection problem is avoided as well. Our main result is a simple decentralized algorithm for computing the top k eigenvectors of a symmetric weighted adjacency matrix, and a proof that it converges essentially in O(τ_mix log^2 n) rounds of communication and computation, where τ_mix is the mixing time of a random walk on the network. An additional contribution of our work is a decentralized way of actually detecting convergence, and diagnosing the current error. Our protocol scales well, in that the amount of computation performed at any node in any one round, and the sizes of messages sent, depend polynomially on k, but not on the (typically much larger) number n of nodes.
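As a rough illustration of what such a protocol computes, here is a centralized simulation of decentralized orthogonal iteration: each node only ever multiplies its own adjacency row into the current estimates, and the estimates are periodically re-orthonormalized. In the actual protocol the orthonormalization and convergence detection are themselves decentralized; this sketch keeps them global for brevity, and all names and parameters are illustrative:

```python
import numpy as np

def simulated_decentralized_top_k(A, k, rounds=200, seed=0):
    """Approximate the top-k eigenvectors of a symmetric weighted adjacency matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V = rng.standard_normal((n, k))          # node i holds row V[i]
    for _ in range(rounds):
        # Local step: node i combines its neighbours' estimates, weighted by its
        # incident edges (one "round" of communication and computation).
        V = A @ V
        # Global re-orthonormalization here; in the decentralized protocol this
        # step is carried out with gossip-style aggregation rather than a central QR.
        V, _ = np.linalg.qr(V)
    return V

if __name__ == "__main__":
    # Tiny symmetric example graph.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    print(np.round(simulated_decentralized_top_k(A, k=2), 3))
```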
The value of personal health records for chronic disease management: what do we know?
BACKGROUND AND OBJECTIVES Electronic personal health records (PHRs) allow patients access to their medical records, self-management tools, and new avenues of communication with their health care providers. They will likely become a valuable component of the primary care Patient-centered Medical Home model. Primary care physicians, who manage the majority of chronic disease, will use PHRs to help patients manage their diabetes and other chronic diseases requiring continuity of care and enhanced information flow between patient and physician. In this brief report, we explore the evidence for the value of PHRs in chronic disease management. METHODS We used a comprehensive review of MEDLINE articles published in English between January 2000 and September 2010 on personal health records and related search terms. RESULTS Few published articles have described PHR programs designed for use in chronic disease management or PHR adoption and attitudes in the context of chronic disease management. Only three prospective randomized trials have evaluated the benefit of PHR use in chronic disease management, all in diabetes care. These trials showed small improvements in some but not all diabetes care measures. All three trials involved additional interventions, making it difficult to determine the influence of patient PHR use in improved outcomes. CONCLUSIONS The evidence remains sparse to support the value of PHR use for chronic disease management. With the current policy focus on meaningful use of electronic and personal health records, it is crucial to investigate and learn from new PHR products so as to maximize the clinical value of this tool.
INDUSTRIAL APPLICATIONS AND FUTURE PROSPECTS OF MICROBIAL XYLANASES: A REVIEW
Microbial enzymes such as xylanases enable new technologies for industrial processes. Xylanases (xylanolytic enzymes) hydrolyze complex polysaccharides like xylan. Research during the past few decades has been dedicated to enhanced production, purification, and characterization of microbial xylanases. For commercial applications, however, detailed knowledge of the regulatory mechanisms governing enzyme production and functioning is required. Since the application of xylanase in the commercial sector is widening, an understanding of its nature and properties for efficient and effective usage becomes crucial. Study of the synergistic action of multiple forms and of the mechanism of action of xylanase makes it possible to use it for bio-bleaching of kraft pulp and for desizing and bio-scouring of fabrics. Results revealed that enzymatic treatment leads to the enhancement of various physical properties of fabric and paper. This review will be helpful in determining the factors affecting xylanase production and its potential industrial applications in the textile, paper, pulp, and other industries.
A Fuzzy-based approach to programming language independent source-code plagiarism detection
Source-code plagiarism detection concerns the identification of source-code files that contain similar and/or identical source-code fragments. Fuzzy clustering approaches are a suitable solution to detecting source-code plagiarism due to their capability to capture the qualitative and semantic elements of similarity. This paper proposes a novel fuzzy-based approach to source-code plagiarism detection, based on Fuzzy C-Means and the Adaptive Neuro-Fuzzy Inference System (ANFIS). In addition, the performance of the proposed approach is compared to the Self-Organising Map (SOM) and the state-of-the-art Running Karp-Rabin Greedy-String-Tiling (RKR-GST) plagiarism detection algorithms. The advantage of the proposed approach is that it is programming language independent, and hence there is no need to develop any parsers or compilers for the fuzzy-based predictor to provide detection in different programming languages. The results demonstrate that the proposed fuzzy-based approach outperforms all other approaches on well-known source code datasets, and reveals promising results as an efficient and reliable approach to source-code plagiarism detection.
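To make the clustering side concrete, the following is a minimal sketch of fuzzy C-means over simple language-independent features (character 3-gram counts). The feature choice, file contents and parameter values are assumptions for illustration; the paper's actual feature extraction and its ANFIS stage are not reproduced here:

```python
import numpy as np
from collections import Counter

def ngram_vector(text, vocab, n=3):
    """Count character n-grams of `text` over a fixed vocabulary."""
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return np.array([counts[g] for g in vocab], dtype=float)

def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-6, seed=0):
    """Return fuzzy memberships U (rows sum to 1) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))   # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            return U_new, centers
        U = U_new
    return U, centers

if __name__ == "__main__":
    files = {                                    # hypothetical tiny "source files"
        "a.py": "for i in range(10): print(i)",
        "b.py": "for j in range(10): print(j)",
        "c.py": "class Node: pass",
    }
    texts = list(files.values())
    vocab = sorted({t[i:i + 3] for t in texts for i in range(len(t) - 2)})
    X = np.array([ngram_vector(t, vocab) for t in texts])
    U, _ = fuzzy_c_means(X, c=2)
    print(dict(zip(files, np.round(U, 2))))      # a.py and b.py should share a cluster
```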
Three traditions of network research: What the public management research agenda can learn from other research communities
This article identifies and describes the development of three parallel streams of literature about network theory and research: social network analysis, policy change and political science networks, and public management networks. Noting that these traditions have sometimes been inattentive to each other’s work, the authors illustrate the similarities and differences in the underlying theoretical assumptions, types of research questions addressed, and research methods typically employed by the three traditions. The authors draw especially on the social network analysis (sociological) tradition to provide theoretical and research insights for those who focus primarily on public management networks. The article concludes with recommendations for advancing current scholarship on public management networks.
Effects of stabilization exercise using a ball on multifidus cross-sectional area in patients with chronic low back pain.
The purpose of this study was to compare the effects of lumbar stabilization exercises using balls to the effects of general lumbar stabilization exercises with respect to changes in the cross-sectional area of the multifidus (MF), weight bearing, pain, and functional disorders in patients with non-specific chronic low back pain. Twelve patients participated in an 8-week (3 days per week) stabilization exercise program using balls, and a control group (n = 12) performed general stabilization exercises. Computed tomography (CT) was used to analyze MF cross-sectional areas (CSA), and the Tetrax balancing scale was used to analyze left-right weight bearing differences. Both groups had significant changes in the CSA of the MF by segment after training (p < 0.05), and the experimental group showed greater increases at L4 (F = 9.854, p = 0.005) and L5 (F = 39.266, p = 0.000). Both groups showed significant decreases in the left-right weight bearing difference, from 9.25% to 5.83% in the experimental group and from 9.33% to 4.25% in the control group (p < 0.05), but the two groups did not differ significantly. These results suggest that stabilization exercise using a ball can increase the CSA of the MF segments, improve weight bearing, relieve pain, and aid recovery from functional disorders, with particular increases in the CSA of the MF at the L4 and L5 segments in patients with low back pain. Key points: Both the stabilization exercise using a ball and the general stabilization exercise improved the CSA of the MF, weight bearing, pain, and functional ability in patients with low back pain. Increases in the CSA of the MF at the L4 and L5 segments and in functional ability were verified for the stabilization exercise using a ball. The stabilization exercise using a ball was shown to be an effective exercise method for patients with low back pain in a rehabilitation program, increasing functional ability and the CSA of the MF.
AoA and ToA Accuracy for Antenna Arrays in Dense Multipath Channels
The accuracy that can be achieved in time of arrival (ToA) estimation strongly depends on the utilized signal bandwidth. In an indoor environment, multipath propagation usually causes a degradation of the achievable accuracy due to the overlapping signals. A similar effect can be observed for angle of arrival (AoA) estimation using antenna arrays. This paper derives a closed-form equation for the Cramér-Rao lower bound (CRLB) of the achievable AoA and ToA error variances, considering the presence of dense multipath. The Fisher information expressions for both parameters allow an evaluation of the influence of channel parameters and system parameters such as the array geometry. Our results demonstrate that the AoA estimation accuracy is strongly related to the signal bandwidth, due to the multipath influence. The theoretical results are evaluated with experimental data.
Floral patterning defects induced by Arabidopsis APETALA2 and microRNA172 expression in Nicotiana benthamiana
Floral patterning and morphogenesis are controlled by many transcription factors including floral homeotic proteins, by which floral organ identity is determined. Recent studies have uncovered widespread regulation of transcription factors by microRNAs (miRNAs), ~21-nucleotide non-coding RNAs that regulate protein-coding RNAs through transcript cleavage and/or translational inhibition. The regulation of the floral homeotic gene APETALA2 (AP2) by miR172 is crucial for normal Arabidopsis flower development and is likely to be conserved across plant species. Here we probe the activity of the AP2/miR172 regulatory circuit in a heterologous Solanaceae species, Nicotiana benthamiana. We generated transgenic N. benthamiana lines expressing Arabidopsis wild type AP2 (35S::AP2), miR172-resistant AP2 mutant (35S::AP2m3) and MIR172a-1 (35S::MIR172) under the control of the cauliflower mosaic virus 35S promoter. 35S::AP2m3 plants accumulated high levels of AP2 mRNA and protein and exhibited floral patterning defects that included proliferation of numerous petals, stamens and carpels indicating loss of floral determinacy. On the other hand, nearly all 35S::AP2 plants accumulated barely detectable levels of AP2 mRNA or protein and were essentially non-phenotypic. Overall, the data indicated that expression of the wild type Arabidopsis AP2 transgene was repressed at the mRNA level by an endogenous N. benthamiana miR172 homologue that could be detected using Arabidopsis miR172 probe. Interestingly, 35S::MIR172 plants had sepal-to-petal transformations and/or more sepals and petals, suggesting interference with N. benthamiana normal floral homeotic gene function in perianth organs. Our studies uncover the potential utility of the Arabidopsis AP2/miR172 system as a tool for manipulation of floral architecture and flowering time in non-model plants.
A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging MCTest benchmark. Partly because of its limited size, prior work on MCTest has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15% absolute).
Using sequential observations to model and predict player behavior
In this paper, we present a data-driven technique for designing models of user behavior. Previously, player models were designed using user surveys, small-scale observation experiments, or knowledge engineering. These methods generally produced semantically meaningful models that were limited in their applicability. To address this, we have developed a purely data-driven methodology for generating player models based on past observations of other players. Our underlying assumption is that we can accurately predict what a player will do in a given situation if we examine enough data from former players that were in similar situations. We have chosen to test our method on achievement data from the MMORPG World of Warcraft. Experiments show that our method greatly outperforms a baseline algorithm in both precision and recall, proving that this method can create accurate player models based solely on observation data.
Logically centralized?: state distribution trade-offs in software defined networks
Software Defined Networks (SDN) give network designers freedom to refactor the network control plane. One core benefit of SDN is that it enables the network control logic to be designed and operated on a global network view, as though it were a centralized application, rather than a distributed system - logically centralized. Regardless of this abstraction, control plane state and logic must inevitably be physically distributed to achieve responsiveness, reliability, and scalability goals. Consequently, we ask: "How does distributed SDN state impact the performance of a logically centralized control application?" Motivated by this question, we characterize the state exchange points in a distributed SDN control plane and identify two key state distribution trade-offs. We simulate these exchange points in the context of an existing SDN load balancer application. We evaluate the impact of inconsistent global network view on load balancer performance and compare different state management approaches. Our results suggest that SDN control state inconsistency significantly degrades performance of logically centralized control applications agnostic to the underlying state distribution.
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
Large scale graph processing is a major research area for Big Data exploration. Vertex-centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex-centric algorithms to under-perform due to a poor compute-to-communication ratio and slow convergence over iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex-centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex-centric implementation.
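A single-process toy can illustrate the sub-graph centric idea for connected components: each partition first resolves its components locally in shared memory, and only boundary edges drive the label exchange between sub-graphs. The graph, partitioning and data layout below are illustrative assumptions, not GoFFish's storage format or runtime:

```python
# Toy graph as an edge list, and a fixed (hypothetical) 2-way vertex partition.
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 3), (6, 7)]
partition = {0: {0, 1, 2, 6}, 1: {3, 4, 5, 7}}

def local_components(vertices, edges):
    """Union-find over the edges internal to one partition."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        if u in parent and v in parent:
            parent[find(u)] = find(v)
    return {v: find(v) for v in vertices}

# Step 1: resolve components inside each sub-graph in one shot (shared memory).
labels = {}
for pid, verts in partition.items():
    labels.update(local_components(verts, edges))

# Step 2: iterate only over boundary edges, propagating the minimum label; this
# mimics supersteps between whole sub-graphs rather than between single vertices.
boundary = [(u, v) for u, v in edges
            if not any(u in verts and v in verts for verts in partition.values())]
changed = True
while changed:
    changed = False
    for u, v in boundary:
        lo = min(labels[u], labels[v])
        for w in (u, v):
            if labels[w] != lo:
                old = labels[w]
                for x, l in labels.items():   # relabel w's whole local component
                    if l == old:
                        labels[x] = lo
                changed = True

print(labels)   # vertices 0-5 share one label; 6 and 7 share another
```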
Buprenorphine dosing regime for inpatient heroin withdrawal: a symptom-triggered dose titration study.
The study aimed to identify the range of buprenorphine doses required to comfortably alleviate symptoms in patients undergoing inpatient heroin withdrawal using a symptom-triggered titration dosing regime, and to identify the patient characteristics that impact upon the buprenorphine dose requirements. The study was conducted in two Australian inpatient withdrawal units, recruiting 63 dependent, injecting heroin users with no recent methadone treatment, dependence on other drugs, or other active medical or psychiatric conditions. In a single (patient) blinded case series, placebo or 2 mg sublingual buprenorphine tablets were administered four times a day according to the severity of withdrawal (assessed with the Subjective Opiate Withdrawal Scale). Up to 16 mg buprenorphine was available over the first 4 days of the admission, up to 8 mg on day 5, and placebo continued until day 6. Thirty-two subjects completed the dosing regime, with mean (+/-S.D.) daily doses of 3.8+/-2.8 mg on day 1, 5.8+/-3.2 mg on day 2, 4.8+/-3.3 mg on day 3, 2.3+/-2.6 mg on day 4, 0.8+/-1.3 mg on day 5, and a total dose of 17.4+/-9.7 mg. Higher buprenorphine doses were required by patients with more severe psychosocial dysfunction, women, those with more frequent heroin use, and those with more severe dependence on heroin at intake. A dosing regime using sublingual buprenorphine tablets for short inpatient heroin withdrawal is proposed.
The TGFB1 Functional Polymorphism rs1800469 and Susceptibility to Atrial Fibrillation in Two Chinese Han Populations
Transforming growth factor-β1 (TGF-β1) is related to the degree of atrial fibrosis and plays critical roles in the induction and perpetuation of atrial fibrillation (AF). To investigate the association of the common promoter polymorphism rs1800469 in the TGF-β1 gene (TGFB1) with the risk of AF in the Chinese Han population, we carried out a case-control study of two hospital-based independent populations: a Southeast Chinese population (581 patients with AF and 723 controls) and a Northeast Chinese population (308 AF patients and 292 controls). Two hundred and seventy-eight cases of AF were lone AF and 334 cases of AF were diagnosed as paroxysmal AF. In both populations, AF patients had larger left atrial diameters than the controls did. The rs1800469 genotypes in the TGFB1 gene were determined by polymerase chain reaction-restriction fragment length polymorphism. The genotype and allele frequencies of rs1800469 did not differ between AF patients and controls in the Southeast Chinese population, the Northeast Chinese population, or the total study population. After adjustment for age, sex, hypertension and LAD, there was no association between the rs1800469 polymorphism and the risk of AF under the dominant, recessive and additive genetic models. Similar results were obtained from subanalysis of the lone and paroxysmal AF subgroups. Our results do not support a role for the TGFB1 rs1800469 functional gene variant in the development of AF in the Chinese Han population.
Using grounded theory to study the experience of software development
Grounded Theory is a research method that generates theory from data and is useful for understanding how people resolve problems that are of concern to them. Although the method looks deceptively simple in concept, implementing Grounded Theory research can often be confusing in practice. Furthermore, despite many papers in the social science disciplines and nursing describing the use of Grounded Theory, there are very few examples and relevant guides for the software engineering researcher. This paper describes our experience using classical (i.e., Glaserian) Grounded Theory in a software engineering context and attempts to interpret the canons of classical Grounded Theory in a manner that is relevant to software engineers. We provide a model to help software engineering researchers interpret the often fuzzy definitions found in Grounded Theory texts and share our experience and lessons learned during our research. We summarize these lessons learned in a set of fifteen guidelines.
Development of a short form of Stroke-Specific Quality of Life Scale for patients after aneurysmal subarachnoid hemorrhage
BACKGROUND The identification of aneurysmal subarachnoid hemorrhage (aSAH) patients with a decrease in health-related quality of life (HRQOL) is challenging. The Stroke-Specific Quality of Life (SS-QOL) Scale is one of the commonest disease-specific quality of life measures, initially developed and validated for ischemic stroke patients. A disadvantage is subject burden, and a short form is more practical to use in clinical and research settings. AIM This study aimed to develop a short form (12-item) of a Chinese version of the Stroke-Specific Quality of Life Scale for aSAH (SSQOL-a) for clinical and research applications. METHODS We carried out a prospective observational assessor-blinded multi-center study in Hong Kong. The study was registered at ClinicalTrials.gov of the U.S. National Institutes of Health (NCT01038193) and was approved by hospital ethics committees. RESULTS One hundred and eighty-six aSAH patients were recruited during admission over a 30-month period. One hundred (54%) aSAH patients completed the 12-month assessment battery and were included in the current study. The total score, physical component score, and psychosocial score of the 12-item Chinese version showed satisfactory internal consistency and explained high percentages of the variance of the full Chinese version (92% to 96%). The 12-item Chinese version showed significant correlations with neurological, functional, generic quality of life, psychiatric, and cognitive outcome measures at 12 months. The physical subscore calculated with the Chinese version had better discrimination in detecting complete recovery than that calculated with the Dutch version in our Chinese population. CONCLUSIONS The 12-item Chinese version of the SSQOL-a has satisfactory internal consistency and criterion validity for SAH patients at 12-month assessments.
Anthropometry, silhouette trajectory, and risk of breast cancer in Mexican women.
BACKGROUND Obesity has been associated with breast cancer risk in the Caucasian population but the association remains unclear in the Hispanics. Previous studies conducted among Hispanics in the U.S. have shown inconsistent results. PURPOSE The association between anthropometry, body shape evolution across lifetime, and the risk of breast cancer was assessed using a multi-center population-based case-control study conducted in Mexico. METHODS One thousand incident cases and 1074 matched control women aged 35-69 years were recruited between 2004 and 2007, and analyzed in 2011-2012. Conditional logistic regression models were used. RESULTS Height was related to an increased risk of breast cancer in both premenopausal (p trend=0.03) and postmenopausal women (p trend=0.002). In premenopausal women, increase in BMI; waist circumference (WC); hip circumference (HC); and waist-hip ratio (WHR) were inversely associated with breast cancer risk (p trends<0.001 for BMI and WC, 0.003 for HC, and 0.016 for WHR). In postmenopausal women, decreased risks were observed for increased WC (p trend=0.004) and HC (p trend=0.009) among women with time since menopause <10 years. Further analysis of body shape evolution throughout life showed strong and significant increase in risk of breast cancer among women with increasing silhouettes size over time compared to women with no or limited increase. CONCLUSIONS These findings suggest that anthropometric factors may have different associations with breast cancer risk in Hispanic women than in Caucasian women. This study also shows the importance of considering the evolution of body shape throughout life.
Phonological processing and reading in children with speech sound disorders.
PURPOSE To examine the relationship between phonological processing skills prior to kindergarten entry and reading skills at the end of 1st grade, in children with speech sound disorders (SSD). METHOD The participants were 17 children with SSD and poor phonological processing skills (SSD-low PP), 16 children with SSD and good phonological processing skills (SSD-high PP), and 35 children with typical speech who were first assessed during their prekindergarten year using measures of phonological processing (i.e., speech perception, rime awareness, and onset awareness tests), speech production, receptive and expressive language, and phonological awareness skills. This assessment was repeated when the children were completing 1st grade. The Test of Word Reading Efficiency was also conducted at that time. First-grade sight word and nonword reading performance was compared across these groups. RESULTS At the end of 1st grade, the SSD-low PP group achieved significantly lower nonword decoding scores than the SSD-high PP and typical speech groups. The 2 SSD groups demonstrated similarly good receptive language skills and similarly poor articulation skills at that time, however. No between-group differences in sight word reading were observed. All but 1 child (in the SSD-low PP group) obtained reading scores that were within normal limits. CONCLUSION Weaknesses in phonological processing were stable for the SSD-low PP subgroup over a 2-year period.
Penile girth augmentation using flaps "Shaeer's augmentation phalloplasty": a case report.
INTRODUCTION Current girth augmentation techniques rely either on liposuction/injection or on the use of dermal fat grafts. These procedures have serious disadvantages, including regression in gained size, deformities, irregular contour, and asymmetry. Ideally, the augmentation technique should ensure durability and symmetry. This case report describes the first application of a flap (superficial circumflex iliac artery island flap) in penile girth augmentation. MATERIALS AND METHODS The superficial circumflex iliac vessels were identified and the groin flap was elevated from lateral to medial, rotated toward the penis, and tunneled into a penopubic incision. It was wrapped around the penis short of the corpus spongiosum and insinuated under the glans. RESULTS Six months after surgery, the patient had an erect girth of 19.5 cm and a flaccid girth of 16.5 cm, compared with 11 cm and 7 cm, respectively, before surgery, thus maintaining the intraoperative girth gain. The outer surface felt smooth with no lobulation. The size of the glans was proportionate to the shaft's girth. CONCLUSION This case report shows that the application of flaps in penile girth augmentation may provide a reliable alternative to the currently applied techniques. Glans flaring promotes the aesthetic results and is applicable with other techniques of penile girth augmentation.
450mm FOUP/LPU system in advanced semiconductor manufacturing processes: A study on the minimization of oxygen content inside FOUP when the door is opened
In the last 15 years, the FOUP/LPU (front opening unified pod (FOUP) and load-port unit (LPU)) module was adopted by major 300 mm wafer semiconductor fabs and proved able to create a highly particle-free environment for wafer transfer. However, it is not able to provide an environment free of moisture, oxygen or airborne molecular contaminants (AMCs), as these enter the FOUP through the filter, the FOUP material, and/or the preceding (in-process) steps. Currently, the technology roadmap of devices has already moved into the sub-20 nm era, in some cases even to the 10 nm node. Devices made with such small-scale patterns are generally very sensitive to moisture, oxygen and other AMCs in the air. For example, after an etching process, a contaminant in the form of an AMC may evaporate, deposit, and contaminate wafers in later processes, such as CMP. The deposited AMC may, again, evaporate and deposit on the wafer of the next process. Nitrogen gas purging of a stationary, door-closed FOUP, normally when the FOUP is at a purge station or a FOUP stocker, has been adopted in many processes to minimize the exposure of sensitive in-process wafers to those contaminants. However, gas purging performed when the FOUP door is off, i.e., the FOUP is open (hereafter referred to as the "door-off" condition), is still rare. Nevertheless, this approach is urgently needed for sub-20 nm processes. If oxygen is not of concern, Clean Dry Air (CDA) is an alternative purge gas to nitrogen; note that nitrogen is much more expensive than CDA and carries potential safety concerns. In-process steps such as etching and Chemical Mechanical Polishing (CMP) require FOUP purging while the FOUP door is open to an Equipment Front End Module (EFEM) load port. This door-off condition comes with exceptional challenges compared to stationary, door-closed conditions. To overcome this critical challenge, a new FOUP/LPU purge system is proposed. The system includes two uniform purge diffusers plus a top-down pure gas curtain created by a so-called "flow field former" when the FOUP is in the door-off condition. Note that a conceptual patent on the "flow field former" in this proposal has been applied for (under review). In implementing this project, a prototype FOUP/LPU purge system will first be built in an ISO class 1 (0.1 um) cleanroom. Various environmental parameters in the FOUP, including temperature, relative humidity, air velocity magnitude, and particle concentration, will be monitored. Visualization of the flow pattern in the FOUP and in the vicinity of the door edge will be carried out with a green-light laser visualization system. The optimized size/dimensions and operating parameters of the flow field former will be determined based on the overall testing results. The performance of the newly proposed system will eventually be verified in a production line of a prestigious semiconductor fab. The ultimate objective of this project is to prevent cross contamination and surface oxidation through quick control of moisture, oxygen and AMCs when the FOUP is in the door-off condition, using an efficient and purge-gas-saving system.
A Steganography Technique for Hiding Image in an Image using LSB Method for 24 Bit Color Image
In this paper, the authors propose a steganographic technique using an improved LSB (least significant bit) replacement method for 24-bit color images, capable of producing a secret-embedded image that is totally indistinguishable from the original image by the human eye. In addition, this paper shows how the improved LSB method for 24-bit color images is better than the LSB technique for 8-bit color images. First, the LSB methods for both 8-bit and 24-bit color images are described, followed by the improved LSB method for 24-bit color images; their results are compared by calculating PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Squared Error), and finally by histogram analysis. The LSB algorithm embeds the MSBs of the secret image into the LSBs of the cover image. In the case of 24-bit color images, two methods are described. In the first method, the last 2 LSBs of each plane (red, green and blue) of the cover image are replaced by 2 MSBs of the secret image. In the second method, the last LSB of the red plane is replaced by the first MSB of the secret image, the last 2 LSBs of the green plane by the next 2 MSBs of the secret image, and the last 3 LSBs of the blue plane by the next 3 MSBs of the secret image. This means that a total of 6 bits of the secret image can be hidden per pixel of a 24-bit color image. Experimental results show that the stego-image is visually indistinguishable from the original cover-image in the 24-bit case.
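A small numpy sketch may make the second embedding scheme concrete: the top 6 bits of each secret pixel are spread over the cover pixel as 1 bit in the red LSB, 2 bits in the green LSBs and 3 bits in the blue LSBs. The exact bit ordering, the grayscale secret image and the PSNR helper are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def embed_6bit(cover_rgb, secret_gray):
    """cover_rgb: HxWx3 uint8 cover image; secret_gray: HxW uint8 secret image."""
    top6 = secret_gray >> 2                     # keep the 6 most significant bits
    r_bits = (top6 >> 5) & 0b1                  # 1st MSB   -> red LSB
    g_bits = (top6 >> 3) & 0b11                 # next 2    -> green LSBs
    b_bits = top6 & 0b111                       # next 3    -> blue LSBs
    stego = cover_rgb.copy()
    stego[..., 0] = (stego[..., 0] & ~np.uint8(0b1))   | r_bits
    stego[..., 1] = (stego[..., 1] & ~np.uint8(0b11))  | g_bits
    stego[..., 2] = (stego[..., 2] & ~np.uint8(0b111)) | b_bits
    return stego

def extract_6bit(stego_rgb):
    """Recover an approximation of the secret (its lowest 2 bits are lost)."""
    r = stego_rgb[..., 0] & 0b1
    g = stego_rgb[..., 1] & 0b11
    b = stego_rgb[..., 2] & 0b111
    return ((r << 7) | (g << 5) | (b << 2)).astype(np.uint8)

def psnr(cover, stego):
    """PSNR in dB between cover and stego image, as used to judge imperceptibility."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    secret = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    stego = embed_6bit(cover, secret)
    print("PSNR(cover, stego) =", round(psnr(cover, stego), 2), "dB")
```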
Combining Models and Exemplars for Face Recognition : An Illuminating Example
We propose a model- and exemplar-based approach for face recognition. This problem has been previously tackled using either models or exemplars, with limited success. Our idea uses models to synthesize many more exemplars, which are then used in the learning stage of a face recognition system. To demonstrate this, we develop a statistical shape-from-shading model to recover face shape from a single image, and to synthesize the same face under new illumination. We then use this to build a simple and fast classifier that was not possible before because of a lack of training data.
Global effects of smoking, of quitting, and of taxing tobacco.
Parallel Large Scale Feature Selection for Logistic Regression
In this paper we examine the problem of efficient feature evaluation for logistic regression on very large data sets. We present a new forward feature selection heuristic that ranks features by their estimated effect on the resulting model’s performance. An approximate optimization, based on backfitting, provides a fast and accurate estimate of each new feature’s coefficient in the logistic regression model. Further, the algorithm is highly scalable by parallelizing simultaneously over both features and records, allowing us to quickly evaluate billions of potential features even for very large data sets.
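As a rough, single-machine sketch of the idea (without the paper's parallelization over features and records), each candidate feature can get its coefficient estimated by a few one-dimensional Newton steps while the current model is held fixed, and features can then be ranked by the resulting log-likelihood gain. The function names and the synthetic data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def approx_gain(base_logits, x_new, y, steps=5):
    """Estimate the coefficient and log-likelihood gain of one candidate feature,
    holding the existing model (its logits) fixed: a backfitting-style shortcut."""
    beta = 0.0
    for _ in range(steps):                        # 1-D Newton on the new coefficient only
        p = sigmoid(base_logits + beta * x_new)
        grad = np.dot(x_new, y - p)
        hess = np.dot(x_new * x_new, p * (1 - p)) + 1e-12
        beta += grad / hess
    def loglik(logits):
        p = np.clip(sigmoid(logits), 1e-12, 1 - 1e-12)
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, loglik(base_logits + beta * x_new) - loglik(base_logits)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    X = rng.standard_normal((n, 3))
    y = (rng.random(n) < sigmoid(1.5 * X[:, 0] - 0.5)).astype(float)
    base = np.full(n, -0.5)                       # logits of an intercept-only model
    # This loop over candidate features is what would be distributed; the best-scoring
    # feature is then added to the model, the model refit fully, and the process repeated.
    for j in range(3):
        beta, gain = approx_gain(base, X[:, j], y)
        print(f"feature {j}: beta~{beta:.2f}, log-likelihood gain~{gain:.1f}")
```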
Indonesian News Classification using Support Vector Machine
Digital news on a variety of topics is abundant on the internet. The problem is to classify news items into their appropriate categories so that users can find relevant news rapidly. A classifier engine is used to assign news items automatically to their respective categories. This research employs a Support Vector Machine (SVM) to classify Indonesian news. SVM is a robust method for binary classification; its core is the construction of an optimal separating plane between the different classes. For the multiclass problem, a mechanism called one-against-one is used to combine the binary classification results. Documents were taken from the Indonesian digital news site, www.kompas.com. The experiment showed a promising result with an accuracy rate of 85%. This system is feasible to implement for Indonesian news classification. Keywords: classification, Indonesian news, text processing, support vector machine
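A minimal pipeline in this spirit can be sketched with scikit-learn, whose SVC classifier handles the multiclass case with the same one-against-one mechanism named above. The tiny example documents, the TF-IDF features and all parameter choices are illustrative assumptions, not the paper's preprocessing or its kompas.com corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical miniature training set: one headline per category.
docs = [
    "tim nasional menang dalam pertandingan sepak bola",   # sports
    "harga saham naik di bursa efek",                      # economy
    "pemilihan umum presiden akan digelar tahun depan",    # politics
]
labels = ["olahraga", "ekonomi", "politik"]

# TF-IDF features feeding a linear SVM; SVC combines the pairwise binary
# classifiers one-against-one to produce the multiclass decision.
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(docs, labels)

print(model.predict(["bursa saham melemah hari ini"]))     # likely "ekonomi" given shared vocabulary
```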
Providing the basis for human-robot-interaction: a multi-modal attention system for a mobile robot
In order to enable the widespread use of robots in home and office environments, systems with natural interaction capabilities have to be developed. A prerequisite for natural interaction is the robot's ability to automatically recognize when and how long a person's attention is directed towards it for communication. As in open environments several persons can be present simultaneously, the detection of the communication partner is of particular importance. In this paper we present an attention system for a mobile robot which enables the robot to shift its attention to the person of interest and to maintain attention during interaction. Our approach is based on a method for multi-modal person tracking which uses a pan-tilt camera for face recognition, two microphones for sound source localization, and a laser range finder for leg detection. Shifting of attention is realized by turning the camera into the direction of the person which is currently speaking. From the orientation of the head it is decided whether the speaker addresses the robot. The performance of the proposed approach is demonstrated with an evaluation. In addition, qualitative results from the performance of the robot at the exhibition part of the ICVS'03 are provided.
Cyber Scanning: A Comprehensive Survey
Cyber scanning refers to the task of probing enterprise networks or Internet-wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primary methodology adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services on one side and the proliferation of hackers' refined, advanced, and sophisticated techniques on the other side, the task of containing cyber scanning poses serious issues and challenges. Furthermore, there has recently been a flourishing of a cyber phenomenon dubbed cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities and exhibit high coordination - rendering almost all current detection techniques unfeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.
Analysis and Design of Phase-Shifted Dual H-Bridge Converter With a Wide ZVS Range and Reduced Output Filter
In this paper, a phase-shifted dual H-bridge converter, which can solve the drawbacks of existing phase-shifted full-bridge converters such as narrow zero-voltage-switching (ZVS) range, large circulating current, large duty-cycle loss, and serious secondary-voltage overshoot and oscillation, is analyzed and evaluated. The proposed topology is composed of two symmetric half-bridge inverters that are placed in parallel on the primary side and are driven in a phase-shifting manner to regulate the output voltage. At the rectifier stage, a center-tap-type rectifier with two additional low-current-rated diodes is employed. This structure allows the proposed converter to have the advantages of a wide ZVS range, no problems related to duty-cycle loss, no circulating current, and the reduction of secondary-voltage oscillation and overshoot. Moreover, the output filter's size becomes smaller compared to the conventional phase-shift full-bridge converters. This paper describes the operation principle of the proposed converter and the analysis and design consideration in depth. A 1-kW 320-385-V input 50-V output laboratory prototype operating at a 100-kHz switching frequency is designed, built, and tested to verify the effectiveness of the presented converter.
DYNAMIC ASSET ALLOCATION WITH EVENT RISK
An inherent risk facing investors in financial markets is that a major event may trigger a large abrupt change in stock prices and market volatility. This paper studies the implications of jumps in prices and volatility on investment strategies. Using the event-risk framework of Duffie, Pan, and Singleton, we provide an analytical solution to the optimal portfolio problem. We find that event risk dramatically affects the optimal strategy. An investor facing event risk is less willing to take leveraged or short positions. In addition, the investor acts as if some portion of his wealth may become illiquid and the optimal strategy blends elements of both dynamic and buy-and-hold portfolio strategies. Jumps in prices and volatility both have an important influence on the optimal strategy.
Clustering Experiments on Big Transaction Data for Market Segmentation
This paper addresses the Volume dimension of Big Data. It presents a preliminary work on finding segments of retailers from a large amount of Electronic Funds Transfer at Point Of Sale (EFTPOS) transaction data. To the best of our knowledge, this is the first time a work on Big EFTPOS Data problem has been reported. A data reduction technique using the RFM (Recency, Frequency, Monetary) analysis as applied to a large data set is presented. Ways to optimise clustering techniques used to segment the big data set through data partitioning and parallelization are explained. Preliminary analysis on the segments of the retailers output from the clustering experiments demonstrates that further drilling down into the retailer segments to find more insights into their business behaviours is warranted.
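A minimal sketch of RFM-based data reduction followed by clustering is given below, assuming a hypothetical transaction table with columns retailer_id, txn_date and amount; the paper's actual partitioning and parallelization optimisations are not shown.

```python
# Minimal sketch of RFM-based data reduction followed by clustering.
# Column names (retailer_id, txn_date, amount) and the file are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

txns = pd.read_csv("eftpos_transactions.csv", parse_dates=["txn_date"])
snapshot = txns["txn_date"].max()

# Collapse millions of transactions into one RFM row per retailer.
rfm = txns.groupby("retailer_id").agg(
    recency=("txn_date", lambda d: (snapshot - d.max()).days),
    frequency=("txn_date", "count"),
    monetary=("amount", "sum"),
)

# Segment retailers on the standardized RFM features.
X = StandardScaler().fit_transform(rfm)
rfm["segment"] = KMeans(n_clusters=5, n_init=10).fit_predict(X)
```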
Comparison of embedded and added motor imagery training in patients after stroke: study protocol of a randomised controlled pilot trial using a mixed methods approach
BACKGROUND Two different approaches have been adopted when applying motor imagery (MI) to stroke patients. MI can be conducted either added to conventional physiotherapy or integrated within therapy sessions. The proposed study aims to compare the efficacy of an embedded MI intervention to an added MI intervention. Evidence from pilot studies reported in the literature suggests that both approaches can improve performance of a complex motor skill involving whole body movements; however, it remains to be demonstrated which is the more effective one. METHODS/DESIGN A single-blinded, randomised controlled trial (RCT) with a pre-post intervention design will be carried out. The study design includes two experimental groups and a control group (CG). Both experimental groups (EG1, EG2) will receive physical practice of a clinically relevant motor task ('Going down, laying on the floor, and getting up again') over a two-week intervention period: EG1 with embedded MI training, EG2 with MI training added after physiotherapy. The CG will receive the standard physiotherapy intervention and an additional control intervention not related to MI. The primary study outcome is the difference in the time to perform the task from pre- to post-intervention. Secondary outcomes include level of help needed, stages of motor task completion, degree of motor impairment, balance ability, fear of falling measure, motivation score, and motor imagery ability score. Four data collection points are proposed: twice during the baseline phase, once following the intervention period, and once after a two-week follow-up. A nested qualitative part will add an important insight into patients' experience and attitudes towards MI. Semi-structured interviews of six to ten patients who participate in the RCT will be conducted to investigate patients' previous experience with MI and their expectations towards the MI intervention in the study. Patients will be interviewed prior to and after the intervention period. DISCUSSION Results will determine whether embedded MI is superior to added MI. Findings of the semi-structured interviews will help to integrate patients' expectations of MI interventions into the design of research studies, to improve the practical applicability of MI as an adjunct therapy technique.
FIXED POINTS AND STABILITY IN NONLINEAR NEUTRAL VOLTERRA INTEGRO-DIFFERENTIAL EQUATIONS WITH VARIABLE DELAYS
In this paper we use the contraction mapping theorem to obtain asymptotic stability results for the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not require the boundedness of the delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improves and extends the results in the literature. Two examples are also given to illustrate this work.
Effect of deworming on disease progression markers in HIV-1-infected pregnant women on antiretroviral therapy: a longitudinal observational study from Rwanda.
BACKGROUND Deworming human immunodeficiency virus (HIV)-infected individuals on antiretroviral therapy (ART) may be beneficial, particularly during pregnancy. We determined the efficacy of targeted and nontargeted antihelminth therapy and its effects on Plasmodium falciparum infection status, hemoglobin levels, CD4 counts, and viral load in pregnant, HIV-positive women receiving ART. METHODS Nine hundred eighty HIV-infected pregnant women receiving ART were examined at 2 visits during pregnancy and 2 postpartum visits within 12 weeks. Women were given antimalarials when malaria-positive whereas albendazole was given in a targeted (n = 467; treatment when helminth stool screening was positive) or nontargeted (n = 513; treatment at all time points, with stool screening) fashion. RESULTS No significant differences were noted between targeted and nontargeted albendazole treatments for the variables measured at each study visit except for CD4 counts, which were lower (P < .05) in the latter group at the final visit. Albendazole therapy was associated with favorable changes in subjects' hemoglobin levels, CD4 counts, and viral loads, particularly with helminth infections. CONCLUSIONS Antihelminthic therapy reduces detectable viral load, and increases CD4 counts and hemoglobin levels in pregnant HIV-infected women with helminth coinfections receiving ART.
Edge Detection with Embedded Confidence
Computing the weighted average of the pixel values in a window is a basic module in many computer vision operators. The process is reformulated in a linear vector space and the role of the different subspaces is emphasized. Within this framework, well-known artifacts of gradient-based edge detectors, such as large spurious responses, can be explained quantitatively. It is also shown that template matching with a template derived from the input data is meaningful, since it provides an independent measure of confidence in the presence of the employed edge model. The widely used three-step edge detection procedure (gradient estimation, nonmaxima suppression, hysteresis thresholding) is generalized to include the information provided by the confidence measure. The additional amount of computation is minimal, and experiments with several standard test images show the ability of the new procedure to detect weak edges. Index Terms—Edge detection, performance assessment, gradient estimation, window operators.
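For reference, the baseline three-step procedure that the confidence-based detector generalizes corresponds to the standard Canny pipeline; a minimal sketch using scikit-image is shown below. The input file name and parameter values are hypothetical, and the paper's confidence-augmented variant is not reproduced.

```python
# Baseline three-step edge detection (gradient estimation, non-maxima
# suppression, hysteresis thresholding); this is the standard Canny
# pipeline, not the authors' confidence-augmented detector.
import numpy as np
from skimage import io, color, feature

img = color.rgb2gray(io.imread("test_image.png"))  # hypothetical input file
edges = feature.canny(img, sigma=1.5, low_threshold=0.05, high_threshold=0.15)
print("edge pixels:", int(np.count_nonzero(edges)))
```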
Trademark Image Retrieval Using a Combination of Deep Convolutional Neural Networks
Trademarks are recognizable images and/or words used to distinguish various products or services. They become associated with the reputation, innovation, quality, and warranty of the products. Countries around the world have offices for industrial/intellectual property (IP) registration. A new trademark image in an application for registration should be distinct from all the registered trademarks. Due to the volume of trademark registration applications and the size of the databases containing existing trademarks, it is impossible for humans to make all the comparisons visually. Therefore, technological tools are essential for this task. In this work we use a pre-trained, publicly available Convolutional Neural Network (CNN), VGG19, that was trained on the ImageNet database. We adapted the VGG19 for the trademark image retrieval (TIR) task by fine-tuning the network using two different databases. The VGG19v was trained with a database of trademark images organized by visual similarity, and the VGG19c was trained using trademarks organized by conceptual similarity. The database for the VGG19v was built using trademarks downloaded from the web and organized by visual similarity according to experts from the IP office. The database for the VGG19c was built using trademark images from the United States Patent and Trademark Office, organized according to the Vienna conceptual protocol. The TIR was assessed using the normalized average rank for a test set from the METU database, which has 922,926 trademark images. We computed the normalized average ranks for VGG19v, VGG19c, and for a combination of both networks. Our method achieved significantly better results on the METU database than those published previously.
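The normalized average rank used for evaluation can be computed as in the following sketch, which follows the commonly used definition (0 for perfect retrieval, roughly 0.5 for random ranking); the paper's exact evaluation protocol may differ in details.

```python
# Normalized average rank as commonly defined for retrieval benchmarks;
# 0 means all relevant items are ranked first.
def normalized_average_rank(ranks, n_relevant, db_size):
    """ranks: 1-based ranks at which the relevant items were retrieved."""
    ideal = n_relevant * (n_relevant + 1) / 2.0
    return (sum(ranks) - ideal) / (db_size * n_relevant)

# Example: 3 relevant trademarks retrieved at ranks 1, 4 and 10
# from a database of 1000 images (illustrative numbers only).
print(normalized_average_rank([1, 4, 10], n_relevant=3, db_size=1000))
```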
Relevance of mouse models of cardiac fibrosis and hypertrophy in cardiac research
Heart disease causing cardiac cell death due to ischemia–reperfusion injury is a major cause of morbidity and mortality in the United States. Coronary heart disease and cardiomyopathies are the major causes of congestive heart failure, and thrombosis of the coronary arteries is the most common cause of myocardial infarction. Cardiac injury is followed by post-injury cardiac remodeling or fibrosis. Cardiac fibrosis is characterized by a net accumulation of extracellular matrix proteins in the cardiac interstitium and results in both systolic and diastolic dysfunction. Both experimental and clinical evidence suggest that fibrotic changes in the heart are reversible. Hence, it is vital to understand the mechanisms involved in the initiation, progression, and resolution of cardiac fibrosis in order to design anti-fibrotic treatment modalities. Animal models are of great importance for cardiovascular research studies. In a developing research field, the choice of animal model for a proposed study is crucial for its outcome and translational purpose. Compared to large animal models for cardiac research, the mouse model is preferred by many investigators because of the possibilities for genetic manipulation and easier handling. This critical review focuses on providing insight to young researchers about the various mouse models, their advantages and disadvantages, and their use in research pertaining to cardiac fibrosis and hypertrophy.
On Detecting Adversarial Perturbations
Machine learning, and deep learning in particular, has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small “detector” subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector, and a novel training procedure for the detector that counteracts this attack.
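A minimal sketch of the detector idea is shown below: a small binary-classification head attached to an intermediate feature map of a frozen classifier. The attachment point, layer sizes and training details are assumptions, not the paper's architecture.

```python
# Minimal sketch of a detector subnetwork attached to an intermediate
# feature map of a (frozen) classifier; sizes and attachment point are
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class Detector(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # logit: adversarial vs. genuine
        )

    def forward(self, features):
        return self.net(features)

# Training fragment: the classifier stays fixed, the detector is trained
# on a binary label (0 = clean input, 1 = adversarially perturbed input).
def detector_loss(detector, features, is_adversarial):
    logits = detector(features).squeeze(1)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, is_adversarial.float())
```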
Creativity Support Tools: Report From a U.S. National Science Foundation Sponsored Workshop
Microservices: A Systematic Mapping Study
Microservices have recently emerged as an architectural style, addressing how to build, manage, and evolve architectures out of small, self-contained units. Particularly in the cloud, the microservices architecture approach seems to be an ideal complement to container technology at the PaaS level. However, there is currently no secondary study to consolidate this research. We aim here to identify, taxonomically classify, and systematically compare the existing research body on microservices and their application in the cloud. We have conducted a systematic mapping study of 21 selected studies published since the emergence of the microservices pattern, covering the two years up to the end of 2015. We classified and compared the selected studies based on a characterization framework. This results in a discussion of the agreed and emerging concerns within the microservices architectural style, positioning it within a continuous development context, but also moving it closer to cloud and container technology.
Image Stitching Using Structure Deformation
The aim of this paper is to achieve seamless image stitching without producing visual artifact caused by severe intensity discrepancy and structure misalignment, given that the input images are roughly aligned or globally registered. Our new approach is based on structure deformation and propagation for achieving the overall consistency in image structure and intensity. The new stitching algorithm, which has found applications in image compositing, image blending, and intensity correction, consists of the following main processes. Depending on the compatibility and distinctiveness of the 2D features detected in the image plane, single or double optimal partitions are computed subject to the constraints of intensity coherence and structure continuity. Afterwards, specific 1D features are detected along the computed optimal partitions from which a set of sparse deformation vectors is derived to encode 1D feature matching between the partitions. These sparse deformation cues are robustly propagated into the input images by solving the associated minimization problem in gradient domain, thus providing a uniform framework for the simultaneous alignment of image structure and intensity. We present results in general image compositing and blending in order to show the effectiveness of our method in producing seamless stitching results from complex input images.
Offline Recognition of Devanagari Script: A Survey
In India, more than 300 million people use Devanagari script for documentation. There has been a significant improvement in the research related to the recognition of printed as well as handwritten Devanagari text in the past few years. The state of the art in machine-printed and handwritten Devanagari optical character recognition (OCR), from the 1970s onwards, is discussed in this paper. All feature-extraction techniques as well as training, classification and matching techniques useful for the recognition are discussed in various sections of the paper. An attempt is made to address the most important results reported so far, and to highlight the most promising research directions to date. Moreover, the paper contains a comprehensive bibliography of selected papers that appeared in reputed journals and conference proceedings, as an aid for researchers working in the field of Devanagari OCR.
Enterprise Performance, Privatization and the Role of Ownership in Portugal
In both economically developed and developing countries, privatisation, budget austerity measures and market liberalisations have become key aspects of structural reform programs in the last three decades. These three recommended policies were part of a strong revival of the classical and neo-classical schools of thought since the mid-1970s. Such programs aim to achieve higher microeconomic efficiency and foster economic growth, whilst also aspiring to reduce public sector borrowing requirements through the elimination of unnecessary subsidies. For firms to achieve superior performance, a change in ownership from public (state ownership) to private has been recommended as a vital condition. To assess the role of ownership, the economic performance of private, public and mixed enterprises in Portugal is compared using the factor analysis method. The factors extracted from data for two years, 1998 and 2000, do not identify ownership as a key performance factor.
Diet induced epigenetic changes and their implications for health.
Dietary exposures can have consequences for health years or decades later and this raises questions about the mechanisms through which such exposures are 'remembered' and how they result in altered disease risk. There is growing evidence that epigenetic mechanisms may mediate the effects of nutrition and may be causal for the development of common complex (or chronic) diseases. Epigenetics encompasses changes to marks on the genome (and associated cellular machinery) that are copied from one cell generation to the next, which may alter gene expression, but which do not involve changes in the primary DNA sequence. These include three distinct, but closely inter-acting, mechanisms including DNA methylation, histone modifications and non-coding microRNAs (miRNA) which, together, are responsible for regulating gene expression not only during cellular differentiation in embryonic and foetal development but also throughout the life-course. This review summarizes the growing evidence that numerous dietary factors, including micronutrients and non-nutrient dietary components such as genistein and polyphenols, can modify epigenetic marks. In some cases, for example, effects of altered dietary supply of methyl donors on DNA methylation, there are plausible explanations for the observed epigenetic changes, but to a large extent, the mechanisms responsible for diet-epigenome-health relationships remain to be discovered. In addition, relatively little is known about which epigenomic marks are most labile in response to dietary exposures. Given the plasticity of epigenetic marks and their responsiveness to dietary factors, there is potential for the development of epigenetic marks as biomarkers of health for use in intervention studies.
Variable Forgetting in Reasoning about Knowledge
In this paper, we investigate knowledge reasoning within a simple framework called knowledge structure. We use variable forgetting as a basic operation for one agent to reason about its own or other agents’ knowledge. In our framework, two notions namely agents’ observable variables and the weakest sufficient condition play important roles in knowledge reasoning. Given a background knowledge base Γ and a set of observable variables Oi for each agent i, we show that the notion of agent i knowing a formula φ can be defined as a weakest sufficient condition of φ over Oi under Γ. Moreover, we show how to capture the notion of common knowledge by using a generalized notion of weakest sufficient condition. Also, we show that public announcement operator can be conveniently dealt with via our notion of knowledge structure. Further, we explore the computational complexity of the problem whether an epistemic formula is realized in a knowledge structure. In the general case, this problem is PSPACE-hard; however, for some interesting subcases, it can be reduced to co-NP. Finally, we discuss possible applications of our framework in some interesting domains such as the automated analysis of the well-known muddy children puzzle and the verification of the revised Needham-Schroeder protocol. We believe that there are many scenarios where the natural presentation of the available information about knowledge is under the form of a knowledge structure. What makes it valuable compared with the corresponding multi-agent S5 Kripke structure is that it can be much more succinct.
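The following sketch illustrates variable forgetting and the weakest sufficient condition using sympy's Boolean algebra; it follows the standard definitions (forgetting a variable as the disjunction of the two substitutions, and the weakest sufficient condition as the negated forgetting of Γ ∧ ¬φ over the non-observable variables) and is not the paper's own implementation.

```python
# Illustrative encoding of variable forgetting and the weakest sufficient
# condition; formulas and variable names are a made-up example.
from sympy import symbols, Or, And, Not, simplify_logic

def forget(f, variables):
    # forget(f, x) = f[x/True] | f[x/False], applied variable by variable
    for x in variables:
        f = Or(f.subs(x, True), f.subs(x, False))
    return simplify_logic(f)

def weakest_sufficient_condition(phi, observable, gamma, all_vars):
    hidden = [v for v in all_vars if v not in observable]
    return simplify_logic(Not(forget(And(gamma, Not(phi)), hidden)))

p, q, o = symbols("p q o")
gamma = And(Or(Not(o), p), Or(Not(p), q))   # background: o -> p, p -> q
# What must an agent observing only o see in order to know q?  Prints: o
print(weakest_sufficient_condition(q, [o], gamma, [p, q, o]))
```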
Analysis of recursive state machines
Recursive state machines (RSMs) enhance the power of ordinary state machines by allowing vertices to correspond either to ordinary states or to potentially recursive invocations of other state machines. RSMs can model the control flow in sequential imperative programs containing recursive procedure calls. They can be viewed as a visual notation extending Statecharts-like hierarchical state machines, where concurrency is disallowed but recursion is allowed. They are also related to various models of pushdown systems studied in the verification and program analysis communities. After introducing RSMs and comparing their expressiveness with other models, we focus on whether verification can be efficiently performed for RSMs. Our first goal is to examine the verification of linear time properties of RSMs. We begin this study by dealing with two key components for algorithmic analysis and model checking, namely, reachability (Is a target state reachable from initial states?) and cycle detection (Is there a reachable cycle containing an accepting state?). We show that both these problems can be solved in time O(nθ²) and space O(nθ), where n is the size of the recursive machine and θ is the maximum, over all component state machines, of the minimum of the number of entries and the number of exits of each component. From this, we easily derive algorithms for linear time temporal logic model checking with the same complexity in the model. We then turn to properties in the branching time logic CTL*, and again demonstrate a bound linear in the size of the state machine, but only for the case of RSMs with a single exit node.
Social Dominance: An Intergroup Theory of Social Hierarchy and Oppression
Image-based Airborne LiDAR Point Cloud Encoding for 3D Building Model Retrieval
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, supporting applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in the database are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related work are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, because of its efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometry of the building roof surface. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted using spatial histograms and used in the 3D model retrieval system. For data retrieval, models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods.
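A minimal sketch of the first encoding step is given below, rasterizing an airborne point cloud into a top-view depth image by keeping the highest return per grid cell; the grid resolution and the synthetic point cloud are assumptions, and the subsequent feature and histogram extraction are not shown.

```python
# Minimal sketch of generating a top-view depth image from an airborne
# LiDAR point cloud by keeping the highest return in each grid cell.
import numpy as np

def top_view_depth(points, cell=0.5):
    """points: (N, 3) array of x, y, z coordinates in metres."""
    xy = points[:, :2]
    col, row = ((xy - xy.min(axis=0)) / cell).astype(int).T
    depth = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, z in zip(row, col, points[:, 2]):
        if np.isnan(depth[r, c]) or z > depth[r, c]:
            depth[r, c] = z  # keep the highest point (roof surface)
    return depth

# Example with a synthetic point cloud (50 m x 50 m scene, heights up to 20 m).
pts = np.random.rand(1000, 3) * [50, 50, 20]
print(top_view_depth(pts, cell=1.0).shape)
```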
Position and Orientation Estimation Through Millimeter-Wave MIMO in 5G Systems
Millimeter-wave (mm-wave) signals and large antenna arrays are considered enabling technologies for future 5G networks. While their benefits for achieving high-data rate communications are well-known, their potential advantages for accurate positioning are largely undiscovered. We derive the Cramér-Rao bound (CRB) on position and rotation angle estimation uncertainty from mm-wave signals from a single transmitter, in the presence of scatterers. We also present a novel two-stage algorithm for position and rotation angle estimation that attains the CRB for average to high signal-to-noise ratio. The algorithm is based on multiple measurement vectors matching pursuit for coarse estimation, followed by a refinement stage based on the space-alternating generalized expectation maximization algorithm. We find that accurate position and rotation angle estimation is possible using signals from a single transmitter, in either line-of-sight, non-line-of-sight, or obstructed-line-of-sight conditions.
User analysis in HCI - the historical lessons from individual differences research
User analysis is a crucial aspect of user-centered systems design, yet Human-Computer Interaction (HCI) has yet to formulate reliable and valid characterizations of users beyond gross distinctions based on task and experience. Individual differences research from mainstream psychology has identified a stable set of characteristics that would appear to offer potential application in the HCI arena. Furthermore, in its evolution over the last 100 years, research on individual differences has faced many of the problems of theoretical status and applicability that are common to HCI. In the present paper, the relationship between work in cognitive and differential psychology and current analyses of users in HCI is examined. It is concluded that HCI could gain significant predictive power if individual differences research were related to the analysis of users in contemporary systems design.
Haemoglobin level and vascular access survival in haemodialysis patients.
BACKGROUND A full correction of anaemia in haemodialysis (HD) patients may lead to an increased risk of vascular access (VA) failure. We studied the relationship between haemoglobin (Hb) level and VA survival. METHODS Incident patients between January 2000 and December 2002 with <1 month on HD were considered. The relative risk (RR) of access failure was evaluated in four different groups of patients divided according to their Hb level (<10, 10-12, 12-13 and >13 g/dl). Other factors possibly influencing VA survival were also considered: age, gender, diabetes, vascular disease, intact parathyroid hormone (iPTH) and treatment with an angiotensin-converting enzyme (ACE) inhibitor, angiotensin receptor blocker (ARB) or recombinant human erythropoeitin therapy. RESULTS We studied 1254 patients (1057 with autologous fistulae, 75 grafts and 122 permanent catheters at admission). Based on Cox analysis, we found the next statistically significant RR of VA failure to be 2.3 times higher with grafts than with arterio-venous fistulae (AVFs) and 1.8 times higher in AVFs with Hb <10 g/dl than in AVFs of the next Hb group. There was no statistically significant difference in the RR of VA failure between patients with Hb 10-12 g/dl and those with Hb 12-13 g/dl or >13 g/dl. Diabetes (RR: 1.41, P = 0.06), age >65 years (RR: 1.32; P = 0.11) and iPTH (RR: 1.56; P = 0.01) were identified as predictive factors for VA failure; ACE inhibitors or ARB (RR: 0.69; P = 0.03) were found to be protective factors. CONCLUSIONS In the studied population, the correction of Hb level to >12 g/dl was not associated with a higher incidence of VA thrombosis than in patients with Hb between 10 and 12 g/dl. ACE inhibitors or ARBs were found to be protective factors, and diabetes, age >65 years and iPTH >400 pg/ml were negative predictive factors for VA survival.
Initial provisioning and spare parts inventory network optimisation in a multi maintenance base environment
Aviation spare parts provisioning is a highly complex problem. Traditionally, provisioning has been carried out using a conventional Poisson-based approach where inventory quantities are calculated separately for each part number and demands from different operations bases are consolidated into one single location. In an environment with multiple operations bases, however, such simplifications can lead to situations in which spares -- although available at another airport -- first have to be shipped to the location where the demand actually arose, leading to flight delays and cancellations. In this paper we demonstrate how simulation-based optimisation can help with the multi-location inventory problem by quantifying synergy potential between locations and how total service lifecycle cost can be further reduced without increasing risk right away from the Initial Provisioning (IP) stage onwards by taking into account advanced logistics policies such as pro-active re-balancing of spares between stocking locations.
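For contrast, the conventional single-location, Poisson-based provisioning that the paper moves away from can be sketched as follows; the demand rate, lead time and target fill rate are hypothetical numbers, and the simulation-based multi-location optimisation itself is not reproduced.

```python
# Conventional Poisson-based provisioning: stock the smallest quantity s
# such that the probability of covering lead-time demand meets a target.
from scipy.stats import poisson

def poisson_stock_level(annual_demand, lead_time_days, target=0.95):
    mean_lead_time_demand = annual_demand * lead_time_days / 365.0
    s = 0
    while poisson.cdf(s, mean_lead_time_demand) < target:
        s += 1
    return s

# Hypothetical part: 12 removals per year, 30-day repair lead time.
print(poisson_stock_level(annual_demand=12, lead_time_days=30, target=0.95))
```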
Automatic Video Summarization by Graph Modeling
We propose a unified approach to summarization based on the analysis of video structures and video highlights. Our approach emphasizes both the content balance and the perceptual quality of a summary. The normalized cut algorithm is employed to globally and optimally partition a video into clusters. A motion attention model based on human perception is employed to compute the perceptual quality of shots and clusters. The clusters, together with the computed attention values, form a temporal graph similar to a Markov chain that inherently describes the evolution and perceptual importance of video clusters. In our application, the flow of the temporal graph is utilized to group similar clusters into scenes, while the attention values are used as guidelines to select appropriate sub-shots in scenes for summarization.
A literature review on the state-of-the-art in patent analysis
The rapid growth of patent documents has called for the development of sophisticated patent analysis tools. Currently, various tools are being utilized by organizations for analyzing patents. These tools are capable of performing a wide range of tasks, such as analyzing and forecasting future technological trends, conducting strategic technology planning, detecting patent infringement, determining patent quality and the most promising patents, and identifying technological hotspots and patent vacuums. This literature review presents the state-of-the-art in patent analysis and a taxonomy of patent analysis techniques. Moreover, the key features and weaknesses of the discussed tools and techniques are presented, and several directions for future research are highlighted. The literature review will be helpful for researchers in finding the latest research efforts pertaining to patent analysis in a unified form.
Measuring Attitudes Towards Telepresence Robots
Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State of the art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate if NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated telepresence robots (n=38), and interacted with telepresence robots (n=12). Overall, the results from our three studies indicated that NARS may be applied to telepresence robots, and culture, gender, and prior robot experience can be influential factors on the NARS score.
Novel discrete element modelling of Gilbert‐type delta formation in an active tectonic setting—first results
Gilbert deltas are now recognised as an important stratigraphic component of many extensional basins. They are remarkable due to their coarse‐grained nature, large size and steep foresets (up to 30–35°) and may exhibit a variety of slope instability features (faulting, slump scars, avalanching, etc.). They are also often closely related to major, basin‐margin normal faults. There has been considerable research interest in Gilbert deltas, partly due to their economic significance as stratigraphic traps for hydrocarbons but also due to their sensitivity to relative base level changes, giving them an important role in basin analysis. In addition to field studies, numerical modelling has also been used to simulate such deltas, with some success. However, until now, such studies have typically employed continuum numerical techniques where the basic data elements created by simulations are stratigraphic volumes or timelines and the sediments themselves have no internal properties per se and merely represent areas/volumes of introduced coarse‐grained, clastic and sedimentary material. Faulting or folding (if present) are imposed externally and do not develop (naturally) within the modelled delta body itself. Here, I present first results from a novel 2D numerical model which simulates coarse‐grained (Gilbert‐type) deltaic sedimentation in an active extensional tectonic setting undergoing a relative base level rise. Sediment is introduced as packages of discrete elements which are deposited beneath sea level, from the shoreline, upon a pre‐existing basin or delta. These elements are placed carefully and then allowed to settle onto the system. The elements representing the coarse‐grained, deltaic sediments can have an intrinsic coefficient of friction, cohesion or other material properties appropriate to the system being considered. The spatial resolution of the modelling is of the order of 15 m and topsets, foresets, bottomsets, faults, slumps and collapse structures all form naturally in the modelled system. Examples of deltas developing as a result of sediment supply from both the footwall and hanging‐wall of a normal fault, and subject to changes in fault slip rate are presented. Implications of the modelling approach, and its application and utility in basin research, are discussed.
Health-related quality of life of patients with advanced breast cancer treated with everolimus plus exemestane versus placebo plus exemestane in the phase 3, randomized, controlled, BOLERO-2 trial.
BACKGROUND The randomized, controlled BOLERO-2 (Breast Cancer Trials of Oral Everolimus) trial demonstrated significantly improved progression-free survival with the use of everolimus plus exemestane (EVE + EXE) versus placebo plus exemestane (PBO + EXE) in patients with advanced breast cancer who developed disease progression after treatment with nonsteroidal aromatase inhibitors. This analysis investigated the treatment effects on health-related quality of life (HRQOL). METHODS Using the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core 30 (EORTC QLQ-C30) questionnaire, HRQOL was assessed at baseline and every 6 weeks thereafter until disease progression and/or treatment discontinuation. The 30 items in 15 subscales of the QLQ-C30 include global health status wherein higher scores (range, 0-100) indicate better HRQOL. This analysis included a protocol-specified time to definitive deterioration (TDD) analysis at a 5% decrease in HRQOL versus baseline, with no subsequent increase above this threshold. The authors report additional sensitivity analyses using 10-point minimal important difference decreases in the global health status score versus baseline. Treatment arms were compared using the stratified log-rank test and Cox proportional hazards model adjusted for trial stratum (visceral metastases, previous hormone sensitivity), age, sex, race, baseline global health status score and Eastern Cooperative Oncology Group performance status, prognostic risk factors, and treatment history. RESULTS Baseline global health status scores were found to be similar between treatment groups (64.7 vs 65.3). The median TDD in HRQOL was 8.3 months with EVE + EXE versus 5.8 months with PBO + EXE (hazard ratio, 0.74; P = .0084). At the 10-point minimal important difference, the median TDD with EVE + EXE was 11.7 months versus 8.4 months with PBO + EXE (hazard ratio, 0.80; P = .1017). CONCLUSIONS In patients with advanced breast cancer who develop disease progression after treatment with nonsteroidal aromatase inhibitors, EVE + EXE was associated with a longer TDD in global HRQOL versus PBO + EXE.
Implementation of a central line maintenance care bundle in hospitalized pediatric oncology patients.
OBJECTIVE To investigate whether a multidisciplinary, best-practice central line maintenance care bundle reduces central line-associated blood stream infection (CLABSI) rates in hospitalized pediatric oncology patients and to further delineate the epidemiology of CLABSIs in this population. METHODS We performed a prospective, interrupted time series study of a best-practice bundle addressing all areas of central line care: reduction of entries, aseptic entries, and aseptic procedures when changing components. Based on a continuous quality improvement model, targeted interventions were instituted to improve compliance with each of the bundle elements. CLABSI rates and epidemiological data were collected for 10 months before and 24 months after implementation of the bundle and compared in a Poisson regression model. RESULTS CLABSI rates decreased from 2.25 CLABSIs per 1000 central line days at baseline to 1.79 CLABSIs per 1000 central line days during the intervention period (incidence rate ratio [IRR]: 0.80, P = .58). Secondary analyses indicated CLABSI rates were reduced to 0.81 CLABSIs per 1000 central line days in the second 12 months of the intervention (IRR: 0.36, P = .091). Fifty-nine percent of infections resulted from Gram-positive pathogens, 37% of patients with a CLABSI required central line removal, and patients with Hickman catheters were more likely to have a CLABSI than patients with Infusaports (IRR: 4.62, P = .02). CONCLUSIONS A best-practice central line maintenance care bundle can be implemented in hospitalized pediatric oncology patients, although long ramp-up times may be necessary to reap maximal benefits. Further research is needed to determine if this CLABSI rate reduction can be sustained and spread.
A study of material effects for the panel level package (PLP) technology
The wafer level package (WLP) is a cost-effective solution for electronic package, and it has been increasingly applied during recent years. In this study, a new packaging technology which retains the advantages of WLP, the panel level package (PLP) technology, is proposed to further obtain the capability of signals fan-out for the fine-pitched integrated circuit (IC). In the PLP, the filler material is selected to fill the trench around the chip and provide a smooth surface for the redistribution lines. Therefore, the solder bumps could be located on both the filler and the chip surface, and the pitch of the chip side is fanned-out. In our previous research, it was found that the lifetime of solder joints in PLP can easily pass 3,500 cycles. The outstanding performance is explained by the application of a soft filler and a lamination material. However, it is also learned that the deformation of the lamination material during thermal loading may affect the reliability of the adjacent metal trace. In this study, the material effects of the proposed PLP technology are investigated and discussed through finite element analysis (FEA). A factorial analysis with three levels and three factors (the chip carrier, the lamination, and the filler material) is performed to obtain sensitivity information. Based on the results, the suggested combinations of packaging material in the PLP are provided. The reliability of the metal trace can be effectively improved by means of wisely applying materials in the PLP, and therefore, the PLP technology is expected to have a high potential for various applications in the near future.
Stability Criteria for Switched and Hybrid Systems
The study of the stability properties of switched and hybrid systems gives rise to a number of interesting and challenging mathematical problems. The objective of this paper is to outline some of these problems, to review progress made in solving these problems in a number of diverse communities, and to review some problems that remain open. An important contribution of our work is to bring together material from several areas of research and to present results in a unified manner. We begin our review by relating the stability problem for switched linear systems and a class of linear differential inclusions. Closely related to the concept of stability are the notions of exponential growth rates and converse Lyapunov theorems, both of which are discussed in detail. In particular, results on common quadratic Lyapunov functions and piecewise linear Lyapunov functions are presented, as they represent constructive methods for proving stability, and also represent problems in which significant progress has been made. We also comment on the inherent difficulty of determining stability of switched systems in general which is exemplified by NP-hardness and undecidability results. We then proceed by considering the stability of switched systems in which there are constraints on the switching rules, through both dwell time requirements and state dependent switching laws. Also in this case the theory of Lyapunov functions and the existence of converse theorems is reviewed. We briefly comment on the classical Lur’e problem and on the theory of stability radii, both of which contain many of the features of switched systems and are rich sources of practical results on the topic. Finally we present a list of questions and open problems which provide motivation for continued research in this area.
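As an example of the constructive methods mentioned above, a common quadratic Lyapunov function for a family of linear modes can be searched for by solving linear matrix inequalities with a semidefinite-programming solver. The sketch below uses cvxpy and two arbitrary stable example matrices; it is an illustration of the technique, not a result from the paper.

```python
# Search for a common quadratic Lyapunov function V(x) = x' P x for the
# switched linear system x' = A_i x by solving the LMIs
# P > 0 and A_i' P + P A_i < 0 for all modes i.
import numpy as np
import cvxpy as cp

A1 = np.array([[-1.0, 1.0], [0.0, -2.0]])   # arbitrary stable example modes
A2 = np.array([[-2.0, 0.5], [-0.5, -1.0]])

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("common quadratic Lyapunov function found"
      if prob.status == "optimal" else prob.status)
```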
RecSys Challenge 2016: job recommendations based on preselection of offers and gradient boosting
We present Mim-Solution's approach to the RecSys Challenge 2016, which ranked 2nd. The goal of the competition was to prepare job recommendations for the users of the website Xing.com. Our two-phase algorithm consists of candidate selection followed by candidate ranking. We ranked the candidates by the predicted probability that the user would positively interact with the job offer. We used Gradient Boosting Decision Trees as the regression tool.
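A schematic sketch of the two-phase design (candidate preselection followed by gradient-boosted ranking) is shown below using scikit-learn; the preselection heuristic, feature vectors and labels are placeholders, not the team's actual features or model configuration.

```python
# Schematic two-phase recommender: cheap candidate preselection per user,
# then gradient-boosted trees scoring the probability of a positive
# interaction. All data below is placeholder.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Phase 1: preselect candidate job offers per user (placeholder heuristic).
def preselect(all_jobs, limit=200):
    return sorted(all_jobs, key=lambda j: j["popularity"], reverse=True)[:limit]

# Phase 2: rank candidates by predicted interaction probability.
ranker = GradientBoostingClassifier(n_estimators=200, max_depth=4)
X_train = np.random.rand(1000, 10)        # placeholder (user, job) features
y_train = np.random.randint(0, 2, 1000)   # 1 = positive interaction
ranker.fit(X_train, y_train)

def rank(candidate_features):
    scores = ranker.predict_proba(candidate_features)[:, 1]
    return np.argsort(-scores)
```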
Bag picture of the excited QCD vacuum with static Q source
The gluon excitations of the QCD vacuum are investigated in the presence of a static quark-antiquark source. It is shown that the ground state potential and the excitation spectrum of dynamical gluon degrees of freedom, as determined in our lattice simulations, agree remarkably well with model predictions based on the dielectric properties of the confining vacuum described as a dual superconductor. The strong chromoelectric field of the static Q Q̄ source creates a bubble (bag) in the condensed phase where weakly interacting gluon modes can be excited. Some features and predictions of the bag model are presented and the chromoelectric vortex limit at large quark-antiquark separation (string formation) is briefly discussed.
Local Rule-Based Explanations of Black Box Decision Systems
The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes from the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance’s features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy of mimicking the black box.
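The local-surrogate idea behind this kind of method can be sketched as follows; for brevity the neighborhood is generated by random perturbation rather than the paper's genetic algorithm, the surrogate is a shallow decision tree, and counterfactual-rule extraction is omitted, so this is only an approximation of the described approach.

```python
# Simplified local-surrogate explanation: perturb around the instance,
# label with the black box, fit a shallow tree, and read the decision rule
# off the path followed by the instance.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_rule(black_box, x, feature_names, n_samples=1000, scale=0.3):
    # Synthetic neighborhood around x (assumes numeric features).
    Z = x + np.random.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = black_box(Z)

    tree = DecisionTreeClassifier(max_depth=3).fit(Z, y)

    # Decision rule: the conjunction of split conditions on x's root-to-leaf path.
    t, node, rule = tree.tree_, 0, []
    while t.children_left[node] != -1:
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            rule.append(f"{feature_names[f]} <= {thr:.2f}")
            node = t.children_left[node]
        else:
            rule.append(f"{feature_names[f]} > {thr:.2f}")
            node = t.children_right[node]
    return " AND ".join(rule), black_box(x.reshape(1, -1))[0]
```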
Towards a Linguistic Linked Open Data cloud: The Open Linguistics Working Group
The Open Linguistics Working Group (OWLG) is an initiative of experts from different fields concerned with linguistic data, including academic linguistics (e.g. typology, corpus linguistics), applied linguistics (e.g. computational linguistics, lexicography and language documentation), and NLP (e.g. from the Semantic Web community). The primary goals of the working group are 1) promoting the idea of open linguistic resources, 2) developing means for their representation, and 3) encouraging the exchange of ideas across different disciplines. To a certain extent, the activities of the Open Linguistics Working Group converge towards the creation of a Linguistic Linked Open Data cloud, which is a topic addressed from different angles by several members of the Working Group. In this article, some of these currently on-going activities are presented and described. RÉSUMÉ. The OWLG is an initiative of experts from different linguistic fields, encompassing academic linguistics (typology, corpora), applied linguistics (computational linguistics, lexicography, language documentation) and natural language processing (e.g. the Semantic Web). The main objectives of this group are 1) promoting the idea of open and accessible resources, 2) developing means to represent these resources, and 3) stimulating exchange between the various disciplines and subdisciplines. The activities of the OWLG are for the most part related to the creation of a Linguistic Linked Open Data cloud. The members of the group approach this theme from different angles, some of which are presented below.
Loop closure detection for visual SLAM using PCANet features
Loop closure detection benefits simultaneous localization and mapping (SLAM) in building a consistent map of the environment by reducing the accumulated error. Handcrafted features have been used successfully in traditional approaches, whereas in this paper we show that unsupervised features extracted by deep learning models can improve the accuracy of loop closure detection. In particular, we employ a cascaded deep network, namely the PCANet, to extract features as image descriptors. We tested the performance of our proposed method on open datasets and compared it with traditional approaches. We found that the PCANet features outperform state-of-the-art handcrafted competitors, and are computationally efficient enough to be implemented in practical robotics.
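A simplified, single-stage PCANet-style descriptor can be sketched as below: convolution filters are learned as principal components of image patches, and the pooled filter responses serve as the image descriptor. The patch size, number of filters and mean pooling are simplifications of the full PCANet (which uses multiple stages, binary hashing and block histograms), so this is only an illustration of the idea.

```python
# Single-stage PCANet-style descriptor: PCA filters learned from patches,
# mean-pooled filter responses used as a global image descriptor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d
from scipy.signal import convolve2d

def pcanet_descriptor(image, train_images, k=8, patch=7):
    # Learn PCA filters from patches of the training images.
    patches = np.vstack([
        extract_patches_2d(im, (patch, patch), max_patches=500)
        .reshape(-1, patch * patch)
        for im in train_images
    ])
    patches = patches - patches.mean(axis=1, keepdims=True)  # remove patch mean
    filters = PCA(n_components=k).fit(patches).components_.reshape(k, patch, patch)

    # Convolve and mean-pool each filter response into a k-dim descriptor.
    return np.array([convolve2d(image, f, mode="valid").mean() for f in filters])

# Loop-closure candidates can then be scored by descriptor distance, e.g.
# np.linalg.norm(pcanet_descriptor(a, db) - pcanet_descriptor(b, db)).
```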
Unsupervised convolutional neural networks for motion estimation
Traditional methods for motion estimation estimate the motion field F between a pair of images as the one that minimizes a predesigned cost function. In this paper, we propose a direct method and train a Convolutional Neural Network (CNN) that, when given a pair of images as input at test time, produces a dense motion field F at its output layer. In the absence of large datasets with ground truth motion that would allow classical supervised training, we propose to train the network in an unsupervised manner. The cost function that is optimized during training is based on the classical optical flow constraint. The latter is differentiable with respect to the motion field and, therefore, allows backpropagation of the error to previous layers of the network. Our method is tested on both synthetic and real image sequences and performs similarly to state-of-the-art methods.
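The unsupervised training signal can be sketched as a differentiable photometric loss based on the optical flow constraint: warp the second image with the predicted flow and penalise the difference to the first image. The CNN that predicts the flow is assumed to exist elsewhere; only the loss is shown, and details such as smoothness regularization are omitted.

```python
# Differentiable photometric (brightness-constancy) loss for unsupervised
# flow training; the flow-predicting CNN is assumed to be defined elsewhere.
import torch
import torch.nn.functional as F

def photometric_loss(img1, img2, flow):
    """img1, img2: (B, C, H, W); flow: (B, 2, H, W) in pixels."""
    b, _, h, w = img1.shape
    # Base sampling grid, then add the predicted flow.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().to(img1.device)   # (H, W, 2)
    grid = grid.unsqueeze(0) + flow.permute(0, 2, 3, 1)            # (B, H, W, 2)

    # Normalize to the [-1, 1] coordinates expected by grid_sample.
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    norm_grid = torch.stack((gx, gy), dim=-1)

    warped = F.grid_sample(img2, norm_grid, align_corners=True)
    return (img1 - warped).abs().mean()
```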
Transforming Holiness: Representations of Holiness in English and American Literary Texts.
This fascinating collection of essays addresses the question of how holiness has been represented in English and American literary texts from early saints' lives to the poetry of the mid-twentieth century. The interaction of spiritual ideals with the creative and often worldly imagination is examined in the work of writers as varied as George Herbert, Harriet Beecher Stowe and D.H. Lawrence. The range of genres discussed includes not only devotional poetry and apparently secular prose fiction, but also political ballads, personal conduct books and congregational psalms and hymns. Holiness is set in relation to vital issues such as creativity, gender, Romanticism, translation and visual culture. Together the essays reveal the full meaning of the title of the collection: that holiness, a transforming force, has transformed itself radically as a concept over the centuries, and undergoes dynamic transformation through its expression in literature.
Parsing Tweets into Universal Dependencies
We study the problem of analyzing tweets with Universal Dependencies (UD; Nivre et al., 2016). We extend the UD guidelines to cover special constructions in tweets that affect tokenization, part-of-speech tagging, and labeled dependencies. Using the extended guidelines, we create a new tweet treebank for English (TWEEBANK V2) that is four times larger than the (unlabeled) TWEEBANK V1 introduced by Kong et al. (2014). We characterize the disagreements between our annotators and show that it is challenging to deliver consistent annotation due to ambiguity in understanding and explaining tweets. Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD. To overcome annotation noise without sacrificing computational efficiency, we propose a new method to distill an ensemble of 20 transition-based parsers into a single one. Our parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and outperforms parsers that are state-of-the-art on other treebanks in both accuracy and speed.
Advanced practice nursing roles: development, implementation and evaluation.
AIM The aim of this paper is to discuss six issues influencing the introduction of advanced practice nursing (APN) roles: confusion about APN terminology, failure to define clearly the roles and goals, role emphasis on physician replacement/support, underutilization of all APN role domains, failure to address environmental factors that undermine the roles, and limited use of evidence-based approaches to guide their development, implementation and evaluation. BACKGROUND Health care restructuring in many countries has led to substantial increases in the different types and number of APN roles. The extent to which these roles truly reflect advanced nursing practice is often unclear. The misuse of APN terminology, inconsistent titling and educational preparation, and misguided interpretations regarding the purpose of these roles pose barriers to realizing their full potential and impact on health. Role conflict, role overload, and variable stakeholder acceptance are frequently reported problems associated with the introduction of APN roles. DISCUSSION Challenges associated with the introduction of APN roles suggest that greater attention to and consistent use of the terms advanced nursing practice, advancement and advanced practice nursing is required. Advanced nursing practice refers to the work or what nurses do in the role and is important for defining the specific nature and goals for introducing new APN roles. The concept of advancement further defines the multi-dimensional scope and mandate of advanced nursing practice and distinguishes differences from other types of nursing roles. Advanced practice nursing refers to the whole field, involving a variety of such roles and the environments in which they exist. Many barriers to realizing the full potential of these roles could be avoided through better planning and efforts to address environmental factors, structures, and resources that are necessary for advanced nursing practice to take place. CONCLUSIONS Recommendations for the future introduction of APN roles can be drawn from this paper. These include the need for a collaborative, systematic and evidence-based process designed to provide data to support the need and goals for a clearly defined APN role, support a nursing orientation to advanced practice, promote full utilization of all the role domains, create environments that support role development, and provide ongoing evaluation of these roles related to predetermined goals.
Is backhaul becoming a bottleneck for green wireless access networks?
Mobile operators are facing exponential traffic growth due to the proliferation of portable devices that require high-capacity connectivity. This, in turn, leads to a tremendous increase in the energy consumption of wireless access networks. A promising solution to this problem is the concept of heterogeneous networks, which is based on the dense deployment of low-cost and low-power base stations in addition to the traditional macro cells. However, in such a scenario the energy consumed by the backhaul, which aggregates the traffic from each base station towards the metro/core segment, becomes significant and may limit the advantages of heterogeneous network deployments. This paper aims at assessing the impact of backhaul on the energy consumption of wireless access networks, taking into consideration different data traffic requirements (i.e., from today's to 2020 traffic levels). Three backhaul architectures combining different technologies (i.e., copper, fiber, and microwave) are considered. Results show that backhaul can account for up to 50% of the power consumption of a wireless access network. On the other hand, hybrid backhaul architectures that combine fiber and microwave perform relatively well in scenarios where the wireless network is characterized by a high small-base-station penetration rate.
Architectural geometry
Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.
The cost-effectiveness and public health benefit of nalmefene added to psychosocial support for the reduction of alcohol consumption in alcohol-dependent patients with high/very high drinking risk levels: a Markov model
OBJECTIVES To determine whether nalmefene combined with psychosocial support is cost-effective compared with psychosocial support alone for reducing alcohol consumption in alcohol-dependent patients with high/very high drinking risk levels (DRLs) as defined by the WHO, and to evaluate the public health benefit of reducing harmful alcohol-attributable diseases, injuries and deaths. DESIGN Decision modelling using Markov chains compared costs and effects over 5 years. SETTING The analysis was from the perspective of the National Health Service (NHS) in England and Wales. PARTICIPANTS The model considered the licensed population for nalmefene, specifically adults with both alcohol dependence and high/very high DRLs, who do not require immediate detoxification and who continue to have high/very high DRLs after initial assessment. DATA SOURCES We modelled treatment effect using data from three clinical trials for nalmefene (ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941)). Baseline characteristics of the model population, treatment resource utilisation and utilities were from these trials. We estimated the number of alcohol-attributable events occurring at different levels of alcohol consumption based on published epidemiological risk-relation studies. Health-related costs were from UK sources. MAIN OUTCOME MEASURES We measured incremental cost per quality-adjusted life year (QALY) gained and number of alcohol-attributable harmful events avoided. RESULTS Nalmefene in combination with psychosocial support had an incremental cost-effectiveness ratio (ICER) of £5204 per QALY gained, and was therefore cost-effective at the £20,000 per QALY gained decision threshold. Sensitivity analyses showed that the conclusion was robust. Nalmefene plus psychosocial support led to the avoidance of 7179 alcohol-attributable diseases/injuries and 309 deaths per 100,000 patients compared to psychosocial support alone over the course of 5 years. CONCLUSIONS Nalmefene can be seen as a cost-effective treatment for alcohol dependence, with substantial public health benefits. TRIAL REGISTRATION NUMBERS This cost-effectiveness analysis was developed based on data from three randomised clinical trials: ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941).
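As a rough illustration of how a Markov cohort model of this kind produces an incremental cost-effectiveness ratio (ICER), the sketch below propagates a cohort through a handful of health states over yearly cycles. All states, transition probabilities, costs and utilities here are invented placeholders for the sake of the example; they are not the published model's inputs.

```python
# Minimal sketch of a Markov cohort cost-effectiveness model. Hypothetical
# health states, transition probabilities, costs and utilities only.
import numpy as np

def run_cohort(transition, cost_per_cycle, utility_per_cycle, cycles=5):
    """Propagate a cohort through yearly cycles; return total cost and QALYs."""
    occupancy = np.array([1.0, 0.0, 0.0, 0.0])  # all start at very high DRL
    total_cost, total_qalys = 0.0, 0.0
    for _ in range(cycles):
        occupancy = occupancy @ transition
        total_cost += occupancy @ cost_per_cycle
        total_qalys += occupancy @ utility_per_cycle
    return total_cost, total_qalys

# States: very high DRL, high DRL, controlled drinking, dead (illustrative).
p_treat = np.array([[0.50, 0.30, 0.18, 0.02],
                    [0.10, 0.55, 0.33, 0.02],
                    [0.05, 0.15, 0.79, 0.01],
                    [0.00, 0.00, 0.00, 1.00]])
p_control = np.array([[0.70, 0.20, 0.07, 0.03],
                      [0.20, 0.60, 0.17, 0.03],
                      [0.10, 0.25, 0.63, 0.02],
                      [0.00, 0.00, 0.00, 1.00]])
costs = np.array([2000.0, 1500.0, 800.0, 0.0])       # per-cycle costs (GBP)
treat_extra = np.array([400.0, 400.0, 400.0, 0.0])   # added therapy cost while alive
utils = np.array([0.65, 0.75, 0.85, 0.0])            # per-cycle utilities

c_t, q_t = run_cohort(p_treat, costs + treat_extra, utils)
c_c, q_c = run_cohort(p_control, costs, utils)
print(f"ICER: {(c_t - c_c) / (q_t - q_c):.0f} GBP per QALY gained")
```

In a full analysis, the same machinery would be extended with discounting, alcohol-attributable event counts per state, and probabilistic sensitivity analysis over the input distributions.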
Sparse Submodular Probabilistic PCA
We propose a novel approach for sparse probabilistic principal component analysis that combines a low-rank representation for the latent factors and loadings with a novel sparse variational inference approach for estimating distributions of latent variables subject to sparse support constraints. Inference and parameter estimation for the resulting model are achieved via expectation maximization, with a novel variational inference method for the E-step that induces sparsity. We show that this inference problem can be reduced to discrete optimal support selection. The discrete optimization is submodular; hence, greedy selection is guaranteed to achieve a (1 - 1/e) fraction of the optimum. Empirical studies indicate the effectiveness of the proposed approach for the recovery of a parsimonious decomposition compared to established baseline methods. We also evaluate our method against state-of-the-art methods on high-dimensional fMRI data, and show that it performs as well as or better than other methods.
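To illustrate the kind of discrete support selection the E-step reduces to, here is a minimal greedy-selection sketch for a monotone submodular set function. The log-determinant objective below is a standard submodular surrogate used only for illustration, not the paper's actual variational bound.

```python
# Greedy support selection for a monotone submodular objective; for such
# objectives the greedy solution attains at least (1 - 1/e) of the optimum.
import numpy as np

def greedy_support(score, n, k):
    """Greedily add the index with the largest marginal gain until |S| = k."""
    selected = []
    for _ in range(k):
        gains = [(score(selected + [j]) - score(selected), j)
                 for j in range(n) if j not in selected]
        best_gain, best_j = max(gains)
        selected.append(best_j)
    return sorted(selected)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 30))
C = A.T @ A / 8.0                      # empirical covariance of 30 variables

def logdet_score(S):
    """log det(I + C_S): monotone and submodular in the selected set S."""
    if not S:
        return 0.0
    sub = C[np.ix_(S, S)]
    return np.linalg.slogdet(np.eye(len(S)) + sub)[1]

print(greedy_support(logdet_score, n=30, k=5))   # a sparse support of 5 variables
```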
HICO: A Benchmark for Recognizing Human-Object Interactions in Images
We introduce a new benchmark "Humans Interacting with Common Objects" (HICO) for recognizing human-object interactions (HOI). We demonstrate the key features of HICO: a diverse set of interactions with common object categories, a list of well-defined, sense-based HOI categories, and an exhaustive labeling of co-occurring interactions with an object category in each image. We perform an in-depth analysis of representative current approaches and show that DNNs enjoy a significant edge. In addition, we show that semantic knowledge can significantly improve HOI recognition, especially for uncommon categories.
Grounded theory research: literature reviewing and reflexivity.
AIM This paper is a report of a discussion of the arguments surrounding the role of the initial literature review in grounded theory. BACKGROUND Researchers new to grounded theory may find themselves confused about the literature review, something we ourselves experienced, pointing to the need for clarity about use of the literature in grounded theory to help guide others about to embark on similar research journeys. DISCUSSION The arguments for and against the use of a substantial topic-related initial literature review in a grounded theory study are discussed, giving examples from our own studies. The use of theoretically sampled literature and the necessity for reflexivity are also discussed. Reflexivity is viewed as the explicit quest to limit researcher effects on the data by awareness of self, something seen as integral both to the process of data collection and the constant comparison method essential to grounded theory. CONCLUSION A researcher who is close to the field may already be theoretically sensitized and familiar with the literature on the study topic. Use of literature or any other preknowledge should not prevent a grounded theory arising from the inductive-deductive interplay which is at the heart of this method. Reflexivity is needed to prevent prior knowledge distorting the researcher's perceptions of the data.
Erratum to "Representation-theoretic support spaces for finite group schemes"
As pointed out to us by Rolf Farnsteiner, the results presented in our paper [AJM 127, pp. 379-420] require a modified definition of "abelian p-point." With this modified definition (functionally equivalent to one which we implicitly use), all of the results of our paper become valid. We make explicit this modified definition as well as those arguments which require this new definition and the modification of one proof (of Theorem 4.8) which is required.
Choosing an NLP Library for Analyzing Software Documentation: A Systematic Literature Review and a Series of Experiments
To uncover interesting and actionable information from natural language documents authored by software developers, many researchers rely on "out-of-the-box" NLP libraries. However, software artifacts written in natural language are different from other textual documents due to the technical language used. In this paper, we first analyze the state of the art through a systematic literature review in which we find that only a small minority of papers justify their choice of an NLP library. We then report on a series of experiments in which we applied four state-of-the-art NLP libraries to publicly available software artifacts from three different sources. Our results show low agreement between different libraries (only between 60% and 71% of tokens were assigned the same part-of-speech tag by all four libraries) as well as differences in accuracy depending on source: For example, spaCy achieved the best accuracy on Stack Overflow data with nearly 90% of tokens tagged correctly, while it was clearly outperformed by Google's SyntaxNet when parsing GitHub ReadMe files. Our work implies that researchers should make an informed decision about the particular NLP library they choose and that customizations to libraries might be necessary to achieve good results when analyzing software artifacts written in natural language.
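For readers unfamiliar with what "applying an out-of-the-box NLP library" to developer text looks like in practice, a minimal spaCy part-of-speech tagging sketch follows. It assumes the en_core_web_sm model has been downloaded (python -m spacy download en_core_web_sm) and is not taken from the paper's experimental pipeline; the point is only that technical tokens like identifiers and flags are exactly where libraries tend to disagree.

```python
# Minimal POS-tagging sketch with one "out-of-the-box" library (spaCy).
import spacy

nlp = spacy.load("en_core_web_sm")
post = "Call pd.read_csv() with sep='\\t' if your file is tab-separated."
doc = nlp(post)

for token in doc:
    # Coarse universal tag (pos_) and fine-grained Penn Treebank tag (tag_).
    print(f"{token.text:20s} {token.pos_:6s} {token.tag_}")
```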
Target size study for one-handed thumb use on small touchscreen devices
This paper describes a two-phase study conducted to determine optimal target sizes for one-handed thumb use of mobile handheld devices equipped with a touch-sensitive screen. Similar studies have provided recommendations for target sizes when using a mobile device with two hands plus a stylus, and when interacting with a desktop-sized display with an index finger, but never for thumbs when holding a small device in a single hand. The first phase explored the required target size for single-target (discrete) pointing tasks, such as activating buttons, radio buttons or checkboxes. The second phase investigated optimal sizes for widgets used for tasks that involve a sequence of taps (serial), such as text entry. Since holding a device in one hand constrains thumb movement, we varied target positions to determine whether performance depended on screen location. The results showed that while speed generally improved as targets grew, there were no significant differences in error rate between target sizes ≥9.6 mm in discrete tasks and targets ≥7.7 mm in serial tasks. Along with subjective ratings and the findings on hit response variability, we found that a target size of 9.2 mm for discrete tasks and targets of 9.6 mm for serial tasks should be sufficiently large for one-handed thumb use on touchscreen-based handhelds without degrading performance and preference.
Canonical Seesaw Mechanism in Electro-Weak SU(4)L x U(1)Y Models
In this paper we prove that the canonical seesaw mechanism can naturally be implemented in a particular class of electro-weak SU(4)L x U(1)Y gauge models. The resulting neutrino mass spectrum is determined by tuning a single free parameter 'a' within the algebraical method of solving gauge models with high symmetries. All the Standard Model phenomenology is preserved, being unaffected by the new physics occurring at a high breaking scale m ~ 10^11 GeV.
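For reference, the textbook type-I ("canonical") seesaw relation the abstract invokes can be written as below; the specific SU(4)L x U(1)Y embedding and the role of the parameter 'a' are not reproduced here.

```latex
% Canonical (type-I) seesaw: a Dirac mass m_D far below the heavy Majorana
% scale M_R yields naturally small light-neutrino masses.
M_\nu =
\begin{pmatrix}
0 & m_D \\
m_D^{T} & M_R
\end{pmatrix},
\qquad
m_{\text{light}} \simeq -\, m_D \, M_R^{-1} \, m_D^{T}
\quad \text{for } M_R \gg m_D .
```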
k-way Hypergraph Partitioning via n-Level Recursive Bisection
We develop a multilevel algorithm for hypergraph partitioning that contracts the vertices one at a time. Using several caching and lazy-evaluation techniques during coarsening and refinement, we reduce the running time by up to two orders of magnitude compared to a naive n-level algorithm that would be adequate for ordinary graph partitioning. The overall performance is even better than that of the widely used hMetis hypergraph partitioner, which uses a classical multilevel algorithm with few levels. Aided by a portfolio-based approach to initial partitioning and adaptive budgeting of imbalance within recursive bipartitioning, we achieve very high quality. We assembled a large benchmark set with 310 hypergraphs stemming from application areas such as VLSI, SAT solving, social networks, and scientific computing. We achieve significantly smaller cuts than hMetis and PaToH, while being faster than hMetis. Considerably larger improvements are observed for some instance classes such as social networks, for bipartitioning, and for partitions with an allowed imbalance of 10%. The algorithm presented in this work forms the basis of our hypergraph partitioning framework KaHyPar (Karlsruhe Hypergraph Partitioning).
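As a small illustration of what a hypergraph partitioner optimizes, the sketch below evaluates the cut-net and (connectivity - 1) objectives for a toy bipartition. The hypergraph and partition are invented examples; this is not part of KaHyPar.

```python
# Hyperedges (nets) as lists of vertex ids; a 2-way partition as a
# vertex -> block map. A net is "cut" if its vertices span several blocks.
hyperedges = [[0, 1, 2], [2, 3], [3, 4, 5], [0, 5], [1, 4]]
partition = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

def cut_metrics(hyperedges, partition):
    cut_net, connectivity_minus_one = 0, 0
    for net in hyperedges:
        blocks = {partition[v] for v in net}
        if len(blocks) > 1:
            cut_net += 1                        # net spans more than one block
        connectivity_minus_one += len(blocks) - 1
    return cut_net, connectivity_minus_one

print(cut_metrics(hyperedges, partition))       # (3, 3) for this toy input
```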
That's What She Said: Double Entendre Identification
Humor identification is a hard natural language understanding problem. We identify a subproblem — the “that’s what she said” problem — with two distinguishing characteristics: (1) use of nouns that are euphemisms for sexually explicit nouns and (2) structure common in the erotic domain. We address this problem in a classification approach that includes features that model those two characteristics. Experiments on web data demonstrate that our approach improves precision by 12% over baseline techniques that use only word-based features.
Quaternion Recurrent Neural Networks
Recurrent neural networks (RNNs) are powerful architectures to model sequential data, due to their capability to learn short and long-term dependencies between the basic elements of a sequence. Nonetheless, popular tasks such as speech or images recognition, involve multi-dimensional input features that are characterized by strong internal dependencies between the dimensions of the input vector. We propose a novel quaternion recurrent neural network (QRNN), alongside with a quaternion long-short term memory neural network (QLSTM), that take into account both the external relations and these internal structural dependencies with the quaternion algebra. Similarly to capsules, quaternions allow the QRNN to code internal dependencies by composing and processing multidimensional features as single entities, while the recurrent operation reveals correlations between the elements composing the sequence. We show that both QRNN and QLSTM achieve better performances than RNN and LSTM in a realistic application of automatic speech recognition. Finally, we show that QRNN and QLSTM reduce by a maximum factor of 3.3x the number of free parameters needed, compared to real-valued RNNs and LSTMs to reach better results, leading to a more compact representation of the relevant information.
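The algebraic core of such quaternion layers is the Hamilton product, which mixes each 4-dimensional feature as a single entity rather than through an unconstrained real weight matrix. A minimal NumPy sketch (not the authors' QRNN/QLSTM code) follows.

```python
# Hamilton product of two quaternions given as arrays [r, x, y, z].
import numpy as np

def hamilton_product(q, p):
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = p
    return np.array([
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,   # real part
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,   # i component
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,   # j component
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,   # k component
    ])

w = np.array([0.5, 0.1, -0.3, 0.2])   # a quaternion "weight"
h = np.array([1.0, 0.0, 2.0, -1.0])   # a quaternion-valued hidden feature
print(hamilton_product(w, h))
```

Because a single quaternion weight (4 parameters) replaces an unconstrained 4x4 real mapping (16 parameters), quaternion layers can cut parameter counts substantially, which is consistent with the reduction reported in the abstract.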
Chimeric ferritin nanocages for multiple function loading and multimodal imaging.
Nanomaterials provide large surface areas, relative to their volumes, on which to load functions. One challenge, however, has been to achieve precise control in loading multiple functionalities. Traditional bioconjugation techniques, which randomly target the surface functional groups of nanomaterials, have been found increasingly inadequate for such control, a drawback that may substantially slow down or prohibit translational efforts. In the current study, we evaluated ferritin nanocages as candidate nanoplatforms for multifunctional loading. Ferritin nanocages can be either genetically or chemically modified to impart functionalities to their surfaces, and metal cations can be encapsulated in their interiors by association with metal binding sites. Moreover, different types of ferritin nanocages can be disassembled under acidic conditions and reassembled at a pH of 7.4, providing a facile way to achieve function hybridization. We were able to use combinations of these unique properties to produce a number of multifunctional ferritin nanostructures with precise control of their composition. We then studied these nanoparticles, both in vitro and in vivo, to evaluate their potential suitability as multimodality imaging probes. A good tumor targeting profile was observed, attributable to both the enhanced permeability and retention (EPR) effect and biovector-mediated targeting. This, in combination with the generalizability of the function loading techniques, points to ferritin particles as a powerful nanoplatform in the era of nanomedicine.
Control of Upper-Limb Power-Assist Exoskeleton Using a Human-Robot Interface Based on Motion Intention Recognition
Recognition of the wearer's motion intention plays an important role in the study of power-assist robots. In this paper, an intention-guided control strategy is proposed and applied to an upper-limb power-assist exoskeleton. Meanwhile, a human-robot interface composed of force-sensing resistors (FSRs) is designed to estimate the motion intention of the wearer's upper limb in real time. Moreover, a new concept called the "intentional reaching direction (IRD)" is proposed to quantitatively describe this intention. Both the state model and the observation model of the IRD are obtained by studying upper-limb behavior modes and analyzing the relationship between the measured force signals and the motion intention. Based on these two models, the IRD can be inferred online using an adapted filtering technique. Guided by the inferred IRD, an admittance control strategy is deployed to control the motions of three DC motors placed at the corresponding joints of the robotic arm. The effectiveness of the proposed approaches is finally confirmed by experiments on a 3-degree-of-freedom (DOF) upper-limb robotic exoskeleton.
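A minimal sketch of an admittance control law of the kind described, in which a measured interaction force is mapped to a joint velocity command through a virtual mass-damper and scaled by the inferred intention, is given below. The gains, the IRD weighting and the single-joint simplification are illustrative assumptions, not the paper's identified parameters.

```python
# One Euler step of M*dv/dt + B*v = w_ird * F for a single joint.
def admittance_step(force, velocity, ird_weight, mass=2.0, damping=8.0, dt=0.01):
    """Return the new joint velocity command after one control step."""
    accel = (ird_weight * force - damping * velocity) / mass
    return velocity + accel * dt

v = 0.0
for _ in range(100):                 # 1 s of a constant 5 N push along this joint
    v = admittance_step(force=5.0, velocity=v, ird_weight=0.8)
print(f"commanded joint velocity after 1 s: {v:.3f} rad/s")
```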
Alleviation of sleep apnea in patients with chronic renal failure by nocturnal cycler-assisted peritoneal dialysis compared with conventional continuous ambulatory peritoneal dialysis.
Nocturnal hemodialysis has been shown to improve sleep apnea in patients who receive conventional hemodialysis. It was hypothesized that nocturnal peritoneal dialysis (NPD) also is effective in correcting sleep apnea in patients who receive continuous ambulatory PD (CAPD). Overnight polysomnography (PSG) was performed in 46 stable NPD and CAPD patients who were matched for demographic and clinical attributes. The prevalence of sleep apnea, defined as an apnea-hypopnea index (AHI; or frequency of apnea and hypopnea per hour of sleep) > or =15, was 52% for NPD patients and 91% for CAPD patients (P = 0.007). The mean (+/-SD) AHI in NPD and CAPD patients was 31.6 +/- 25.6 and 50.9 +/- 26.4 (P = 0.025), respectively. For validation of the efficacy of NPD in alleviating sleep apnea, a fixed sequence intervention study was performed in which 24 incident PD patients underwent one PSG study during mandatory cycler-assisted NPD while awaiting their turn for CAPD training and a second PSG recording shortly after they were established on stable CAPD. The prevalence of sleep apnea was 4.2% during NPD and 33.3% during CAPD (P = 0.016). AHI increased from 3.4 +/- 1.34 during NPD to 14.0 +/- 3.46 during CAPD (P < 0.001). With the use of bioelectrical impedance analysis, total body water content was significantly lower during stable NPD than CAPD (32.8 +/- 7.37 versus 35.1 +/- 7.35 L; P = 0.004). NPD delivered greater reductions in total body water (-2.81 +/- 0.45 versus -1.34 +/- 0.3 L; P = 0.015) and hydration fraction (-3.63 +/- 0.64 versus -0.71 +/- 0.52%; P = 0.005) during sleep. Pulmonary function tests remained unchanged before and after conversion from NPD to CAPD. These findings suggest that NPD may have a therapeutic edge over CAPD in sleep apnea that is associated with renal failure as a result of better fluid clearance during sleep.
A multi-agent Q-Learning-based framework for achieving fairness in HTTP Adaptive Streaming
HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for Over-The-Top video streaming. In HAS, each video is temporally segmented and stored in different quality levels. Quality selection heuristics, deployed at the video player, allow dynamically requesting the most appropriate quality level based on the current network conditions. Today's heuristics are deterministic and static, and thus not able to perform well under highly dynamic network conditions. Moreover, in a multi-client scenario, issues concerning fairness among clients arise, meaning that different clients negatively influence each other as they compete for the same bandwidth. In this article, we propose a Reinforcement Learning-based quality selection algorithm able to achieve fairness in a multi-client setting. A key element of this approach is a coordination proxy in charge of facilitating the coordination among clients. The strength of this approach is three-fold. First, the algorithm is able to learn and adapt its policy depending on network conditions, unlike current HAS heuristics. Second, fairness is achieved without explicit communication among agents and thus no significant overhead is introduced into the network. Third, no modifications to the standard HAS architecture are required. By evaluating this novel approach through simulations, under mutable network conditions and in several multi-client scenarios, we are able to show how the proposed approach can improve system fairness up to 60% compared to current HAS heuristics.
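To make the Reinforcement Learning ingredient concrete, here is a minimal tabular Q-learning sketch for quality selection. The state discretization, reward shaping and fairness term are illustrative placeholders rather than the article's exact design; in the article, the average quality used for the fairness term would come from the coordination proxy.

```python
# Tabular Q-learning for HAS quality selection (illustrative).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
QUALITIES = [0, 1, 2, 3]                       # available quality levels
Q = defaultdict(float)                         # Q[(state, action)] -> value

def choose_quality(state):
    if random.random() < EPSILON:              # explore
        return random.choice(QUALITIES)
    return max(QUALITIES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in QUALITIES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative step: reward trades off played quality against rebuffering
# and, for fairness, deviation from the proxy-reported average quality (here 2).
state, nxt = ("bw_mid", "buf_low"), ("bw_mid", "buf_mid")
action = choose_quality(state)
reward = action - 2.0 * 0 - 1.0 * abs(action - 2)   # no rebuffering this step
update(state, action, reward, nxt)
```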
Combined Group and Exclusive Sparsity for Deep Neural Networks
The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to make the network sparse, and at the same time removes redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on the (1,2)-norm, which promotes competition for features between different weights, thus forcing them to fit disjoint sets of features. We further combine the exclusive sparsity with group sparsity based on the (2,1)-norm, to promote both sharing and competition for features in the training of a deep neural network. We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.
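A minimal NumPy sketch of the combined penalty is given below, assuming groups are the columns of a toy weight matrix and that the mixing coefficients are arbitrary; it illustrates the two norms only and is not the authors' training code.

```python
# Exclusive ((1,2)-norm) plus group ((2,1)-norm) sparsity on a weight matrix,
# with groups taken to be columns.
import numpy as np

def exclusive_sparsity(W):
    """(1,2)-style penalty: sum over groups of the squared L1 norm."""
    return np.sum(np.sum(np.abs(W), axis=0) ** 2)

def group_sparsity(W):
    """(2,1)-style penalty: sum over groups of the L2 norm."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=0)))

def combined_penalty(W, mu=1e-3, lam=1e-3):
    return mu * exclusive_sparsity(W) + lam * group_sparsity(W)

W = np.random.default_rng(0).standard_normal((128, 64))
print(combined_penalty(W))   # would be added to the task loss during training
```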
Information retrieval in web crawling: A survey
Today, the World Wide Web (WWW) is flooded with a huge amount of information. With the growing popularity of the internet, finding meaningful information among billions of resources on the WWW is a challenging task. Information retrieval (IR) provides end users with documents that satisfy their information need. Search engines are used to extract valuable information from the internet, and the web crawler is a principal component of a search engine: an automated script or program that browses the WWW. This process is known as web crawling. In this paper, a review of information retrieval strategies in web crawling is presented, classified into four categories: focused, distributed, incremental and hidden web crawlers. Finally, a comparative analysis of the various IR strategies is performed on the basis of user-customized parameters.
Iris code matching using adaptive Hamming distance
The most popular distance metric used in iris code matching is the Hamming distance. In this paper, we improve the performance of the iris code matching stage by applying an adaptive Hamming distance. The proposed method works with Hamming subsets of adaptive length: based on the density of masked bits in a subset, each subset can expand and adjoin neighbouring bits to its right or left. This adaptive behaviour of the Hamming subsets increases the accuracy of the Hamming distance computation and improves the performance of iris code matching. Results of applying the proposed method to the Chinese Academy of Sciences Institute of Automation (CASIA) V3.3 database show a performance of 99.96% and a false rejection rate of 0.06.
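For context, the standard masked (fractional) Hamming distance that the adaptive scheme refines can be sketched as follows: XOR the two iris codes, ignore bits flagged by either noise mask, and normalise by the number of valid bits. The adaptive subset-length logic itself is not reproduced here, and the code length and flip rate below are arbitrary.

```python
# Masked fractional Hamming distance between two binary iris codes.
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b                       # 1 where both bits are usable
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

rng = np.random.default_rng(1)
code_a = rng.integers(0, 2, 2048)
code_b = code_a.copy()
code_b[rng.choice(2048, 200, replace=False)] ^= 1   # flip ~10% of the bits
mask = np.ones(2048, dtype=int)                     # no occluded bits in this toy case
print(masked_hamming(code_a, code_b, mask, mask))   # ~0.098
```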
LinkBench: a database benchmark based on the Facebook social graph
Database benchmarks are an important tool for database researchers and practitioners that ease the process of making informed comparisons between different database hardware, software and configurations. Large scale web services such as social networks are a major and growing database application area, but currently there are few benchmarks that accurately model web service workloads. In this paper we present a new synthetic benchmark called LinkBench. LinkBench is based on traces from production databases that store "social graph" data at Facebook, a major social network. We characterize the data and query workload in many dimensions, and use the insights gained to construct a realistic synthetic benchmark. LinkBench provides a realistic and challenging test for persistent storage of social and web service data, filling a gap in the available tools for researchers, developers and administrators.
Manualized therapy for PTSD: flexing the structure of cognitive processing therapy.
OBJECTIVE This study tested a modified cognitive processing therapy (MCPT) intervention designed as a more flexible administration of the protocol. Number of sessions was determined by client progress toward a priori defined end-state criteria, "stressor sessions" were inserted when necessary, and therapy was conducted by novice CPT clinicians. METHOD A randomized, controlled, repeated measures, semicrossover design was utilized (a) to test the relative efficacy of the MCPT intervention compared with a symptom-monitoring delayed treatment (SMDT) condition and (b) to assess within-group variation in change with a sample of 100 male and female interpersonal trauma survivors with posttraumatic stress disorder (PTSD). RESULTS Hierarchical linear modeling analyses revealed that MCPT evidenced greater improvement on all primary (PTSD and depression) and secondary (guilt, quality of life, general mental health, social functioning, and health perceptions) outcomes compared with SMDT. After the conclusion of SMDT, participants crossed over to MCPT, resulting in a combined MCPT sample (n = 69). Of the 50 participants who completed MCPT, 58% reached end-state criteria prior to the 12th session, 8% at Session 12, and 34% between Sessions 12 and 18. Maintenance of treatment gains was found at the 3-month follow-up, with only 2 of the treated sample meeting criteria for PTSD. Use of stressor sessions did not result in poorer treatment outcomes. CONCLUSIONS Findings suggest that individuals respond at a variable rate to CPT, with significant benefit from additional therapy when indicated and excellent maintenance of gains. Insertion of stressor sessions did not alter the efficacy of the therapy.
Pantoprazole may enhance antiplatelet effect of enteric-coated aspirin in patients with acute coronary syndrome.
BACKGROUND Antiplatelet therapy has proven beneficial in the treatment of cardiovascular disease. Proton pump inhibitors (PPIs) are commonly used for gastroprotection in patients receiving antiplatelet therapy. Several trials have been carried out to establish interactions between PPIs, clopidogrel and soluble formulations of aspirin, but no studies with PPIs and enteric-coated (EC) forms of aspirin have been conducted. The aim of this study was to assess whether concomitant pantoprazole use influences the antiplatelet effect of EC aspirin in patients with acute coronary syndrome treated with percutaneous coronary intervention (PCI) and dual antiplatelet therapy. METHODS Thirty-one consecutive patients were prospectively enrolled in this randomized, crossover, open-label study. The first 16 patients were given 40 mg of pantoprazole orally for the first four days, while the next 15 subjects were treated with pantoprazole from the fifth to the eighth day of hospitalisation. Blood samples were collected at 6.00 a.m., 10.00 a.m., 2.00 p.m., and 7.00 p.m. on the fourth and eighth days of hospitalisation. Aggregation in response to arachidonic acid was assessed in whole blood on a new-generation impedance aggregometer. RESULTS Lower overall platelet aggregation was observed in patients treated with pantoprazole (p < 0.03). When platelet aggregation was analyzed separately at different times, the differences reached statistical significance six hours after the administration of pantoprazole and antiplatelet agents. The highest absolute difference in arachidonic acid-dependent aggregation was observed two hours after drug ingestion. CONCLUSIONS Co-administration of pantoprazole may enhance the antiplatelet effect of enteric-coated aspirin in patients with acute coronary syndrome undergoing PCI.