query_id: string (length 32)
query: string (length 0–35.7k)
positive_passages: list (length 1–7)
negative_passages: list (length 22–29)
subset: string (2 classes)
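A minimal sketch of how records with this schema might be loaded and turned into query–passage pairs, assuming the split has been exported locally as a JSON-lines file (the file name and field access below are illustrative assumptions, not part of the dataset itself):

import json
from collections import Counter

# Hypothetical local export of this split: one JSON object per line with the
# fields query_id, query, positive_passages, negative_passages, and subset.
path = "scidocs_reranking.jsonl"

subset_counts = Counter()
pairs = []
with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        subset_counts[record["subset"]] += 1
        # Each passage is a dict with "docid", "text", and "title".
        for passage in record["positive_passages"]:
            pairs.append((record["query"], passage["text"], 1))
        for passage in record["negative_passages"]:
            pairs.append((record["query"], passage["text"], 0))

print(subset_counts, len(pairs))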
44f0de3b4bb4c34188a380aad7efbf34
Effect of Iyengar yoga therapy for chronic low back pain
[ { "docid": "9876e4298f674a617f065f348417982a", "text": "On the basis of medical officers diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables viz, systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga imparted in the morning and in the evening with 1 hr/session. day-1 for a total period of 11-weeks. Medical treatment comprised drug intake every day for the whole experimental period. The result of pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.", "title": "" } ]
[ { "docid": "80ed0585f1b040f2af895f1067502899", "text": "In this paper, we present the concept of transmitting power without using wires i.e., transmitting power as microwaves from one place to another is in order to reduce the cost, transmission and distribution losses. This concept is known as Microwave Power transmission (MPT). We also discussed the technological developments in Wireless Power Transmission (WPT) which are required for the improment .The components which are requiredfor the development of Microwave Power transmission(MPT)are also mentioned along with the performance when they are connected to various devices at different frequency levels . The advantages, disadvantages, biological impacts and applications of WPT are also presented.", "title": "" }, { "docid": "a799bba2a5d56d45e3b0569119ee8ad2", "text": "There has been much research investigating team cognition, naturalistic decision making, and collaborative technology as it relates to real world, complex domains of practice. However, there has been limited work in incorporating naturalistic decision making models for supporting distributed team decision making. The aim of this research is to support human decision making teams using cognitive agents empowered by a collaborative Recognition-Primed Decision model. In this paper, we first describe an RPD-enabled agent architecture (R-CAST), in which we have implemented an internal mechanism of decision-making adaptation based on collaborative expectancy monitoring, and an information exchange mechanism driven by relevant cue analysis. We have evaluated R-CAST agents in a real-time simulation environment, feeding teams with frequent decision-making tasks under different tempo situations. While the result conforms to psychological findings that human team members are extremely sensitive to their workload in high-tempo situations, it clearly indicates that human teams, when supported by R-CAST agents, can perform better in the sense that they can maintain team performance at acceptable levels in high time pressure situations.", "title": "" }, { "docid": "9607eff43d60837e407d7fa07eb4650f", "text": "Given a network with node attributes, how can we identify communities and spot anomalies? How can we characterize, describe, or summarize the network in a succinct way? Community extraction requires a measure of quality for connected subgraphs (e.g., social circles). Existing subgraph measures, however, either consider only the connectedness of nodes inside the community and ignore the cross-edges at the boundary (e.g., density) or only quantify the structure of the community and ignore the node attributes (e.g., conductance). In this work, we focus on node-attributed networks and introduce: (1) a new measure of subgraph quality for attributed communities called normality, (2) a community extraction algorithm that uses normality to extract communities and a few characterizing attributes per community, and (3) a summarization and interactive visualization approach for attributed graph exploration. More specifically, (1) we first introduce a new measure to quantify the normality of an attributed subgraph. Our normality measure carefully utilizes structure and attributes together to quantify both the internal consistency and external separability. We then formulate an objective function to automatically infer a few attributes (called the “focus”) and respective attribute weights, so as to maximize the normality  score of a given subgraph. 
Most notably, unlike many other approaches, our measure allows for many cross-edges as long as they can be “exonerated;” i.e., either (i) are expected under a null graph model, and/or (ii) their boundary nodes do not exhibit the focus attributes. Next, (2) we propose AMEN (for Attributed Mining of Entity Networks), an algorithm that simultaneously discovers the communities and their respective focus in a given graph, with a goal to maximize the total normality. Communities for which a focus that yields high normality cannot be found are considered low quality or anomalous. Last, (3) we formulate a summarization task with a multi-criteria objective, which selects a subset of the communities that (i) cover the entire graph well, are (ii) high quality and (iii) diverse in their focus attributes. We further design an interactive visualization interface that presents the communities to a user in an interpretable, user-friendly fashion. The user can explore all the communities, analyze various algorithm-generated summaries, as well as devise their own summaries interactively to characterize the network in a succinct way. As the experiments on real-world attributed graphs show, our proposed approaches effectively find anomalous communities and outperform several existing measures and methods, such as conductance, density, OddBall, and SODA. We also conduct extensive user studies to measure the capability and efficiency that our approach provides to the users toward network summarization, exploration, and sensemaking.", "title": "" }, { "docid": "b540cb8f0f0825662d21a5e2ed100012", "text": "Social media platforms are popular venues for fashion brand marketing and advertising. With the introduction of native advertising, users don’t have to endure banner ads that hold very little saliency and are unattractive. Using images and subtle text overlays, even in a world of ever-depreciating attention span, brands can retain their audience and have a capacious creative potential. While an assortment of marketing strategies are conjectured, the subtle distinctions between various types of marketing strategies remain under-explored. This paper presents a qualitative analysis on the influence of social media platforms on different behaviors of fashion brand marketing. We employ both linguistic and computer vision techniques while comparing and contrasting strategic idiosyncrasies. We also analyze brand audience retention and social engagement hence providing suggestions in adapting advertising and marketing strategies over Twitter and Instagram.", "title": "" }, { "docid": "2eba831751ae88cfb69b7c4463df438a", "text": "Software engineers use a number of different types of software development technical review (SDTR) for the purpose of detecting defects in software products. This paper applies the behavioral theory of group performance to explain the outcomes of software reviews. A program of empirical research is developed, including propositions to both explain review performance and identify ways of improving review performance based on the specific strengths of individuals and groups. Its contributions are to clarify our understanding of what drives defect detection performance in SDTRs and to set an agenda for future research. In identifying individuals' task expertise as the primary driver of review performance, the research program suggests specific points of leverage for substantially improving review performance. 
It points to the importance of understanding software reading expertise and implies the need for a reconsideration of existing approaches to managing reviews. Index Terms—Inspections, walkthroughs, technical reviews, defects, defect detection, groups, group process, group size, expertise, reading, training, behavioral research, theory, research program.", "title": "" }, { "docid": "a8b26d719b7512634383c71c1e57c960", "text": "The method of finding high-quality answers has significant impact on user satisfaction in community question answering systems. However, due to the lexical gap between questions and answers as well as spam typically existing in user-generated content, filtering and ranking answers is very challenging. Previous solutions mainly focus on generating redundant features, or finding textual clues using machine learning techniques; none of them ever consider questions and their answers as relational data but instead model them as independent information. Moreover, they only consider the answers of the current question, and ignore any previous knowledge that would be helpful to bridge the lexical and semantic gap. We assume that answers are connected to their questions with various types of latent links, i.e., positive links indicating high-quality answers, negative links indicating incorrect answers or user-generated spam, and propose an analogical reasoning-based approach which measures the analogy between the new question-answer linkages and those of relevant knowledge which contains only positive links; the candidate answer which has the most analogous link is assumed to be the best answer. We conducted experiments based on 29.8 million Yahoo! Answers question-answer threads and showed the effectiveness of our approach.", "title": "" }, { "docid": "5fafb56408b75344fe7e55260a758180", "text": "This paper presents a new conversion method to automatically transform a constituent-based Vietnamese Treebank into dependency trees. On a dependency Treebank created according to our new approach, we examine two state-of-the-art dependency parsers: the MSTParser and the MaltParser. Experiments show that the MSTParser outperforms the MaltParser. To the best of our knowledge, we report the highest performances published to date in the task of dependency parsing for Vietnamese. Particularly, on gold standard POS tags, we get an unlabeled attachment score of 79.08% and a labeled attachment score of 71.66%.", "title": "" }, { "docid": "db26d71ec62388e5367eb0f2bb45ad40", "text": "Linear programming (LP) is one of the most popular and necessary optimization tools used for data analytics as well as in various scientific fields. However, the current state-of-the-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands of variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the BP solution converges to that of a corresponding LP formulation. 
Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions under which BP can solve LP [1,2]. Although there have been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop practical BP-based parallel algorithms for solving generic LPs, which show a 71x speed-up while sacrificing only 0.1% accuracy compared to the state-of-the-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3], and two follow-up journal papers [2,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimization in various fields, the proposed method has great potential for various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performances on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our conditions not only cover various prior studies including maximum weight matching, min-cost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short in terms of slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work that appeared in UAI (Uncertainty in Artificial Intelligence) and the IEEE Conference on Big Data [1,3] and follow-up work [2,4] under submission to major journals. Experiment: We first establish theoretical conditions under which Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions under which BP converges to the solution of LP. Our theoretical results unify almost all prior results about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithms for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic; thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state-of-the-art algorithms for several combinatorial optimization problems. 
------------------------------------------------------- Study 1 ------------------------------------------------------- We first introduce the background for our contributions. A joint distribution of n (binary) variables x = [x_i] ∈ {0,1}^n is called a graphical model (GM) if it factorizes as Pr[x] ∝ ∏_{α ∈ F} ψ_α(x_α) for x ∈ {0,1}^n, where the ψ_α are non-negative functions, so-called factors; F is a collection of subsets (each α ∈ F is a subset of {1,⋯,n} with |α| ≥ 2); and x_α is the projection of x onto the dimensions included in α. An assignment x* is called a maximum-a-posteriori (MAP) assignment if x* maximizes the probability. The following figure depicts the graphical relation between factors and variables. Figure 1: Factor graph for the graphical model with factors α1 = {1,3}, α2 = {1,2,4}, α3 = {2,3,4}. Now we introduce the algorithm, (max-product) BP, for approximating the MAP assignment in a graphical model. BP is an iterative procedure; at each iteration t, there are four messages between each variable x_i and every associated α ∈ F_i, where F_i := {α ∈ F : i ∈ α}. Messages are updated at each iteration; finally, given the messages, BP marginal beliefs are computed, and BP outputs the approximated MAP assignment x^BP = [x_i^BP]. Now, we are ready to introduce the main result of Study 1. Consider the following GM over x = [x_i] ∈ {0,1}^n and a weight vector w = [w_i], where the factor function ψ_α for α ∈ F is defined in terms of problem-specific matrices and vectors. Consider the Linear Program (LP) corresponding to the above GM. One can easily observe that the MAP assignment for the GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution x* ∈ {0,1}^n. The following theorem is our main result of Study 1, which provides sufficient conditions so that BP can indeed find the LP solution. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. ------------------------------------------------------- Study 2 ------------------------------------------------------- Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to a 71x speed-up while sacrificing only 0.1% accuracy compared to the state-of-the-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world datasets. Our evaluation shows that the framework achieves higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. 
However, (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might not be correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. Figure 2: Overview of our generic BP-based framework. To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph G = (V, E) and edge weights w = [w_e] ∈ R^|E|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming), where δ(v) is the set of edges incident to vertex v ∈ V. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issues of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics for the given problem to find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. function of (3). After th", "title": "" }, { "docid": "e8e8e6d288491e715177a03601500073", "text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody is the major disadvantage associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.", "title": "" }, { "docid": "8793b4ed20f6edce8cb61af1ff0aee55", "text": "This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e., the autonomous vehicles' ability to make appropriate driving decisions in city road traffic situations. 
The paper explains the overall controls system architecture, the decision making task decomposition, and focuses on how Multiple Criteria Decision Making (MCDM) is used in the process of selecting the most appropriate driving maneuver from the set of feasible ones. Experimental tests show that MCDM is suitable for this new application area.", "title": "" }, { "docid": "ce7175f868e2805e9e08e96a1c9738f4", "text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.", "title": "" }, { "docid": "da4d3534f0f8cf463d4dfff9760b68f4", "text": "While recommendation approaches exploiting different input sources have started to proliferate in the literature, an explicit study of the effect of the combination of heterogeneous inputs is still missing. On the other hand, in this context there are sides to recommendation quality requiring further characterisation and methodological research –a gap that is acknowledged in the field. We present a comparative study on the influence that different types of information available in social systems have on item recommendation. Aiming to identify which sources of user interest evidence –tags, social contacts, and user-item interaction data– are more effective to achieve useful recommendations, and in what aspect, we evaluate a number of content-based, collaborative filtering, and social recommenders on three datasets obtained from Delicious, Last.fm, and MovieLens. Aiming to determine whether and how combining such information sources may enhance over individual recommendation approaches, we extend the common accuracy-oriented evaluation practice with various metrics to measure further recommendation quality dimensions, namely coverage, diversity, novelty, overlap, and relative diversity between ranked item recommendations. 
We report empiric observations showing that exploiting tagging information by content-based recommenders provides high coverage and novelty, and combining social networking and collaborative filtering information by hybrid recommenders results in high accuracy and diversity. This, along with the fact that recommendation lists from the evaluated approaches had low overlap and relative diversity values between them, gives insights that meta-hybrid recommenders combining the above strategies may provide valuable, balanced item suggestions in terms of performance and non-performance metrics.", "title": "" }, { "docid": "802eb80255cf85991260da72b87238e1", "text": "This paper describes the vision-based control of a small autonomous aircraft following a road. The computer vision system detects natural features of the scene and tracks the roadway in order to determine relative yaw and lateral displacement between the aircraft and the road. Using only the vision measurements and onboard inertial sensors, a control strategy stabilizes the aircraft and follows the road. The road detection and aircraft control strategies have been verified by hardware in the loop (HIL) simulations over long stretches (several kilometers) of straight roads and in conditions of up to 5 m/s of prevailing wind. Hardware experiments have also been conducted using a modified radio-controlled aircraft. Successful road following was demonstrated over an airfield runway under variable lighting and wind conditions. The development of vision-based control strategies for unmanned aerial vehicles (UAVs), such as the ones presented here, enables complex autonomous missions in environments where typical navigation sensor like GPS are unavailable.", "title": "" }, { "docid": "8da939b67039eddb24db213337a65958", "text": "Alistair S. Jump* and Josep Peñuelas Unitat d’Ecofisiologia CSICCEAB-CREAF, Centre de Recerca Ecològica i Aplicacions Forestals, Universitat Autònoma de Barcelona, E-08193, Bellaterra, Barcelona, Spain *Correspondence: E-mail: [email protected] Abstract Climate is a potent selective force in natural populations, yet the importance of adaptation in the response of plant species to past climate change has been questioned. As many species are unlikely to migrate fast enough to track the rapidly changing climate of the future, adaptation must play an increasingly important role in their response. In this paper we review recent work that has documented climate-related genetic diversity within populations or on the microgeographical scale. We then describe studies that have looked at the potential evolutionary responses of plant populations to future climate change. We argue that in fragmented landscapes, rapid climate change has the potential to overwhelm the capacity for adaptation in many plant populations and dramatically alter their genetic composition. The consequences are likely to include unpredictable changes in the presence and abundance of species within communities and a reduction in their ability to resist and recover from further environmental perturbations, such as pest and disease outbreaks and extreme climatic events. Overall, a range-wide increase in extinction risk is likely to result. 
We call for further research into understanding the causes and consequences of the maintenance and loss of climate-related genetic diversity within populations.", "title": "" }, { "docid": "ddcf9180119dfa0b26d7b6d4c0ed958e", "text": "BACKGROUND\nHandling of upper lateral cartilages (ULCs) is of prime importance in rhinoplasty. This study presents the experiences among 2500 cases of rhinoplasty in the past 10 years for managing of ULCs to minimize unwilling results of the shape and functional problems of the nose.\n\n\nMETHODS\nAll cases of rhinoplasties were done by the same surgeon from 2002 to 2013. Management of ULCs changed from resection to preserving the ULCs and to enhance their structural and functional roles. The techniques were spreader grafts, suturing of ULC together at the level or above the septum, using ULCs as auto-spreader flaps and very rarely trimming of ULCs unilaterally or bilaterally for making symmetric dorsal aesthetic lines. Fifty cases were operated based on this classification. Most cases were in type II and III. There were 7 cases in type I and 8 cases in type IV.\n\n\nRESULTS\nAmong most cases, the results were satisfactory although there were 8 cases for revision and among them, 2 cases had some fullness on dorsum and supra-tip because of inappropriate judgment on keeping the relationship between dorsum and tip. The problems in the shape and airways role of the nose reduced dramatically and a useful algorithm was presented.\n\n\nCONCLUSION\nULCs have great important roles in shape and function of nose. Preserving methods to keep these structures are of importance in surgical treatments of primary rhinoplasties. The presented algorithm helps to manage the ULCs in different anatomic types of the noses especially for surgeons who are in learning curve period.", "title": "" }, { "docid": "7731315bb30b1888caf4be87aa38a108", "text": "The problem of scheduling is concerned with searching for optimal (or near-optimal) schedules subject to a number of constraints. A variety of approaches have been developed to solve the problem of scheduling. However, many of these approaches are often impractical in dynamic real-world environments where there are complex constraints and a variety of unexpected disruptions. In most real-world environments, scheduling is an ongoing reactive process where the presence of real-time information continually forces reconsideration and revision of pre-established schedules. Scheduling research has largely ignored this problem, focusing instead on optimisation of static schedules. This paper outlines the limitations of static approaches to scheduling in the presence of real-time information and presents a number of issues that have come up in recent years on dynamic scheduling. The paper defines the problem of dynamic scheduling and provides a review of the state of the art of currently developing research on dynamic scheduling. The principles of several dynamic scheduling techniques, namely, dispatching rules, heuristics, meta-heuristics, artificial intelligence techniques, and multi-agent systems are described in detail, followed by a discussion and comparison of their potential.", "title": "" }, { "docid": "f935bdde9d4571f50e47e48f13bfc4b8", "text": "BACKGROUND\nThe incidence of microcephaly in Brazil in 2015 was 20 times higher than in previous years. Congenital microcephaly is associated with genetic factors and several causative agents. 
Epidemiological data suggest that microcephaly cases in Brazil might be associated with the introduction of Zika virus. We aimed to detect and sequence the Zika virus genome in amniotic fluid samples of two pregnant women in Brazil whose fetuses were diagnosed with microcephaly.\n\n\nMETHODS\nIn this case study, amniotic fluid samples from two pregnant women from the state of Paraíba in Brazil whose fetuses had been diagnosed with microcephaly were obtained, on the recommendation of the Brazilian health authorities, by ultrasound-guided transabdominal amniocentesis at 28 weeks' gestation. The women had presented at 18 weeks' and 10 weeks' gestation, respectively, with clinical manifestations that could have been symptoms of Zika virus infection, including fever, myalgia, and rash. After the amniotic fluid samples were centrifuged, DNA and RNA were extracted from the purified virus particles before the viral genome was identified by quantitative reverse transcription PCR and viral metagenomic next-generation sequencing. Phylogenetic reconstruction and investigation of recombination events were done by comparing the Brazilian Zika virus genome with sequences from other Zika strains and from flaviviruses that occur in similar regions in Brazil.\n\n\nFINDINGS\nWe detected the Zika virus genome in the amniotic fluid of both pregnant women. The virus was not detected in their urine or serum. Tests for dengue virus, chikungunya virus, Toxoplasma gondii, rubella virus, cytomegalovirus, herpes simplex virus, HIV, Treponema pallidum, and parvovirus B19 were all negative. After sequencing of the complete genome of the Brazilian Zika virus isolated from patient 1, phylogenetic analyses showed that the virus shares 97-100% of its genomic identity with lineages isolated during an outbreak in French Polynesia in 2013, and that in both envelope and NS5 genomic regions, it clustered with sequences from North and South America, southeast Asia, and the Pacific. After assessing the possibility of recombination events between the Zika virus and other flaviviruses, we ruled out the hypothesis that the Brazilian Zika virus genome is a recombinant strain with other mosquito-borne flaviviruses.\n\n\nINTERPRETATION\nThese findings strengthen the putative association between Zika virus and cases of microcephaly in neonates in Brazil. Moreover, our results suggest that the virus can cross the placental barrier. As a result, Zika virus should be considered as a potential infectious agent for human fetuses. Pathogenesis studies that confirm the tropism of Zika virus for neuronal cells are warranted.\n\n\nFUNDING\nConsellho Nacional de Desenvolvimento e Pesquisa (CNPq), Fundação de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ).", "title": "" }, { "docid": "dd7ab988d8a40e6181cd37f8a1b1acfa", "text": "In areas approaching malaria elimination, human mobility patterns are important in determining the proportion of malaria cases that are imported or the result of low-level, endemic transmission. A convenience sample of participants enrolled in a longitudinal cohort study in the catchment area of Macha Hospital in Choma District, Southern Province, Zambia, was selected to carry a GPS data logger for one month from October 2013 to August 2014. Density maps and activity space plots were created to evaluate seasonal movement patterns. Time spent outside the household compound during anopheline biting times, and time spent in malaria high- and low-risk areas, were calculated. 
There was evidence of seasonal movement patterns, with increased long-distance movement during the dry season. A median of 10.6% (interquartile range (IQR): 5.8-23.8) of time was spent away from the household, which decreased during anopheline biting times to 5.6% (IQR: 1.7-14.9). The per cent of time spent in malaria high-risk areas for participants residing in high-risk areas ranged from 83.2% to 100%, but ranged from only 0.0% to 36.7% for participants residing in low-risk areas. Interventions targeted at the household may be more effective because of restricted movement during the rainy season, with limited movement between high- and low-risk areas.", "title": "" }, { "docid": "eef1e51e4127ed481254f97963496f48", "text": "-Vehicular ad hoc networks (VANETs) are wireless networks that do not require any fixed infrastructure. Regarding traffic safety applications for VANETs, warning messages have to be quickly and smartly disseminated in order to reduce the required dissemination time and to increase the number of vehicles receiving the traffic warning information. Adaptive techniques for VANETs usually consider features related to the vehicles in the scenario, such as their density, speed, and position, to adapt the performance of the dissemination process. These approaches are not useful when trying to warn the highest number of vehicles about dangerous situations in realistic vehicular environments. The Profile-driven Adaptive Warning Dissemination Scheme (PAWDS) designed to improve the warning message dissemination process. PAWDS system that dynamically modifies some of the key parameters of the propagation process and it cannot detect the vehicles which are in the dangerous position. Proposed system identifies the vehicles which are in the dangerous position and to send warning messages immediately. The vehicles must make use of all the available information efficiently to predict the position of nearby vehicles. Keywords— PAWDS, VANET, Ad hoc network , OBU , RSU, GPS.", "title": "" }, { "docid": "5c11736439fe488b389e400141ccfdb0", "text": "We propose a hierarchical model for sequential data that learns a tree on-thefly, i.e. while reading the sequence. In the model, a recurrent network adapts its structure and reuses recurrent weights in a recursive manner. This creates adaptive skip-connections that ease the learning of long-term dependencies. The tree structure can either be inferred without supervision through reinforcement learning, or learned in a supervised manner. We provide preliminary experiments in a novel Math Expression Evaluation (MEE) task, which is explicitly crafted to have a hierarchical tree structure that can be used to study the effectiveness of our model. Additionally, we test our model in a wellknown propositional logic and language modelling tasks. Experimental results show the potential of our approach.", "title": "" } ]
scidocsrr
3780aef416b28a16d5280e0ecdb02ce0
How to Fit when No One Size Fits
[ { "docid": "0485beab9d781e99046042a15ea913c5", "text": "Systems for processing continuous monitoring queries over data streams must be adaptive because data streams are often bursty and data characteristics may vary over time. We focus on one particular type of adaptivity: the ability to gracefully degrade performance via \"load shedding\" (dropping unprocessed tuples to reduce system load) when the demands placed on the system cannot be met in full given available resources. Focusing on aggregation queries, we present algorithms that determine at what points in a query plan should load shedding be performed and what amount of load should be shed at each point in order to minimize the degree of inaccuracy introduced into query answers. We report the results of experiments that validate our analytical conclusions.", "title": "" } ]
[ { "docid": "01997730a1547ac32d1a76e49d2e69e1", "text": "Scrotal calcinosis is a rarely seen benign disease in urological practice. It was first described by Lewinsky in 1883. The etiology is considered to be idiopathic and it is not known exactly. Scrotal calcinosis is usually asymptomatic. Patients live with their disease for a long time until they start to mind their appearances. Scrotal skin lesions can be solitary or multiple and usually are not associated with hormonal or metabolic abnormalities. Histologically, scrotal calcinosis is characterized by the presence of calcium deposits in the dermis, often surrounded by a granulomatous reaction. In this case report, we present a rare scrotal calcinosis case of a 28-year-old man who presented with cosmetic symptoms causing scrotal nodules with no history of metabolic, systemic, neoplastic, or autoimmune diseases.", "title": "" }, { "docid": "8a3dba8aa5aa8cf69da21079f7e36de6", "text": "This letter presents a novel technique for synthesis of coupled-resonator filters with inter-resonator couplings varying linearly with frequency. The values of non-zero elements of the coupling matrix are found by solving a nonlinear least squares problem involving eigenvalues of matrix pencils derived from the coupling matrix and reference zeros and poles of scattering parameters. The proposed method was verified by numerical tests carried out for various coupling schemes including triplets and quadruplets for which the frequency-dependent coupling was found to produce an extra zero.", "title": "" }, { "docid": "8a812c0ec6f8d29f9cbff4af2fa1c868", "text": "Due to the demand for depth maps of higher quality than possible with a single depth imaging technique today, there has been an increasing interest in the combination of different depth sensors to produce a “super-camera” that is more than the sum of the individual parts. In this survey paper, we give an overview over methods for the fusion of Time-ofFlight (ToF) and passive stereo data as well as applications of the resulting high quality depth maps. Additionally, we provide a tutorial-based introduction to the principles behind ToF stereo fusion and the evaluation criteria used to benchmark these methods.", "title": "" }, { "docid": "d0486fc1c105cd3e13ca855221462973", "text": "Automatic segmentation of an organ and its cystic region is a prerequisite of computer-aided diagnosis. In this paper, we focus on pancreatic cyst segmentation in abdominal CT scan. This task is important and very useful in clinical practice yet challenging due to the low contrast in boundary, the variability in location, shape and the different stages of the pancreatic cancer. Inspired by the high relevance between the location of a pancreas and its cystic region, we introduce extra deep supervision into the segmentation network, so that cyst segmentation can be improved with the help of relatively easier pancreas segmentation. Under a reasonable transformation function, our approach can be factorized into two stages, and each stage can be efficiently optimized via gradient back-propagation throughout the deep networks. We collect a new dataset with 131 pathological samples, which, to the best of our knowledge, is the largest set for pancreatic cyst segmentation. 
Without human assistance, our approach reports a 63.44% average accuracy, measured by the Dice-Sørensen coefficient (DSC), which is higher than the number (60.46%) without deep supervision.", "title": "" }, { "docid": "8bcc223389b7cc2ce2ef4e872a029489", "text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.", "title": "" }, { "docid": "a8de67cc99337dd8cdb92e1d6859f211", "text": "We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP). We enrich Saul with components that are necessary for a broad range of learning based Natural Language Processing tasks at various levels of granularity. We illustrate these advances using three different, well-known NLP problems, and show how these generic learning and inference modules can directly exploit Saul’s graph-based data representation. These properties allow the programmer to easily switch between different model formulations and configurations, and consider various kinds of dependencies and correlations among variables of interest with minimal programming effort. We argue that Saul provides an extremely useful paradigm both for the design of advanced NLP systems and for supporting advanced research in NLP.", "title": "" }, { "docid": "98b4e2d51efde6f4f8c43c29650b8d2f", "text": "New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. 
As we argue, artificial evolution together with morphogenesis is not only nice to have but is in fact a necessary tool for designing embodied agents.", "title": "" }, { "docid": "3dc4384744f2f85983bc58b0a8a241c6", "text": "OBJECTIVE\nTo define a map of interradicular spaces where miniscrew can be likely placed at a level covered by attached gingiva, and to assess if a correlation between crowding and availability of space exists.\n\n\nMETHODS\nPanoramic radiographs and digital models of 40 patients were selected according to the inclusion criteria. Interradicular spaces were measured on panoramic radiographs, while tooth size-arch length discrepancy was assessed on digital models. Statistical analysis was performed to evaluate if interradicular spaces are influenced by the presence of crowding.\n\n\nRESULTS\nIn the mandible, the most convenient sites for miniscrew insertion were in the spaces comprised between second molars and first premolars; in the maxilla, between first molars and second premolars as well as between canines and lateral incisors and between the two central incisors. The interradicular spaces between the maxillary canines and lateral incisors, and between mandibular first and second premolars revealed to be influenced by the presence of dental crowding.\n\n\nCONCLUSIONS\nThe average interradicular sites map hereby proposed can be used as a general guide for miniscrew insertion at the very beginning of orthodontic treatment planning. Then, the clinician should consider the amount of crowding: if this is large, the actual interradicular space in some areas might be significantly different from what reported on average. Individualized radiographs for every patient are still recommended.", "title": "" }, { "docid": "cb4e5999dc1b8b0df8c1406c1227c3b0", "text": "Since adoption of the 2011 National Electrical Code®, many photovoltaic (PV) direct current (DC) arc-fault circuit interrupters (AFCIs) and arc-fault detectors (AFDs) have been introduced into the PV market. To meet the Code requirements, these products must be listed to Underwriters Laboratories (UL) 1699B Outline of Investigation. The UL 1699B test sequence was designed to ensure basic arc-fault detection capabilities with resistance to unwanted tripping; however, field experiences with AFCI/AFD devices have shown mixed results. In this investigation, independent laboratory tests were performed with UL-listed, UL-recognized, and prototype AFCI/AFDs to reveal any limitations with state-of-the-art arc-fault detection products. By running AFCIs and stand-alone AFDs through realistic tests beyond the UL 1699B requirements, many products were found to be sensitive to unwanted tripping or were ineffective at detecting harmful arc-fault events. Based on these findings, additional experiments are encouraged for inclusion in the AFCI/AFD design process and the certification standard to improve products entering the market.", "title": "" }, { "docid": "f393b6e00ef1e97f683a5dace33e40ff", "text": "s on human factors in computing systems (pp. 815–828). ACM New York, NY, USA. Hudlicka, E. (1997). Summary of knowledge elicitation techniques for requirements analysis (Course material for human computer interaction). Worcester Polytechnic Institute. Kaptelinin, V., & Nardi, B. (2012). Affordances in HCI: Toward a mediated action perspective. In Proceedings of CHI '12 (pp. 
967–976).", "title": "" }, { "docid": "b6b9e1eaf17f6cdbc9c060e467021811", "text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.", "title": "" }, { "docid": "0a2795008a60a8b3f9c3a4a6834de30f", "text": "Infection, as a common postoperative complication of orthopedic surgery, is the main reason leading to implant failure. Silver nanoparticles (AgNPs) are considered as a promising antibacterial agent and always used to modify orthopedic implants to prevent infection. To optimize the implants in a reasonable manner, it is critical for us to know the specific antibacterial mechanism, which is still unclear. In this review, we analyzed the potential antibacterial mechanisms of AgNPs, and the influences of AgNPs on osteogenic-related cells, including cellular adhesion, proliferation, and differentiation, were also discussed. In addition, methods to enhance biocompatibility of AgNPs as well as advanced implants modifications technologies were also summarized.", "title": "" }, { "docid": "ce53aa803d587301a47166c483ecec34", "text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.", "title": "" }, { "docid": "3b72a89cdd3194f29ebf5db2085cb855", "text": "Spiking neural network (SNN) models describe key aspects of neural function in a computationally efficient manner and have been used to construct large-scale brain models. Large-scale SNNs are challenging to implement, as they demand high-bandwidth communication, a large amount of memory, and are computationally intensive. Additionally, tuning parameters of these models becomes more difficult and time-consuming with the addition of biologically accurate descriptions. To meet these challenges, we have developed CARLsim 3, a user-friendly, GPU-accelerated SNN library written in C/C++ that is capable of simulating biologically detailed neural models. 
The present release of CARLsim provides a number of improvements over our prior SNN library to allow the user to easily analyze simulation data, explore synaptic plasticity rules, and automate parameter tuning. In the present paper, we provide examples and performance benchmarks highlighting the library's features.", "title": "" }, { "docid": "a63cc19137ead27acf5530c0bdb924f5", "text": "We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications.", "title": "" }, { "docid": "5894fd2d3749df78afb49b27ad26f459", "text": "Information security policy compliance (ISP) is one of the key concerns that face organizations today. Although technical and procedural measures help improve information security, there is an increased need to accommodate human, social and organizational factors. Despite the plethora of studies that attempt to identify the factors that motivate compliance behavior or discourage abuse and misuse behaviors, there is a lack of studies that investigate the role of ethical ideology per se in explaining compliance behavior. The purpose of this research is to investigate the role of ethics in explaining Information Security Policy (ISP) compliance. In that regard, a model that integrates behavioral and ethical theoretical perspectives is developed and tested. Overall, analyses indicate strong support for the validation of the proposed theoretical model.", "title": "" }, { "docid": "b79bf80221c893f40abd7fd6b8a7145a", "text": "Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e, casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint (feature) to a subsequent encoder layer and is targeted at improving the representation learning process. There are several advantages to this design, e.g., it allows an arbitrary number of attention mechanisms to be casted, allowing for multiple attention types (e.g., co-attention, intra-attention) and attention variants (e.g., alignment-pooling, max-pooling, mean-pooling) to be executed simultaneously. This not only eliminates the costly need to tune the nature of the co-attention layer, but also provides greater extents of explainability to practitioners. Via extensive experiments on four well-known benchmark datasets, we show that MCAN achieves state-of-the-art performance. 
On the Ubuntu Dialogue Corpus, MCAN outperforms existing state-of-the-art models by 9%. MCAN also achieves the best performing score to date on the well-studied TrecQA dataset.", "title": "" }, { "docid": "7f067f869481f06e865880e1d529adc8", "text": "Distributed Denial of Service (DDoS) is defined as an attack in which multiple compromised systems are made to attack a single target to make the services unavailable for legitimate users. It is an attack designed to render a computer or network incapable of providing normal services. A DDoS attack uses many compromised intermediate systems, known as botnets, which are remotely controlled by an attacker to launch these attacks. A DDoS attack basically results in the situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to the entire internet world today. Any compromise to computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. in a collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand the behaviour of DDoS attacks because they affect the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDoS attacks is a critical need for cyberspace. Our rigorous survey study presented in this paper describes a platform for the study of evolution of DDoS attacks and their defense mechanisms.", "title": "" }, { "docid": "8f1a5cba150b389eaa8f6e3c1382ac3d", "text": "Recent studies have explored a promising method to measure driver workload—the Peripheral Detection Task (PDT). The PDT has been suggested as a standard method to assess safety-relevant workload from the use of in-vehicle information systems (IVIS) while driving. This paper reports the German part of a Swedish-German cooperative study in which the PDT was investigated focusing on its specific sensitivity compared with alternative workload measures. Forty-nine professional drivers performed the PDT while following route guidance system instructions on an inner-city route. The route consisted of both highly demanding and less demanding sections. Two route guidance systems that differed mainly in display size and display organization were compared. Subjective workload ratings (NASA-TLX) as well as physiological measures (heart rate and heart rate variability) were collected as reference data. The PDT showed sensitivity to route demands. Despite their differing displays, both route guidance systems affected PDT performance similarly in intervals of several minutes. However, the PDT proved sensitive to peaks in workload from IVIS use and from the driving task. Peaks in workload were studied by video analyses of four selected subsections on the route. Subjective workload ratings reflected overall route demands and also did not indicate differing effects of the two displays. The physiological measures were less sensitive to workload and indicated emotional strain as well. An assessment of the PDT as a method for the measurement of safety-related workload is given. 
", "title": "" }, { "docid": "2751b54b456e5c105d9374b6c64c1985", "text": "Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass.", "title": "" } ]
scidocsrr
af661637b41e03b218bc3919969fb2e5
A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation
[ { "docid": "82866d253fda63fd7a1e70e9a0f4252e", "text": "We introduce a new class of maximization-expectation (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.", "title": "" } ]
[ { "docid": "6a8a849bc8272a7b73259e732e3be81b", "text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.", "title": "" }, { "docid": "bb7ac8c753d09383ecbf1c8cd7572d05", "text": "Skills learned through (deep) reinforcement learning often generalizes poorly across domains and re-training is necessary when presented with a new task. We present a framework that combines techniques in formal methods with reinforcement learning (RL). The methods we provide allows for convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards, and construct new skills from existing ones with little to no additional exploration. We evaluate the proposed methods in a simple grid world simulation as well as a more complicated kitchen environment in AI2Thor (Kolve et al. [2017]).", "title": "" }, { "docid": "c9a6fb06acb9e33a607c7f183ff6a626", "text": "The objective of the study was to examine the correlations between intracranial aneurysm morphology and wall shear stress (WSS) to identify reliable predictors of rupture risk. Seventy-two intracranial aneurysms (41 ruptured and 31 unruptured) from 63 patients were studied retrospectively. All aneurysms were divided into two categories: narrow (aspect ratio ≥1.4) and wide-necked (aspect ratio <1.4 or neck width ≥4 mm). Computational fluid dynamics was used to determine the distribution of WSS, which was analyzed between different morphological groups and between ruptured and unruptured aneurysms. Sections of the walls of clipped aneurysms were stained with hematoxylin–eosin, observed under a microscope, and photographed. Ruptured aneurysms were statistically more likely to have a greater low WSS area ratio (LSAR) (P = 0.001) and higher aneurysms parent WSS ratio (P = 0.026) than unruptured aneurysms. Narrow-necked aneurysms were statistically more likely to have a larger LSAR (P < 0.001) and lower values of MWSS (P < 0.001), mean aneurysm-parent WSS ratio (P < 0.001), HWSS (P = 0.012), and the highest aneurysm-parent WSS ratio (P < 0.001) than wide-necked aneurysms. The aneurysm wall showed two different pathological changes associated with high or low WSS in wide-necked aneurysms. Aneurysm morphology could affect the distribution and magnitude of WSS on the basis of differences in blood flow. Both high and low WSS could contribute to focal wall damage and rupture through different mechanisms associated with each morphological type.", "title": "" }, { "docid": "7d228b0da98868e92ab5ae13abddb29b", "text": "An important challenge for human-like AI is compositional semantics. Recent research has attempted to address this by using deep neural networks to learn vector space embeddings of sentences, which then serve as input to other tasks. 
We present a new dataset for one such task, “natural language inference” (NLI), that cannot be solved using only word-level knowledge and requires some compositionality. We find that the performance of state of the art sentence embeddings (InferSent; Conneau et al., 2017) on our new dataset is poor. We analyze the decision rules learned by InferSent and find that they are consistent with simple heuristics that are ecologically valid in its training dataset. Further, we find that augmenting training with our dataset improves test performance on our dataset without loss of performance on the original training dataset. This highlights the importance of structured datasets in better understanding and improving AI systems.", "title": "" }, { "docid": "ae7ee96b7a525f82c6d8e03e828f32a1", "text": "Teachers are increasingly required to incorporate information and communications technologies (ICT) into the modern classroom. The implementation of ICT into the classroom should not be seen as merely an add-on, however, but should be included with purpose; meaningfully implemented based on pedagogy. The aim of this study is to explore potential factors that might predict purposeful implementation of ICT into the classroom. Using an online survey, skills in and beliefs about ICT were assessed, as well as the teaching and learning beliefs of forty-five K-12 teachers. Hierarchical multiple regression revealed that competence using ICT and a belief in the importance of ICT for student outcomes positively predicted purposeful implementation of ICT into the classroom, while endorsing more traditional content-based learning was a negative predictor. These three predictors explained 47% of the variance in purposeful implementation of ICT into the classroom. ICT competence was unpacked further with correlations. This revealed that there is a relationship between teachers having ICT skills that can personalize, engage, and create an interactive atmosphere for students and purposeful implementation of ICT into the classroom. Based on these findings, suggestions are made of important focal areas for encouraging teachers to purposefully implement ICT into their classrooms.", "title": "" }, { "docid": "9167fbdd1fe4d5c17ffeaf50c6fd32b7", "text": "For many networked games, such as the Defense of the Ancients and StarCraft series, the unofficial leagues created by players themselves greatly enhance user-experience, and extend the success of each game. Understanding the social structure that players of these game s implicitly form helps to create innovative gaming services to the benefit of both players and game operators. But how to extract and analyse the implicit social structure? We address this question by first proposing a formalism consisting of various ways to map interaction to social structure, and apply this to real-world data collected from three different game genres. We analyse the implications of these mappings for in-game and gaming-related services, ranging from network and socially-aware matchmaking of players, to an investigation of social network robustnes against player departure.", "title": "" }, { "docid": "e1440ec680f070fed95ececf1c71949d", "text": "Cryptocurrency wallets store the wallets private key(s), and hence, are a lucrative target for attackers. With possession of the private key, an attacker virtually owns all of the currency in the compromised wallet. 
Managing cryptocurrency wallets offline, in isolated (’air-gapped’) computers, has been suggested in order to secure the private keys from theft. Such air-gapped wallets are often referred to as ’cold wallets.’ In this paper we show how private keys can be exfiltrated from air-gapped wallets. In the adversarial attack model, the attacker infiltrates the offline wallet, infecting it with malicious code. The malware can be preinstalled or pushed in during the initial installation of the wallet, or it can infect the system when removable media (e.g., USB flash drive) is inserted into the wallet’s computer in order to sign a transaction. These attack vectors have repeatedly been proven feasible in the last decade (e.g., [1],[2],[3],[4],[5],[6],[7],[8],[9],[10]). Having obtained a foothold in the wallet, an attacker can utilize various air-gap covert channel techniques (bridgeware [11]) to jump the airgap and exfiltrate the wallets private keys. We evaluate various exfiltration techniques, including physical, electromagnetic, electric, magnetic, acoustic, optical, and thermal techniques. This research shows that although cold wallets provide a high degree of isolation, its not beyond the capability of motivated attackers to compromise such wallets and steal private keys from them. We demonstrate how a 256-bit private key (e.g., bitcoin’s private keys) can be exfiltrated from an offline, air-gapped wallet of a fictional character named Satoshi within a matter of seconds.", "title": "" }, { "docid": "deb1d53be28bfbd57dc2bdce4115f10d", "text": "Previous research to investigate the interaction between malaria infection and tumor progression has revealed that malaria infection can potentiate host immune response against tumor in tumor-bearing mice. Exosomes may play key roles in disseminating pathogenic host-derived molecules during infection because several studies have shown the involvement and roles of extracellular vesicles in cell–cell communication. However, the role of exosomes generated during Plasmodium infection in tumor growth, progression and angiogenesis has not been studied either in animals or in the clinics. To test this hypothesis, we designed an animal model to generate and isolate exosomes from mice which were subsequently used to treat the tumor. Intra-tumor injection of exosomes derived from the plasma of Plasmodium-infected mice provided significantly reduced Lewis lung cancer growth in mice. We further co-cultured the isolated exosomes with endothelial cells and observed significantly reduced expression of VEGFR2 and migration in the endothelial cells. Interestingly, high level of micro-RNA (miRNA) 16/322/497/17 was detected in the exosomes derived from the plasma of mice infected with Plasmodium compared with those from control mice. We observed that overexpression of the miRNA 16/322/497/17 in endothelial cell corresponded with decreased expression of VEGFR2, inhibition of angiogenesis and inhibition of the miRNA 16/322/497/17 significantly alleviated these effects. These data provide novel scientific evidence of the interaction between Plasmodium infection and lung cancer growth and angiogenesis.", "title": "" }, { "docid": "b210df85635af27665efe9811b2123bf", "text": "Edge detection plays a significant role in image processing and performance of high-level tasks such as image segmentation and object recognition depends on its efficiency. It is clear that accurate edge map generation is more difficult when images are corrupted with noise. 
Moreover, most of edge detection methods have parameters which must be set manually. Here we propose a new color edge detector based on a statistical test, which is robust to noise. Also, the parameters of this method will be set automatically based on image content. To show the effectiveness of the proposed method, four state-of-the-art edge detectors are implemented and the results are compared. Experimental results on five of the most well-known edge detection benchmarks show that the proposed method is robust to noise. The performance of our method for lower levels of noise is very comparable to the existing approaches, whose performances highly depend on their parameter tuning stage. However, for higher levels of noise, the observed results significantly highlight the superiority of the proposed method over the existing edge detection methods, both quantitatively and qualitatively.", "title": "" }, { "docid": "7b1a6768cc6bb975925a754343dc093c", "text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.", "title": "" }, { "docid": "f060713abe9ada73c1c4521c5ca48ea9", "text": "In this paper, we revisit the classical Bayesian face recognition method by Baback Moghaddam et al. and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this “difference” formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Face in Wild (LFW) dataset. Comparing with current best commercial system, we reduced the error rate by 10%.", "title": "" }, { "docid": "49fcbc3c543fb9152bc55c71aec586de", "text": "The rapid growth of e-commerce has provided both an opportunity to create new values in the online marketplace and dramatic competition to survive. To survive in a competitive environment, Internet shopping malls attempt to adopt and use Customer Relationship Management. However, previous researches focused on navigation patterns of customers with membership. Therefore, they failed to apply real time web marketing to anonymous customers who navigate web pages without personal login. To overcome the problems noted above, we propose a methodology for predicting the purchase probability of anonymous customers to support real time web marketing. The proposed methodology is composed of two phases: (1) extracting purchase patterns and (2) predicting purchase probability. Purchase pattern provides marketing implications to web marketers while the purchase probability provides an opportunity for real time web marketing by predicting the purchase probability of an anonymous customer. 
The proposed methodology can be applied to the real time web marketing such as navigation shortcuts, product recommendations and better customer inducement since anonymous customers are included in marketing target and significant navigation pattern for purchase is identified. q 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3a680786aa8525d75e9234f50c2b6600", "text": "The establishment of policy is key to the implementation of actions for health. We review the nature of policy and the definition and directions of health policy. In doing so, we explicitly cast a health political science gaze on setting parameters for researching policy change for health. A brief overview of core theories of the policy process for health promotion is presented, and illustrated with empirical evidence. The key arguments are that (a) policy is not an intervention, but drives intervention development and implementation; (b) understanding policy processes and their pertinent theories is pivotal for the potential to influence policy change; (c) those theories and associated empirical work need to recognise the wicked, multi-level, and incremental nature of elements in the process; and, therefore, (d) the public health, health promotion, and education research toolbox should more explicitly embrace health political science insights. The rigorous application of insights from and theories of the policy process will enhance our understanding of not just how, but also why health policy is structured and implemented the way it is.", "title": "" }, { "docid": "15341073c2c47072f94bd41574312c3c", "text": "In this paper, we review some advances made recently in the study of mobile phone datasets. This area of research has emerged a decade ago, with the increasing availability of large-scale anonymized datasets, and has grown into a stand-alone topic. We survey the contributions made so far on the social networks that can be constructed with such data, the study of personal mobility, geographical partitioning, urban planning, and help towards development as well as security and privacy issues.", "title": "" }, { "docid": "5717c8148c93b18ec0e41580a050bf3a", "text": "Verifiability is one of the core editing principles in Wikipedia, editors being encouraged to provide citations for the added content. For a Wikipedia article, determining the citation span of a citation, i.e. what content is covered by a citation, is important as it helps decide for which content citations are still missing. We are the first to address the problem of determining the citation span in Wikipedia articles. We approach this problem by classifying which textual fragments in an article are covered by a citation. We propose a sequence classification approach where for a paragraph and a citation, we determine the citation span at a finegrained level. We provide a thorough experimental evaluation and compare our approach against baselines adopted from the scientific domain, where we show improvement for all evaluation metrics.", "title": "" }, { "docid": "7cac405dcd832b0eeebbfa634ca2e99b", "text": "We have previously proposed a statistical method for estimating the pronunciation proficiency and intelligibility of presentations made in English by non-native speakers. 
To investigate the relationship between various acoustic measures and the pronunciation score and intelligibility, we statistically analyzed the speaker’s actual utterances to find combinations of acoustic features with a high correlation between the score estimated by a linear regression model and the score perceived by native English teachers. In this paper, we examined the quality of new acoustic features that are useful when used in combination with the system’s estimates of pronunciation score and intelligibility. Results showed that the best combination of acoustic features produced correlation coefficients of 0.929 and 0.753 for pronunciation and intelligibility, respectively, using open data for speakers at the 10-sentence level.", "title": "" }, { "docid": "9504ed439de69c77ebdce2b148defbe7", "text": "While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly. Efficient computational methods for condensing and simplifying data are thus becoming vital for extracting actionable insights. In particular, while data summarization techniques have been studied extensively, only recently has summarizing interconnected data, or graphs, become popular. This survey is a structured, comprehensive overview of the state-of-the-art methods for summarizing graph data. We first broach the motivation behind and the challenges of graph summarization. We then categorize summarization approaches by the type of graphs taken as input and further organize each category by core methodology. Finally, we discuss applications of summarization on real-world graphs and conclude by describing some open problems in the field.", "title": "" }, { "docid": "29cc827b8990bed2b8fba1c974a51fdf", "text": "The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on environmental changes or on the wear of the devices. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the platform parameters. The proposed approach performs on-line estimation of the parameters and it is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real world data using different types of robotic platforms.", "title": "" }, { "docid": "9e6bfc7b5cc87f687a699c62da013083", "text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. 
The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.", "title": "" }, { "docid": "5068191083a9a14751b88793dd96e7d3", "text": "The electric motor is the main component in an electrical vehicle. Its power density is directly influenced by the winding. For this reason, it is relevant to investigate the influences of coil production on the quality of the stator. The examined stator in this article is wound with the multi-wire needle winding technique. With this method, the placing of the wires can be precisely guided leading to small winding heads. To gain a high winding quality with small winding resistances, the control of the tensile force during the winding process is essential. The influence of the tensile force on the winding resistance during the winding process with the multiple needle winding technique will be presented here. To control the tensile force during the winding process, the stress on the wire during the winding process needs to be examined first. Thus a model will be presented to investigate the tensile force which realizes a coupling between the multibody dynamics simulation and the finite element methods with the software COMSOL Multiphysics®. With the results of the simulation, a new winding-trajectory based wire tension control can be implemented. Therefore, new strategies to control the tensile force during the process using a CAD/CAM approach will be presented in this paper.", "title": "" } ]
scidocsrr
8d85d906841546879e46c8d94a4d52ca
A Generalized Wiener Attack on RSA
[ { "docid": "fe0587c51c4992aa03f28b18f610232f", "text": "We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\\frac{1}{4} \\log_2 N$ bits of P.", "title": "" } ]
[ { "docid": "04cf981a76c74b198ebe4703d0039e36", "text": "The acquisition of high-fidelity, long-term neural recordings in vivo is critically important to advance neuroscience and brain⁻machine interfaces. For decades, rigid materials such as metal microwires and micromachined silicon shanks were used as invasive electrophysiological interfaces to neurons, providing either single or multiple electrode recording sites. Extensive research has revealed that such rigid interfaces suffer from gradual recording quality degradation, in part stemming from tissue damage and the ensuing immune response arising from mechanical mismatch between the probe and brain. The development of \"soft\" neural probes constructed from polymer shanks has been enabled by advancements in microfabrication; this alternative has the potential to mitigate mismatch-related side effects and thus improve the quality of recordings. This review examines soft neural probe materials and their associated microfabrication techniques, the resulting soft neural probes, and their implementation including custom implantation and electrical packaging strategies. The use of soft materials necessitates careful consideration of surgical placement, often requiring the use of additional surgical shuttles or biodegradable coatings that impart temporary stiffness. Investigation of surgical implantation mechanics and histological evidence to support the use of soft probes will be presented. The review concludes with a critical discussion of the remaining technical challenges and future outlook.", "title": "" }, { "docid": "752c61771593e4395856f56690a6f61b", "text": "We conducted a longitudinal study with 32 nonmusician children over 9 months to determine 1) whether functional differences between musician and nonmusician children reflect specific predispositions for music or result from musical training and 2) whether musical training improves nonmusical brain functions such as reading and linguistic pitch processing. Event-related brain potentials were recorded while 8-year-old children performed tasks designed to test the hypothesis that musical training improves pitch processing not only in music but also in speech. Following the first testing sessions nonmusician children were pseudorandomly assigned to music or to painting training for 6 months and were tested again after training using the same tests. After musical (but not painting) training, children showed enhanced reading and pitch discrimination abilities in speech. Remarkably, 6 months of musical training thus suffices to significantly improve behavior and to influence the development of neural processes as reflected in specific pattern of brain waves. These results reveal positive transfer from music to speech and highlight the influence of musical training. Finally, they demonstrate brain plasticity in showing that relatively short periods of training have strong consequences on the functional organization of the children's brain.", "title": "" }, { "docid": "b8797251b01821e69fec564f0b2b91fb", "text": "Spectral clustering enjoys its success in both data clustering and semisupervised learning. But, most spectral clustering algorithms cannot handle multi-class clustering problems directly. Additional strategies are needed to extend spectral clustering algorithms to multi-class clustering problems. Furthermore, most spectral clustering algorithms employ hard cluster membership, which is likely to be trapped by the local optimum. 
In this paper, we present a new spectral clustering algorithm, named “Soft Cut”. It improves the normalized cut algorithm by introducing soft membership, and can be efficiently computed using a bound optimization algorithm. Our experiments with a variety of datasets have shown the promising performance of the proposed clustering algorithm.", "title": "" }, { "docid": "7c6cd51c57eca406e7fe78f7d290045c", "text": "Different types of electric vehicles (EVs) have been recently designed with the aim of solving pollution problems caused by the emission of gasoline-powered engines. Environmental problems promote the adoption of new-generation electric vehicles for urban transportation. As it is well known, one of the weakest points of electric vehicles is the battery system. Vehicle autonomy and, therefore, accurate detection of battery state of charge (SoC) together with battery expected life, i.e., battery state of health, are among the major drawbacks that prevent the introduction of electric vehicles in the consumer market. The electric scooter may provide the most feasible opportunity among EVs. They may be a replacement product for the primary-use vehicle, especially in Europe and Asia, provided that drive performance, safety, and cost issues are similar to actual engine scooters. The battery system choice is a crucial item, and thanks to an increasing emphasis on vehicle range and performance, the Li-ion battery could become a viable candidate. This paper deals with the design of a battery pack based on Li-ion technology for a prototype electric scooter with high performance and autonomy. The adopted battery system is composed of a suitable number of cells series connected, featuring a high voltage level. Therefore, cell equalization and monitoring need to be provided. Due to manufacturing asymmetries, charge and discharge cycles lead to cell unbalancing, reducing battery capacity and, depending on cell type, causing safety troubles or strongly limiting the storage capacity of the full pack. No solution is available on the market at a cheap price, because of the required voltage level and performance, therefore, a dedicated battery management system was designed, that also includes a battery SoC monitoring. The proposed solution features a high capability of energy storing in braking conditions, charge equalization, overvoltage and undervoltage protection and, obviously, SoC information in order to optimize autonomy instead of performance or vice-versa.", "title": "" }, { "docid": "b8cf5e3802308fe941848fea51afddab", "text": "Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. 
We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.", "title": "" }, { "docid": "309080fa2ef4f959951c08527ec1980d", "text": "Complete scene understanding has been an aspiration of computer vision since its very early days. It has applications in autonomous navigation, aerial imaging, surveillance, human-computer interaction among several other active areas of research. While many methods since the advent of deep learning have taken performance in several scene understanding tasks to respectable levels, the tasks are far from being solved. One problem that plagues scene understanding is low-resolution. Convolutional Neural Networks that achieve impressive results on high resolution struggle when confronted with low resolution because of the inability to learn hierarchical features and weakening of signal with depth. In this thesis, we study the low resolution and suggest approaches that can overcome its consequences on three popular tasks object detection, in-the-wild face recognition, and semantic segmentation. The popular object detectors were designed for, trained, and benchmarked on datasets that have a strong bias towards medium and large sized objects. When these methods are finetuned and tested on a dataset of small objects, they perform miserably. The most successful detection algorithms follow a two-stage pipeline: the first which quickly generates regions of interest that are likely to contain the object and the second, which classifies these proposal regions. We aim to adapt both these stages for the case of small objects; the first by modifying anchor box generation based on theoretical considerations, and the second using a simple-yet-effective super-resolution step. Motivated by the success of being able to detect small objects, we study the problem of detecting and recognising objects with huge variations in resolution, in the problem of face recognition in semistructured scenes. Semi-structured scenes like social settings are more challenging than regular ones: there are several more faces of vastly different scales, there are large variations in illumination, pose and expression, and the existing datasets do not capture these variations. We address the unique challenges in this setting by (i) benchmarking popular methods for the problem of face detection, and (ii) proposing a method based on resolution-specific networks to handle different scales. Semantic segmentation is a more challenging localisation task where the goal is to assign a semantic class label to every pixel in the image. Solving such a problem is crucial for self-driving cars where we need sharper boundaries for roads, obstacles and paraphernalia. For want of a higher receptive field and a more global view of the image, CNN networks forgo resolution. This results in poor segmentation of complex boundaries, small and thin objects. We propose prefixing a super-resolution step before semantic segmentation. 
Through experiments, we show that a performance boost can be obtained on the popular streetview segmentation dataset, CityScapes.", "title": "" }, { "docid": "3a3470d13c9c63af1a62ee7bc57a96ef", "text": "Cloud computing is a distributed computing model that still faces problems. New ideas emerge to take advantage of its features and among the research challenges found in the cloud, we can highlight Identity and Access Management. The main problems of the application of access control in the cloud are the necessary flexibility and scalability to support a large number of users and resources in a dynamic and heterogeneous environment, with collaboration and information sharing needs. This paper proposes the use of risk-based dynamic access control for cloud computing. The proposal is presented as an access control model based on an extension of the XACML standard with three new components: the Risk Engine, the Risk Quantification Web Services and the Risk Policies. The risk policies present a method to describe risk metrics and their quantification, using local or remote functions. The risk policies allow users and cloud service providers to define how to handle risk-based access control for their resources, using different quantification and aggregation methods. The model reaches the access decision based on a combination of XACML decisions and risk analysis. A prototype of the model is implemented, showing it has enough expressivity to describe the models of related work. In the experimental results, the prototype takes between 2 and 6 milliseconds to reach access decisions using a risk policy. A discussion on the security aspects of the model is also presented.", "title": "" }, { "docid": "383e88fd5dc669aff5f602f35b319380", "text": "Automatic Turret Gun (ATG) is a weapon system used in numerous combat platforms and vehicles such as in tanks, aircrafts, or stationary ground platforms. ATG plays a big role in both defensive and offensive scenario. It allows combat engagement while the operator of ATG (soldier) covers himself inside a protected control station. On the other hand, ATGs have significant mass and dimension, therefore susceptible to inertial disturbances that need to be compensated to enable the ATG to reach the targeted position quickly and accurately while undergoing disturbances from weapon fire or platform movement. The paper discusses various conventional control method applied in ATG, namely PID controller, RAC, and RACAFC. A number of experiments have been carried out for various range of angle both in azimuth and elevation axis of turret gun. The results show that for an ATG system working under disturbance, RACAFC exhibits greater performance than both RAC and PID, but in experiments without load, equally satisfactory results are obtained from RAC. The exception is for the PID controller, which cannot reach the entire angle given.", "title": "" }, { "docid": "f3bd0da14f446f71d1e1792549227a4e", "text": "Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. 
However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.", "title": "" }, { "docid": "3eee111e4521528031019f83786efab7", "text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.", "title": "" }, { "docid": "3ddcf5f0e4697a0d43eff2cca77a1ab7", "text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.", "title": "" }, { "docid": "e632895c1ab1b994f64ef03260b91acb", "text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Bostrom procedure with an semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. The ankle joint laxity was measured on the plane radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. 
A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and an semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum. The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.", "title": "" }, { "docid": "5459dc71fd40a576365f0afced64b6b7", "text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.", "title": "" }, { "docid": "8057cddc406a90177fda5f3d4ee7c375", "text": "This paper introduces the task of questionanswer driven semantic role labeling (QA-SRL), where question-answer pairs are used to represent predicate-argument structure. 
For example, the verb “introduce” in the previous sentence would be labeled with the questions “What is introduced?”, and “What introduces something?”, each paired with the phrase from the sentence that gives the correct answer. Posing the problem this way allows the questions themselves to define the set of possible roles, without the need for predefined frame or thematic role ontologies. It also allows for scalable data collection by annotators with very little training and no linguistic expertise. We gather data in two domains, newswire text and Wikipedia articles, and introduce simple classifierbased models for predicting which questions to ask and what their answers should be. Our results show that non-expert annotators can produce high quality QA-SRL data, and also establish baseline performance levels for future work on this task.", "title": "" }, { "docid": "f4ea679d2c09107b1313a4795c749ca2", "text": "Math word problems form a natural abstraction to a range of quantitative reasoning problems, such as understanding financial news, sports results, and casualties of war. Solving such problems requires the understanding of several mathematical concepts such as dimensional analysis, subset relationships, etc. In this paper, we develop declarative rules which govern the translation of natural language description of these concepts to math expressions. We then present a framework for incorporating such declarative knowledge into word problem solving. Our method learns to map arithmetic word problem text to math expressions, by learning to select the relevant declarative knowledge for each operation of the solution expression. This provides a way to handle multiple concepts in the same problem while, at the same time, supporting interpretability of the answer expression. Our method models the mapping to declarative knowledge as a latent variable, thus removing the need for expensive annotations. Experimental evaluation suggests that our domain knowledge based solver outperforms all other systems, and that it generalizes better in the realistic case where the training data it is exposed to is biased in a different way than the test data.", "title": "" }, { "docid": "051d402ce90d7d326cc567e228c8411f", "text": "CDM ESD event has become the main ESD reliability concern for integrated-circuits products using nanoscale CMOS technology. A novel CDM ESD protection design, using self-biased current trigger (SBCT) and source pumping, has been proposed and successfully verified in 0.13-lm CMOS technology to achieve 1-kV CDM ESD robustness. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3510bcd9d52729766e2abe2111f8be95", "text": "Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. 
In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar or different to Experiment 1. When metaphors were low familiar, all people read verbs used as metaphors more slowly than verbs used literally (this difference was smaller for high familiar metaphors). Executive control capacity modulated this pattern in that high executive control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for high familiar idioms than literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, and for the particular metaphor and idiom reading manipulations presented.", "title": "" }, { "docid": "d3c11fc96110e1ab0b801a5ba81133e1", "text": "Two experiments comparing user performance on ClearType and Regular displays are reported. In the first, 26 participants scanned a series of spreadsheets for target information. Speed of performance was significantly faster with ClearType. In the second experiment, 25 users read two articles for meaning. Reading speed was significantly faster for ClearType. In both experiments no differences in accuracy of performance or visual fatigue scores were observed. The data also reveal substantial individual differences in performance suggesting ClearType may not be universally beneficial to information workers.", "title": "" }, { "docid": "cd1cfbdae08907e27a4e1c51e0508839", "text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.", "title": "" } ]
scidocsrr
22f2a21ab25e1d20636299564824a389
What you see is what you set: sustained inattentional blindness and the capture of awareness.
[ { "docid": "6362adacc0ee3e7f3cf418e8d8ff0cb9", "text": "Advances in neuroscience implicate reentrant signaling as the predominant form of communication between brain areas. This principle was used in a series of masking experiments that defy explanation by feed-forward theories. The masking occurs when a brief display of target plus mask is continued with the mask alone. Two masking processes were found: an early process affected by physical factors such as adapting luminance and a later process affected by attentional factors such as set size. This later process is called masking by object substitution, because it occurs whenever there is a mismatch between the reentrant visual representation and the ongoing lower level activity. Iterative reentrant processing was formalized in a computational model that provides an excellent fit to the data. The model provides a more comprehensive account of all forms of visual masking than do the long-held feed-forward views based on inhibitory contour interactions.", "title": "" } ]
[ { "docid": "90da5531538f373d7a591d80615d0fb4", "text": "Re-authenticating users may be necessary for smartphone authentication schemes that leverage user behaviour, device context, or task sensitivity. However, due to the unpredictable nature of re-authentication, users may get annoyed when they have to use the default, non-transparent authentication prompt for re-authentication. We address this concern by proposing several re-authentication configurations with varying levels of screen transparency and an optional time delay before displaying the authentication prompt. We conduct user studies with 30 participants to evaluate the usability and security perceptions of these configurations. We find that participants respond positively to our proposed changes and utilize the time delay while they are anticipating to get an authentication prompt to complete their current task. Though our findings indicate no differences in terms of task performance against these configurations, we find that the participants’ preferences for the configurations are context-based. They generally prefer the reauthentication configuration with a non-transparent background for sensitive applications, such as banking and photo apps, while their preferences are inclined towards convenient, usable configurations for medium and low sensitive apps or while they are using their devices at home. We conclude with suggestions to improve the design of our proposed configurations as well as a discussion of guidelines for future implementations of re-authentication schemes.", "title": "" }, { "docid": "66f17513486e4d25c9be36e71aecbbf8", "text": "Fuzz testing is an active testing technique which consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? What kind of anomaly to introduce? Where to observe its effects? etc. Different test contexts depending on the degree of knowledge assumed about the target: recompiling the application (white-box), interacting only at the target interface (blackbox), dynamically instrumenting a binary (grey-box). In this paper, we focus on black-box test contest, and specifically address the questions: How to obtain a notion of coverage on unstructured inputs? How to capture human testers intuitions and use it for the fuzzing? How to drive the search in various directions? We specifically address the problems of detecting Memory Corruption in PDF interpreters and Cross Site Scripting (XSS) in web applications. We detail our approaches which use genetic algorithm, inference and anti-random testing. We empirically evaluate our implementations of XSS fuzzer KameleonFuzz and of PDF fuzzer ShiftMonkey.", "title": "" }, { "docid": "227b995313994032ddeddc3cd4093790", "text": "This paper describes and assesses underwater channel models for optical wireless communication. Models considered are: inherent optical properties; vector radiative transfer theory with the small-angle analytical solution and numerical solutions of the vector radiative transfer equation (Monte Carlo, discrete ordinates and invariant imbedding). Variable composition and refractive index, in addition to background light, are highlighted as aspects of the channel which advanced models must represent effectively. 
Models are assessed against these aspects in terms of their ability to predict transmitted power and the spatial and temporal distributions of light at a specified distance from a transmitter. Monte Carlo numerical methods are found to be the most versatile but are compromised by long computational time and greater errors than other methods.", "title": "" }, { "docid": "88af2cee31243eef4e46e357b053b3ae", "text": "Domestic induction heating (IH) is currently the technology of choice in modern domestic applications due to its advantages regarding fast heating time, efficiency, and improved control. New design trends pursue the implementation of new cost-effective topologies with higher efficiency levels. In order to achieve this aim, a direct ac-ac boost resonant converter is proposed in this paper. The main features of this proposal are the improved efficiency, reduced component count, and proper output power control. A detailed analytical model leading to closed-form expressions of the main magnitudes is presented, and a converter design procedure is proposed. In addition, an experimental prototype has been designed and built to prove the expected converter performance and the accuracy of the analytical model. The experimental results are in good agreement with the analytical ones and prove the feasibility of the proposed converter for the IH application.", "title": "" }, { "docid": "e1f531740891d47387a2fc2ef4f71c46", "text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state-of-the-art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricized tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reorderings and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.", "title": "" }, { "docid": "5015d853665e2642add922290b28b685", "text": "What is CRM? Customer Relationship Management (CRM) appears to be a simple and straightforward concept, but there are many different definitions and implementations of CRM. At present, a number of different conceptual understandings are associated with the term \"Customer Relationship Management\" (CRM). These understandings range from IT-driven programs designed to optimize customer contact to comprehensive approaches for the establishment and design of long-term relationships.
The effort to establish a meaningful relationship with the customer is characteristic of this last understanding (Barnes 2003).", "title": "" }, { "docid": "795f5c1085cbdfccb3457adf003faba1", "text": "Abstract—In this paper, a novel dual-band RF-harvesting RF-DC converter with a frequency limited impedance matching network (M/N) is proposed. The proposed RF-DC converter consists of a dual-band impedance matching network, a rectifier circuit with villard structure, a wideband harmonic suppression low-pass filter (LPF), and a termination load. The proposed dual-band M/N can match two receiving band signals and suppress the out-of-band signals effectively, so the back-scattered nonlinear frequency components from the nonlinear rectifying diodes to the antenna can be blocked. The fabricated circuit provides the maximum RF-DC conversion efficiency of 73.76% and output voltage 7.09 V at 881MHz and 69.05% with 6.86V at 2.4GHz with an individual input signal power of 22 dBm. Moreover, the conversion efficiency of 77.13% and output voltage of 7.25V are obtained when two RF waves with input dual-band signal power of 22 dBm are fed simultaneously.", "title": "" }, { "docid": "41d6fe50d6ef17936d457c801024274f", "text": "In this article, we quantitatively analyze how the term “fake news” is being shaped in news media in recent years. We study the perception and the conceptualization of this term in the traditional media using eight years of data collected from news outlets based in 20 countries. Our results not only corroborate previous indications of a high increase in the usage of the expression “fake news”, but also show contextual changes around this expression after the United States presidential election of 2016. Among other results, we found changes in the related vocabulary, in the mentioned entities, in the surrounding topics and in the contextual polarity around the term “fake news”, suggesting that this expression underwent a change in perception and conceptualization after 2016. These outcomes expand the understandings on the usage of the term “fake news”, helping to comprehend and more accurately characterize this relevant social phenomenon linked to misinformation and manipulation.", "title": "" }, { "docid": "3831c1b7b1679f6e158d6a17e47df122", "text": "Social media platforms provide an inexpensive communication medium that allows anyone to quickly reach millions of users. Consequently, in these platforms anyone can publish content and anyone interested in the content can obtain it, representing a transformative revolution in our society. However, this same potential of social media systems brings together an important challenge---these systems provide space for discourses that are harmful to certain groups of people. This challenge manifests itself with a number of variations, including bullying, offensive content, and hate speech. Specifically, authorities of many countries today are rapidly recognizing hate speech as a serious problem, specially because it is hard to create barriers on the Internet to prevent the dissemination of hate across countries or minorities. In this paper, we provide the first of a kind systematic large scale measurement and analysis study of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech and the most hated groups across regions. 
In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.", "title": "" }, { "docid": "f652e66bbc0e6a1ddaec31f16286a332", "text": "In Rspondin-based 3D cultures, Lgr5 stem cells from multiple organs form ever-expanding epithelial organoids that retain their tissue identity. We report the establishment of tumor organoid cultures from 20 consecutive colorectal carcinoma (CRC) patients. For most, organoids were also generated from adjacent normal tissue. Organoids closely recapitulate several properties of the original tumor. The spectrum of genetic changes within the \"living biobank\" agrees well with previous large-scale mutational analyses of CRC. Gene expression analysis indicates that the major CRC molecular subtypes are represented. Tumor organoids are amenable to high-throughput drug screens allowing detection of gene-drug associations. As an example, a single organoid culture was exquisitely sensitive to Wnt secretion (porcupine) inhibitors and carried a mutation in the negative Wnt feedback regulator RNF43, rather than in APC. Organoid technology may fill the gap between cancer genetics and patient trials, complement cell-line- and xenograft-based drug studies, and allow personalized therapy design. PAPERCLIP.", "title": "" }, { "docid": "567445f68597ea8ff5e89719772819be", "text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.", "title": "" }, { "docid": "6f6ae8ea9237cca449b8053ff5f368e7", "text": "With the rapid development of Location-based Social Network (LBSN) services, a large number of Point-of-Interests (POIs) have been available, which consequently raises a great demand of building personalized POI recommender systems. A personalized POI recommender system can significantly help users to find their preferred POIs and assist POI owners to attract more customers. However, due to the complexity of users’ checkin decision making process that is influenced by many different factors such as POI distance and region’s prosperity, and the dynamics of user’s preference, POI recommender systems usually suffer from many challenges. Although different latent factor based methods (e.g., probabilistic matrix factorization) have been proposed, most of them do not successfully incorporate both geographical influence and temporal effect together into latent factor models. To this end, in this paper, we propose a new Spatial-Temporal Probabilistic Matrix Factorization (STPMF) model that models a user’s preference for POI as the combination of his geographical preference and other general interest in POI. 
Furthermore, in addition to static general interest of user, we capture the temporal dynamics of user’s interest as well by modeling checkin data in a unique way. To evaluate the proposed STPMF model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results clearly demonstrate the effectiveness of our proposed STPMF model.", "title": "" }, { "docid": "d1c88428d398caba2dc9a8f79f84a45f", "text": "In this article, a novel compact reconfigurable antenna based on substrate integrated waveguide (SIW) technology is introduced. The geometry of the proposed antennas is symmetric with respect to the horizontal center line. The electrical shape of the antenna is composed of double H-plane SIW based horn antennas and radio frequency micro electro mechanical system (RF-MEMS) actuators. The RF-MEMS actuators are integrated in the planar structure of the antenna for reconfiguring the radiation pattern by adding nulls to the pattern. The proper activation/deactivation of the switches alters the modes distributed in the structure and changes the radiation pattern. When different combinations of switches are on or off, the radiation patterns have 2, 4, 6, 8, . . . nulls with nearly similar operating frequencies. The attained peak gain of the proposed antenna is higher than 5 dB at any point on the far field radiation pattern except at the null positions. The design procedure and closed form formulation are provided for analytical determination of the antenna parameters. Moreover, the designed antenna with an overall dimensions of only 63.6 × 50 mm2 is fabricated and excited through standard SMA connector and compared with the simulated results. The measured results show that the antenna can clearly alters its beams using the switching components. The proposed antenna retains advantages of low cost, low cross-polarized radiation, and easy integration of configuration.", "title": "" }, { "docid": "901fbd46cdd4403c8398cb21e1c75ba1", "text": "Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMM's for anomaly detection is still in its infancy. This paper aims at classifying the TCP network traffic as an attack or normal using HMM. The paper's main objective is to build an anomaly detection system, a predictive model capable of discriminating between normal and abnormal behavior of network traffic. In the training phase, special attention is given to the initialization and model selection issues, which makes the training phase particularly effective. For training HMM, 12.195% features out of the total features (5 features out of 41 features) present in the KDD Cup 1999 data set are used. Result of tests on the KDD Cup 1999 data set shows that the proposed system is able to classify network traffic in proportion to the number of features used for training HMM. We are extending our work on a larger data set for building an anomaly detection system.", "title": "" }, { "docid": "c82f4117c7c96d0650eff810f539c424", "text": "The Stock Market is known for its volatile and unstable nature. A particular stock could be thriving in one period and declining in the next. Stock traders make money from buying equity when they are at their lowest and selling when they are at their highest. The logical question would be: \"What Causes Stock Prices To Change?\". At the most fundamental level, the answer to this would be the demand and supply. 
In reality, there are many theories as to why stock prices fluctuate, but there is no generic theory that explains all, simply because not all stocks are identical, and one theory that may apply for today, may not necessarily apply for tomorrow. This paper covers various approaches taken to attempt to predict the stock market without extensive prior knowledge or experience in the subject area, highlighting the advantages and limitations of the different techniques such as regression and classification. We formulate both short term and long term predictions. Through experimentation we achieve 81% accuracy for future trend direction using classification, 0.0117 RMSE for next day price and 0.0613 RMSE for next day change in price using regression techniques. The results obtained in this paper are achieved using only historic prices and technical indicators. Various methods, tools and evaluation techniques will be assessed throughout the course of this paper, the result of this contributes as to which techniques will be selected and enhanced in the final artefact of a stock prediction model. Further work will be conducted utilising deep learning techniques to approach the problem. This paper will serve as a preliminary guide to researchers wishing to expose themselves to this area.", "title": "" }, { "docid": "5c3ae59522d549bed4c059a11b9724c6", "text": "The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19-CCL21-CCR7-ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines.", "title": "" }, { "docid": "4e182b30dcbc156e2237e7d1d22d5c93", "text": "A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. 
This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.", "title": "" }, { "docid": "c8b9efec71a72a1d0f0fc7170efba61d", "text": "Microorganisms present in our oral cavity, collectively called the human oral microflora, attach to our tooth surfaces and develop biofilms. In most natural habitats microorganisms generally prevail as multispecies biofilms, and the intercellular interactions and communications among them are the main keys to their endurance. These biofilms are formed by initial attachment of bacteria to a surface, development of a multi-dimensional complex structure, and detachment to progress to other sites. The best example of biofilm formation is dental plaque. Plaque formation can lead to dental caries and other associated diseases causing tooth loss. Many different bacteria are involved in these processes, and one among them is Streptococcus mutans, which is the principal and most important agent. When these infections become severe, the bacterium can enter the bloodstream from the oral cavity during treatment and cause endocarditis. The oral bacterium S. mutans is greatly skilled in its mechanical modes of carbohydrate absorption. It also synthesizes polysaccharides that are present in dental plaque causing caries. As dental caries is a preventable disease, major distinct approaches for its prevention are: carbohydrate diet, sugar substitutes, mechanical cleaning techniques, use of fluorides, antimicrobial agents, fissure sealants, vaccines, probiotics, replacement therapy and dairy products, and at the same time fluorides and casein phosphopeptides are extensively employed for tooth remineralization. The aim of this review article is to put forth the general features of the bacterium S. mutans and how it is involved in certain diseases such as dental plaque, dental caries and endocarditis.", "title": "" }, { "docid": "8077eb57c4232bc7e502f864f659ee7b", "text": "Sex-based differences in immune responses, affecting both the innate and adaptive immune responses, contribute to differences in the pathogenesis of infectious diseases in males and females, the response to viral vaccines and the prevalence of autoimmune diseases. Indeed, females have a lower burden of bacterial, viral and parasitic infections, most evident during their reproductive years. Conversely, females have a higher prevalence of a number of autoimmune diseases, including Sjogren's syndrome, systemic lupus erythematosus (SLE), scleroderma, rheumatoid arthritis (RA) and multiple sclerosis (MS).
These observations suggest that gonadal hormones may have a role in this sex differential. The fundamental differences in the immune systems of males and females are attributed not only to differences in sex hormones, but are related to X chromosome gene contributions and the effects of environmental factors. A comprehensive understanding of the role that sex plays in the immune response is required for therapeutic intervention strategies against infections and the development of appropriate and effective therapies for autoimmune diseases for both males and females. This review will focus on the differences between male and female immune responses in terms of innate and adaptive immunity, and the effects of sex hormones in SLE, MS and RA.", "title": "" }, { "docid": "6ed4d5ae29eef70f5aae76ebed76b8ca", "text": "Web services that thrive on mining user interaction data such as search engines can currently track clicks and mouse cursor activity on their Web pages. Cursor interaction mining has been shown to assist in user modeling and search result relevance, and is becoming another source of rich information that data scientists and search engineers can tap into. Due to the growing popularity of touch-enabled mobile devices, search systems may turn to tracking touch interactions in place of cursor interactions. However, unlike cursor interactions, touch interactions are difficult to record reliably and their coordinates have not been shown to relate to regions of user interest. A better approach may be to track the viewport coordinates instead, which the user must manipulate to view the content on a mobile device. These recorded viewport coordinates can potentially reveal what regions of the page interest users and to what degree. Using this information, search system can then improve the design of their pages or use this information in click models or learning to rank systems. In this position paper, we discuss some of the challenges faced in mining interaction data for new modes of interaction, and future research directions in this field.", "title": "" } ]
scidocsrr
3f85ab24763b17b0e940da68b34bb844
Computational personality traits assessment: A review
[ { "docid": "1378ab6b9a77dba00beb63c27b1addf6", "text": "Whenever we listen to or meet a new person we try to predict personality attributes of the person. Our behavior towards the person is hugely influenced by the predictions we make. Personality is made up of the characteristic patterns of thoughts, feelings and behaviors that make a person unique. Your personality affects your success in the role. Recognizing about yourself and reflecting on your personality can help you to understand how you might shape your future. Various approaches like personality prediction through speech, facial expression, video, and text are proposed in literature to recognize personality. Personality predictions can be made out of one’s handwriting as well. The objective of this paper is to discuss methodology used to identify personality through handwriting analysis and present current state-of-art related to it.", "title": "" }, { "docid": "c0d794e7275e7410998115303bf0cf79", "text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.", "title": "" } ]
[ { "docid": "7ebf04cde2f938787dac4718e768efe1", "text": "With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of This work is supported by National Basic Research Program of China (973 Program Grant No. 2013CB329105), National Natural Science Foundation of China (Grants No. 61301080 and No. 61171065), Chinese National Major Scientific and Technological Specialized Project (No. 2013ZX03002001), Chinas Next Generation Internet (No. CNGI-12-03-007), and ZTE Corporation. M. Yang School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, P. R. China E-mail: [email protected] Y. Li · D. Jin · L. Zeng Department of Electronic Engineering, Tsinghua University, Beijing 100084, P. R. China Y. Li E-mail: [email protected] D. Jin, L. Zeng E-mail: {jindp, zenglg}@mail.tsinghua.edu.cn Xin Wu Big Switch, USA E-mail: [email protected] A. V. Vasilakos Department of Computer and Telecommunications Engineering,University of Western Macedonia, Greece Electrical and Computer Engineering, National Technical University of Athens (NTUA), Greece E-mail: [email protected] MWN and significantly benefit the future mobile and wireless network.", "title": "" }, { "docid": "e708fc43b5ac8abf8cc2707195e8a45e", "text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.", "title": "" }, { "docid": "ac1f2a1a96ab424d9b69276efd4f1ed4", "text": "This paper describes various systems from the University of Minnesota, Duluth that participated in the CLPsych 2015 shared task. These systems learned decision lists based on lexical features found in training data. 
These systems typically had average precision in the range of .70 – .76, whereas a random baseline attained .47 – .49.", "title": "" }, { "docid": "19e09b1c0eb3646e5ae6484524f82e10", "text": "Results from 12 switchback field trials involving 1216 cows were combined to assess the effects of a protected B vitamin blend (BVB) upon milk yield (kg), fat percentage (%), protein %, fat yield (kg) and protein yield (kg) in primiparous and multiparous cows. Trials consisted of 3 test periods executed in the order control-test-control. No diet changes other than the inclusion of 3 grams/cow/ day of the BVB during the test period occurred. Means from the two control periods were compared to results obtained during the test period using a paired T test. Cows include in the analysis were between 45 and 300 days in milk (DIM) at the start of the experiment and were continuously available for all periods. The provision of the BVB resulted in increased (P < 0.05) milk, fat %, protein %, fat yield and protein yield. Regression models showed that the amount of milk produced had no effect upon the magnitude of the increase in milk components. The increase in milk was greatest in early lactation and declined with DIM. Protein and fat % increased with DIM in mature cows, but not in first lactation cows. Differences in fat yields between test and control feeding periods did not change with DIM, but the improvement in protein yield in mature cows declined with DIM. These results indicate that the BVB provided economically important advantages throughout lactation, but expected results would vary with cow age and stage of lactation.", "title": "" }, { "docid": "66c218bddb0bce210f8e0efa7bb457a7", "text": "The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.", "title": "" }, { "docid": "7a055093ac92c7d2fa7aa8dcbe47a8b8", "text": "In this paper, we present the design process of a smart bracelet that aims at enhancing the life of elderly people. The bracelet acts as a personal assistant during the user's everyday life, monitoring the health status and alerting him or her about abnormal conditions, reminding medications and facilitating the everyday life in many outdoor and indoor activities.", "title": "" }, { "docid": "c7a32821699ebafadb4c59e99fb3aa9e", "text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. 
SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumination (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to reduce crosstalk is to introduce complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in a pixel as small as 1.12μm, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also intrudes into the pixel, so that it is inefficient in terms of gathering incident light and providing a sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near-1.0μm pitch because the DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small pixel near 1.0μm.", "title": "" }, { "docid": "60094e041c1be864ba8a636308b7ee12", "text": "This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Dialogue Diversity Corpus to retrain a chatbot system with human dialogue examples. A Java program to convert from dialog transcript to AIML format provides a basic implementation of corpus-based chatbot training. We conclude that dialogue researchers should adopt clearer standards for transcription and markup format in dialogue corpora to be used in training a chatbot system more effectively.", "title": "" }, { "docid": "5591d4842507a097e353c67c7d56262d", "text": "Reasoning about entities and their relationships from multimodal data is a key goal of Artificial General Intelligence. The visual question answering (VQA) problem is an excellent way to test such reasoning capabilities of an AI model and its multimodal representation learning. However, the current VQA models are oversimplified deep neural networks, comprised of a long short-term memory (LSTM) unit for question comprehension and a convolutional neural network (CNN) for learning single image representation. We argue that the single visual representation contains limited and general information about the image contents and thus limits the model's reasoning capabilities. In this work we introduce a modular neural network model that learns a multimodal and multifaceted representation of the image and the question.
The proposed model learns to use the multimodal representation to reason about the image entities and achieves a new state-of-the-art performance on both VQA benchmark datasets, VQA v1.0 and v2.0, by a wide margin.", "title": "" }, { "docid": "ce5fc5fbb3cb0fb6e65ca530bfc097b1", "text": "The Bulgarian electricity market rules require from the transmission system operator, to procure electricity for covering transmission grid losses on hourly base before day-ahead gate closure. In this paper is presented a software solution for day-ahead forecasting of hourly transmission losses that is based on statistical approach of the impacting factors correlations and uses as inputs numerical weather predictions.", "title": "" }, { "docid": "8e2006ca72dbc6be6592e21418b7f3ba", "text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.", "title": "" }, { "docid": "0bc0e621c58a79a7455f0849ccf41a02", "text": "With the adoption of power electronic converters in shipboard power systems and associated novel fault management concepts, the ability to isolate electric faults quickly from the power system is becoming more important than breaking high magnitude fault currents and the corresponding arcing between opening contacts within a switch. This allows for the design of substantially faster, as well as potentially lighter and more compact, mechanical disconnect switches. Herein, we are proposing a new class of mechanical disconnect switches that utilize piezoelectric actuators to isolate within less than one millisecond. This technology may become a key enabler for future all-electric ships.", "title": "" }, { "docid": "14fb71b01f86008f0772eabd52ea747a", "text": "This paper introduces a positioning system for walking persons, called \"Personal Dead-reckoning\" (PDR) system. The PDR system does not require GPS, beacons, or landmarks. The system is therefore useful in GPS-denied environments, such as inside buildings, tunnels, or dense forests. Potential users of the system are military and security personnel as well as emergency responders. The PDR system uses a 6-DOF inertial measurement unit (IMU) attached to the user's boot. The IMU provides rate-of-rotation and acceleration measurements that are used in real-time to estimate the location of the user relative to a known starting point. In order to reduce the most significant errors of this IMU-based system-caused by the bias drift of the accelerometers-we implemented a technique known as \"Zero Velocity Update\" (ZUPT). With the ZUPT technique and related signal processing algorithms, typical errors of our system are about 2% of distance traveled for short walks. This typical PDR system error is largely independent of the gait or speed of the user. 
When walking continuously for several minutes, the error increases gradually beyond 2%. The PDR system works in both 2-dimensional (2-D) and 3-D environments, although errors in Z-direction are usually larger than 2% of distance traveled. Earlier versions of our system used an unpractically large IMU. In the most recent version we implemented a much smaller IMU. This paper discussed specific problems of this small IMU, our measures for eliminating these problems, and our first experimental results with the small IMU under different conditions.", "title": "" }, { "docid": "d041a5fc5f788b1abd8abf35a26cb5d2", "text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.", "title": "" }, { "docid": "602077b20a691854102946757da4b287", "text": "For three-dimensional (3D) ultrasound imaging, connecting elements of a two-dimensional (2D) transducer array to the imaging system's front-end electronics is a challenge because of the large number of array elements and the small element size. To compactly connect the transducer array with electronics, we flip-chip bond a 2D 16 times 16-element capacitive micromachined ultrasonic transducer (CMUT) array to a custom-designed integrated circuit (IC). Through-wafer interconnects are used to connect the CMUT elements on the top side of the array with flip-chip bond pads on the back side. The IC provides a 25-V pulser and a transimpedance preamplifier to each element of the array. For each of three characterized devices, the element yield is excellent (99 to 100% of the elements are functional). Center frequencies range from 2.6 MHz to 5.1 MHz. For pulse-echo operation, the average -6-dB fractional bandwidth is as high as 125%. Transmit pressures normalized to the face of the transducer are as high as 339 kPa and input-referred receiver noise is typically 1.2 to 2.1 rnPa/ radicHz. The flip-chip bonded devices were used to acquire 3D synthetic aperture images of a wire-target phantom. Combining the transducer array and IC, as shown in this paper, allows for better utilization of large arrays, improves receive sensitivity, and may lead to new imaging techniques that depend on transducer arrays that are closely coupled to IC electronics.", "title": "" }, { "docid": "427c5f5825ca06350986a311957c6322", "text": "Machine learning based system are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicle, taking investment decisions, detecting and blocking network intrusion and malware etc. 
However, recent research has shown that machine learning models are vulnerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by carefully crafted inputs that cause them to classify inputs wrongly. Maliciously created input samples can affect the learning process of an ML system by slowing the learning process, degrading the performance of the learned model, or causing the system to make errors only in the attacker's planned scenario. Because of these developments, understanding the security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.", "title": "" }, { "docid": "b7ca3a123963bb2f0bfbe586b3bc63d0", "text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of the symptoms expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed a mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.", "title": "" }, { "docid": "6ab5678d7f4bcb0d686ca3f384381134", "text": "We present a TTS neural network that is able to produce speech in multiple languages. The proposed network is able to transfer a voice, which was presented as a sample in a source language, into one of several target languages. Training is done without using matching or parallel data, i.e., without samples of the same speaker in multiple languages, making the method much more applicable. The conversion is based on learning a polyglot network that has multiple per-language sub-networks and adding loss terms that preserve the speaker's identity in multiple languages.
We evaluate the proposed polyglot neural network for three languages with a total of more than 400 speakers and demonstrate convincing conversion capabilities.", "title": "" }, { "docid": "e2f2961ab8c527914c3d23f8aa03e4bf", "text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.", "title": "" }, { "docid": "796625110c6e97f4ff834cfe04c784fe", "text": "This paper addresses the large-scale visual font recognition (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content. Although visual font recognition has many practical applications, it has largely been neglected by the vision community. To address the VFR problem, we construct a large-scale dataset containing 2,420 font classes, which easily exceeds the scale of most image categorization datasets in computer vision. As font recognition is inherently dynamic and open-ended, i.e., new classes and data for existing categories are constantly added to the database over time, we propose a scalable solution based on the nearest class mean classifier (NCM). The core algorithm is built on local feature embedding, local feature metric learning and max-margin template selection, which is naturally amenable to NCM and thus to such open-ended classification problems. The new algorithm can generalize to new classes and new data at little added cost. Extensive experiments demonstrate that our approach is very effective on our synthetic test images, and achieves promising results on real world test images.", "title": "" } ]
scidocsrr
1bc5bd619547727ba6793016cfa593c0
Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features
[ { "docid": "a5306ca9a50e82e07d487d1ac7603074", "text": "Many modern visual recognition algorithms incorporate a step of spatial ‘pooling’, where the outputs of several nearby feature detectors are combined into a local or global ‘bag of features’, in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.", "title": "" } ]
[ { "docid": "cf751df3c52306a106fcd00eef28b1a4", "text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.", "title": "" }, { "docid": "731df77ded13276e7bdb9f67474f3810", "text": "Given a graph <i>G</i> = (<i>V,E</i>) and positive integral vertex weights <i>w</i> : <i>V</i> → N, the <i>max-coloring problem</i> seeks to find a proper vertex coloring of <i>G</i> whose color classes <i>C</i><inf>1,</inf> <i>C</i><inf>2,</inf>...,<i>C</i><inf><i>k</i></inf>, minimize Σ<sup><i>k</i></sup><inf><i>i</i> = 1</inf> <i>max</i><inf>ν∈<i>C</i><inf>i</inf></inf><i>w</i>(ν). This problem, restricted to interval graphs, arises whenever there is a need to design dedicated memory managers that provide better performance than the general purpose memory management of the operating system. Specifically, companies have tried to solve this problem in the design of memory managers for wireless protocol stacks such as GPRS or 3G.Though this problem seems similar to the wellknown dynamic storage allocation problem, we point out fundamental differences. We make a connection between max-coloring and on-line graph coloring and use this to devise a simple 2-approximation algorithm for max-coloring on interval graphs. We also show that a simple first-fit strategy, that is a natural choice for this problem, yields a 10-approximation algorithm. We show this result by proving that the first-fit algorithm for on-line coloring an interval graph <i>G</i> uses no more than 10.<i>x</i>(<i>G</i>) colors, significantly improving the bound of 26.<i>x</i>(<i>G</i>) by Kierstead and Qin (<i>Discrete Math.</i>, 144, 1995). We also show that the max-coloring problem is NP-hard.", "title": "" }, { "docid": "b52580bfad9621a1b0537ceed0c912c0", "text": "Partial discharge (PD) detection is an effective method for finding insulation defects in HV and EHV power cables. PD apparent charge is typically expressed in picocoulombs (pC) when the calibration procedure defined in IEC 60270 is applied during off-line tests. During on-line PD detection, measured signals are usually denoted in mV or dB without transforming the measured signal into a charge quantity. For AC XLPE power cable systems, on-line PD detection is conducted primarily with the use of high frequency current transformer (HFCT). The HFCT is clamped around the cross-bonding link of the joint or the grounding wire of termination. In on-line occasion, PD calibration is impossible from the termination. A novel on-line calibration method using HFCT is introduced in this paper. To eliminate the influence of cross-bonding links, the interrupted cable sheath at the joint was reconnected via the high-pass C-arm connector. The calibration signal was injected into the cable system via inductive coupling through the cable sheath. The distributed transmission line equivalent circuit of the cable was used in consideration of the signal attenuation. 
Both the conventional terminal calibration method and the proposed on-line calibration method were performed on the coaxial cable model loop for experimental verification. The amplitude and polarity of signals that propagate in the cable sheath and the conductor were evaluated. The results indicate that the proposed method can calibrate the measured signal during power cable on-line PD detection.", "title": "" }, { "docid": "8f4576ba6c0d7f54568c21db4a68fb56", "text": "Although speech dysfluencies have been hypothesized to be associated with abnormal function of dopaminergic system, the effects of dopaminergic medication on speech fluency in Parkinson’s disease (PD) have not been systematically studied. The aim of the present study was, therefore, to investigate the long-term effect of dopaminergic medication on speech fluency in PD. Fourteen de novo PD patients with no history of developmental stuttering and 14 age- and sex-matched healthy controls (HC) were recruited. PD subjects were examined three times; before the initiation of dopaminergic treatment and twice in following 6 years. The percentage of dysfluent words was calculated from reading passage and monolog. The amount of medication was expressed by cumulative doses of l-dopa equivalent. After 3–6 years of dopaminergic therapy, PD patients exhibited significantly more dysfluent events compared to healthy subjects as well as to their own speech performance before the introduction of dopaminergic therapy (p < 0.05). In addition, we found a strong positive correlation between the increased occurrence of dysfluent words and the total cumulative dose of l-dopa equivalent (r = 0.75, p = 0.002). Our findings indicate an adverse effect of prolonged dopaminergic therapy contributing to the development of stuttering-like dysfluencies in PD. These findings may have important implication in clinical practice, where speech fluency should be taken into account to optimize dopaminergic therapy.", "title": "" }, { "docid": "07bb0aec18894ae389eea9e2756443f8", "text": "Generative Adversarial Networks (GANs) and their extensions have carved open many exciting ways to tackle well known and challenging medical image analysis problems such as medical image denoising, reconstruction, segmentation, data simulation, detection or classification. Furthermore, their ability to synthesize images at unprecedented levels of realism also gives hope that the chronic scarcity of labeled data in the medical field can be resolved with the help of these generative models. In this review paper, a broad overview of recent literature on GANs for medical applications is given, the shortcomings and opportunities of the proposed methods are thoroughly discussed and potential future work is elaborated. A total of 63 papers published until end of July 2018 are reviewed. For quick access, the papers and important details such as the underlying method, datasets and performance are summarized in tables.", "title": "" }, { "docid": "fc5ddec1d157724c245e25558741acdb", "text": "Emotions involve physiological responses that are regulated by the brain. The present paper reviews the empirical literature on central nervous system (CNS) and autonomic nervous system (ANS) concomitants of emotional states, with a focus on studies that simultaneously assessed CNS and ANS activity. 
The reviewed data support two primary conclusions: (1) numerous cortical and subcortical regions show co-occurring activity with ANS responses in emotion, and (2) there may be reversed asymmetries on cortical and subcortical levels with respect to CNS/ANS interrelations. These observations are interpreted in terms of a model of neurovisceral integration in emotion, and directions for future research are presented.", "title": "" }, { "docid": "5cd70dede0014f4a58c0dc8460ba8513", "text": "In this paper the Model Predictive Control (MPC) strategy is used to solve the mobile robot trajectory tracking problem, where controller must ensure that robot follows pre-calculated trajectory. The so-called explicit optimal controller design and implementation are described. The MPC solution is calculated off-line and expressed as a piecewise affine function of the current state of a mobile robot. A linearized kinematic model of a differential drive mobile robot is used for the controller design purpose. The optimal controller, which has a form of a look-up table, is tested in simulation and experimentally.", "title": "" }, { "docid": "f831b51b6e9f635efca9d3d20e6e144d", "text": "Drones equipped with cameras are emerging as a powerful tool for large-scale aerial 3D scanning, but existing automatic flight planners do not exploit all available information about the scene, and can therefore produce inaccurate and incomplete 3D models. We present an automatic method to generate drone trajectories, such that the imagery acquired during the flight will later produce a high-fidelity 3D model. Our method uses a coarse estimate of the scene geometry to plan camera trajectories that: (1) cover the scene as thoroughly as possible; (2) encourage observations of scene geometry from a diverse set of viewing angles; (3) avoid obstacles; and (4) respect a user-specified flight time budget. Our method relies on a mathematical model of scene coverage that exhibits an intuitive diminishing returns property known as submodularity. We leverage this property extensively to design a trajectory planning algorithm that reasons globally about the non-additive coverage reward obtained across a trajectory, jointly with the cost of traveling between views. We evaluate our method by using it to scan three large outdoor scenes, and we perform a quantitative evaluation using a photorealistic video game simulator.", "title": "" }, { "docid": "11cfe05879004f225aee4b3bda0ce30b", "text": "Data mining system contain large amount of private and sensitive data such as healthcare, financial and criminal records. These private and sensitive data can not be share to every one, so privacy protection of data is required in data mining system for avoiding privacy leakage of data. Data perturbation is one of the best methods for privacy preserving. We used data perturbation method for preserving privacy as well as accuracy. In this method individual data value are distorted before data mining application. In this paper we present min max normalization transformation based data perturbation. The privacy parameters are used for measurement of privacy protection and the utility measure shows the performance of data mining technique after data distortion. 
We performed experiment on real life dataset and the result show that min max normalization transformation based data perturbation method is effective to protect confidential information and also maintain the performance of data mining technique after data distortion.", "title": "" }, { "docid": "a7c661ce625c60ef7a1ff498795b9020", "text": "Median filtering technique is often used to remove additive white, salt and pepper noise from a signal or a source image. This filtering method is essential for the processing of digital data representing analog signals in real time. The median filter considers each pixel in the image in turn and looks at its nearby neighbors to determine whether or not it is representative of its surroundings. It replaces the pixel value with the median of neighboring pixel values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value. We have used graphics processing units (GPUs) to implement the post-processing, performed by NVIDIA Compute Unified Device Architecture (CUDA). Such a system is faster than the CPU version, or other traditional computing, for processing medical applications such as echography or Doppler. This paper shows the effect of the Median Filtering and a comparison of the performance of the CPU and GPU in terms of response time.", "title": "" }, { "docid": "79f4951b91c222585abe7452c2a61625", "text": "This article presents a Hoare-style calculus for a substantial subset of Java Card, which we call Java . In particular, the language includes side-effecting expressions, mutual recursion, dynamic method binding, full exception handling, and static class initialization. The Hoare logic of partial correctness is proved not only sound (w.r.t. our operational semantics of Java, described in detail elsewhere) but even complete. It is the first logic for an object-oriented language that is provably complete. The completeness proof uses a refinement of the Most General Formula approach. The proof of soundness gives new insights into the role of type safety. Further by-products of this work are a new general methodology for handling side-effecting expressions and their results, the discovery of the strongest possible rule of consequence, and a flexible Call rule for mutual recursion. We also give a small but non-trivial application example. All definitions and proofs have been done formally with the interactive theorem prover Isabelle/HOL. This guarantees not only rigorous definitions, but also gives maximal confidence in the results obtained.", "title": "" }, { "docid": "d1e055bcdcbf6fe964e45f41efc95504", "text": "Microcycle conidiation is a survival mechanism of fungi encountering unfavorable conditions. In this phenomenon, asexual spores germinate secondary spores directly without formation of mycelium. As Penicillium camemberti conidia have the ability to produce conidiophores after germination in liquid culture induced by a thermal stress (18 and 30 °C), our work has aimed at producing conidia through this mean. Incubation at 18 and 30 °C increased the swelling of conidia and their proportion thereby producing conidiophores. Our results showed that the microcycle of conidiation can produce 5 × 108 conidia ml−1 after 7 days at 18 °C of culture. The activity of these conidia was checked through culture on a solid medium. 
Conidia produced by microcycle conidiation formed a normal mycelium on the surface of solid media and 25 % could still germinate after 5 months of storage.", "title": "" }, { "docid": "f88adc9a6332a24a855346dfb3c6db08", "text": "In the last few years, obfuscation has been used more and more by spammers to make spam emails bypass filters. The standard method is to use images that look like text, since typical spam filters are unable to parse such messages; this is what is used in so-called \"rock phishing\". To fight image-based spam, many spam filters use heuristic rules in which emails containing images are flagged, and since not many legit emails are composed mainly of a big image, this aids in detecting image-based spam. The spammers are thus interested in circumventing these methods. Unicode transliteration is a convenient tool for spammers, since it allows a spammer to create a large number of homomorphic clones of the same looking message; since Unicode contains many characters that are unique but appear very similar, spammers can translate a message's characters at random to hide black-listed words in an effort to bypass filters. In order to defend against these unicode-obfuscated spam emails, we developed a prototype tool that can be used with Spam Assassin to block spam obfuscated in this way by mapping polymorphic messages to a common, more homogeneous representation. This representation can then be filtered using traditional methods. We demonstrate the ease with which Unicode polymorphism can be used to circumvent spam filters such as SpamAssassin, and then describe a de-obfuscation technique that can be used to catch messages that have been obfuscated in this fashion.", "title": "" }, { "docid": "7908cc9a1cd6e6f48258a300db37d4a5", "text": "This report describes the algorithms implemented in a Matlab toolbox for change detection and data segmentation. Functions are provided for simulating changes, choosing design parameters and detecting abrupt changes in signals.", "title": "" }, { "docid": "441f80a25e7a18760425be5af1ab981d", "text": "This paper proposes efficient algorithms for group sparse optimization with mixed `2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The `2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional `1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the `2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. 
Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.", "title": "" }, { "docid": "7f27e9b29e6ed2800ef850e6022d29ba", "text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.", "title": "" }, { "docid": "8b0a09cbac4b1cbf027579ece3dea9ef", "text": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.", "title": "" }, { "docid": "cd449faa3508b96cd827647de9f9c0cb", "text": "Living with unrelenting pain (chronic pain) is maladaptive and is thought to be associated with physiological and psychological modifications, yet there is a lack of knowledge regarding brain elements involved in such conditions. Here, we identify brain regions involved in spontaneous pain of chronic back pain (CBP) in two separate groups of patients (n = 13 and n = 11), and contrast brain activity between spontaneous pain and thermal pain (CBP and healthy subjects, n = 11 each). Continuous ratings of fluctuations of spontaneous pain during functional magnetic resonance imaging were separated into two components: high sustained pain and increasing pain. Sustained high pain of CBP resulted in increased activity in the medial prefrontal cortex (mPFC; including rostral anterior cingulate). This mPFC activity was strongly related to intensity of CBP, and the region is known to be involved in negative emotions, response conflict, and detection of unfavorable outcomes, especially in relation to the self. In contrast, the increasing phase of CBP transiently activated brain regions commonly observed for acute pain, best exemplified by the insula, which tightly reflected duration of CBP. When spontaneous pain of CBP was contrasted to thermal stimulation, we observe a double-dissociation between mPFC and insula with the former correlating only to intensity of spontaneous pain and the latter correlating only to pain intensity for thermal stimulation. 
These findings suggest that subjective spontaneous pain of CBP involves specific spatiotemporal neuronal mechanisms, distinct from those observed for acute experimental pain, implicating a salient role for emotional brain concerning the self.", "title": "" }, { "docid": "f8ddedb1bdc57d75fb5ea9bf81ec51f5", "text": "Given a text description, most existing semantic parsers synthesize a program in one shot. However, it is quite challenging to produce a correct program solely based on the description, which in reality is often ambiguous or incomplete. In this paper, we investigate interactive semantic parsing, where the agent can ask the user clarification questions to resolve ambiguities via a multi-turn dialogue, on an important type of programs called “If-Then recipes.” We develop a hierarchical reinforcement learning (HRL) based agent that significantly improves the parsing performance with minimal questions to the user. Results under both simulation and human evaluation show that our agent substantially outperforms non-interactive semantic parsers and rule-based agents.", "title": "" }, { "docid": "750a5ab91b823f4043a988cb25a3f00c", "text": "Do informational deficits on the part of voters sustain poor quality of governance in low income countries? We provide experimental evidence on the role of public disclosures on candidate quality and incumbent performance in enhancing electoral accountability. Slum dwellers who were randomly exposed to newspaper report cards on politician performance responded by increasing turnout and rewarding incumbents who spent more in slums and attended fair price shop oversight committee meetings. We also find evidence of yardstick competition – incumbent’s vote share is sensitive to the wealth and education qualifications of his challengers. ∗The authors are from MIT (Banerjee), Yale (Kumar) and Harvard University (Pande and Su). We thank our partners Satark Nagrik Sangathan, Delhi NGO Network and Hindustan times and especially Anjali Bharadwaj, Amrita Johri and Mrinal Pande for enabling this study and Shobhini Mukherji for providing field oversight.We also thank Hewlett Foundation for financial support, and Tim Besley and Esther Duflo for helpful comments.", "title": "" } ]
scidocsrr
8c7fc170a90b0706a01e81269d1c4266
An Analysis Matrix for the Assessment of Smart City Technologies: Main Results of Its Application
[ { "docid": "b99e81f6f2a50fc7c5b09f194fc51f49", "text": "The current digital revolution has ignited the evolution of communications grids and the development of new schemes for productive systems. Traditional technologic scenarios have been challenged, and Smart Cities have become the basis for urban competitiveness. The citizen is the one who has the power to set new scenarios, and that is why a definition of the way people interact with their cities is needed, as is commented in the first part of the article. At the same time, a lack of clarity has been detected in the way of describing what Smart Cities are, and the second part will try to set the basis for that. For all before, the information and communication technologies that manage and transform 21st century cities must be reviewed, analyzing their impact on new social behaviors that shape the spaces and means of communication, as is posed in the experimental section, setting the basis for an analysis matrix to score the different elements that affect a Smart City environment. So, as the better way to evaluate what a Smart City is, there is a need for a tool to score the different technologies on the basis of their usefulness and consequences, considering the impact of each application. For all of that, the final section describes the main objective of this article in practical scenarios, considering how the technologies are used by citizens, who must be the main concern of all urban development.", "title": "" } ]
[ { "docid": "9c61ac11d2804323ba44ed91d05a0e46", "text": "Nostalgia fulfills pivotal functions for individuals, but lacks an empirically derived and comprehensive definition. We examined lay conceptions of nostalgia using a prototype approach. In Study 1, participants generated open-ended features of nostalgia, which were coded into categories. In Study 2, participants rated the centrality of these categories, which were subsequently classified as central (e.g., memories, relationships, happiness) or peripheral (e.g., daydreaming, regret, loneliness). Central (as compared with peripheral) features were more often recalled and falsely recognized (Study 3), were classified more quickly (Study 4), were judged to reflect more nostalgia in a vignette (Study 5), better characterized participants' own nostalgic (vs. ordinary) experiences (Study 6), and prompted higher levels of actual nostalgia and its intrapersonal benefits when used to trigger a personal memory, regardless of age (Study 7). These findings highlight that lay people view nostalgia as a self-relevant and social blended emotional and cognitive state, featuring a mixture of happiness and loss. The findings also aid understanding of nostalgia's functions and identify new methods for future research.", "title": "" }, { "docid": "73252fdecc2a01699bdadb4962b4b376", "text": "Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D motion from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.", "title": "" }, { "docid": "b6205296bd71b938b71224121ca0c8b4", "text": "INTRODUCTION People make judgments about the world around them. They harbor positive and negative attitudes about people, organizations, places, events, and ideas. We regard these types of attitudes as sentiments. Sentiments are private states,1 cognitive phenomena that are not directly observable by others. However, expressions of sentiment can be manifested in actions, including written and spoken language. Sentiment analysis is the study of automated techniques for extracting sentiment from written language. This has been a very active area entiment analysis—the automated extraction of expressions of positive or negative attitudes from text—has received considerable attention from researchers during the past 10 years. 
During the same period, the widespread growth of social media has resulted in an explosion of publicly available, user-generated text on the World Wide Web. These data can potentially be utilized to provide real-time insights into the aggregated sentiments of people. The tools provided by statistical natural language processing and machine learning, along with exciting new scalable approaches to working with large volumes of text, make it possible to begin extracting sentiments from the web. We discuss some of the challenges of sentiment extraction and some of the approaches employed to address these challenges. In particular, we describe work we have done to annotate sentiment in blogs at the levels of sentences and subsentences (clauses); to classify subjectivity at the level of sentences; and to identify the targets, or topics, of sentiment at the level of clauses.", "title": "" }, { "docid": "3fd2f7e4d0d0460fda7f7e947e45d9d9", "text": "Because of the complexity of the hospital environment, there exist a lot of medical information systems from different vendors with incompatible structures. In order to establish an enterprise hospital information system, the integration among these heterogeneous systems must be considered. Complete integration should cover three aspects: data integration, function integration and workflow integration. However most of the previous design of architecture did not accomplish such a complete integration. This article offers an architecture design of the enterprise hospital information system based on the concept of digital neural network system in hospital. It covers all three aspects of integration, and eventually achieves the target of one virtual data center with Enterprise Viewer for users of different roles. The initial implementation of the architecture in the 5-year Digital Hospital Project in Huzhou Central hospital of Zhejiang Province is also described", "title": "" }, { "docid": "84a9af22a0fa5a755b750ddf914360f9", "text": "Pancreatic cancer has one of the worst survival rates amongst all forms of cancer because its symptoms manifest later into the progression of the disease. One of those symptoms is jaundice, the yellow discoloration of the skin and sclera due to the buildup of bilirubin in the blood. Jaundice is only recognizable to the naked eye in severe stages, but a ubiquitous test using computer vision and machine learning can detect milder forms of jaundice. We propose BiliScreen, a smartphone app that captures pictures of the eye and produces an estimate of a person's bilirubin level, even at levels normally undetectable by the human eye. We test two low-cost accessories that reduce the effects of external lighting: (1) a 3D-printed box that controls the eyes' exposure to light and (2) paper glasses with colored squares for calibration. In a 70-person clinical study, we found that BiliScreen with the box achieves a Pearson correlation coefficient of 0.89 and a mean error of -0.09 ± 2.76 mg/dl in predicting a person's bilirubin level. As a screening tool, BiliScreen identifies cases of concern with a sensitivity of 89.7% and a specificity of 96.8% with the box accessory.", "title": "" }, { "docid": "f7c92b4342944a1f937f19b144a61d8a", "text": "Randomization in randomized controlled trials involves more than generation of a random sequence by which to assign subjects. 
For randomization to be successfully implemented, the randomization sequence must be adequately protected (concealed) so that investigators, involved health care providers, and subjects are not aware of the upcoming assignment. The absence of adequate allocation concealment can lead to selection bias, one of the very problems that randomization was supposed to eliminate. Authors of reports of randomized trials should provide enough details on how allocation concealment was achieved so the reader can determine the likelihood of success. Fortunately, a plan of allocation concealment can always be incorporated into the design of a randomized trial. Certain methods minimize the risk of concealment failing more than others. Keeping knowledge of subjects' assignment after allocation from subjects, investigators/health care providers, or those assessing outcomes is referred to as masking (also known as blinding). The goal of masking is to prevent ascertainment bias. In contrast to allocation concealment, masking cannot always be incorporated into a randomized controlled trial. Both allocation concealment and masking add to the elimination of bias in randomized controlled trials.", "title": "" }, { "docid": "447c5b2db5b1d7555cba2430c6d73a35", "text": "Recent years have seen a proliferation of complex Advanced Driver Assistance Systems (ADAS), in particular, for use in autonomous cars. These systems consist of sensors and cameras as well as image processing and decision support software components. They are meant to help drivers by providing proper warnings or by preventing dangerous situations. In this paper, we focus on the problem of design time testing of ADAS in a simulated environment. We provide a testing approach for ADAS by combining multi-objective search with surrogate models developed based on neural networks. We use multi-objective search to guide testing towards the most critical behaviors of ADAS. Surrogate modeling enables our testing approach to explore a larger part of the input search space within limited computational resources. We characterize the condition under which the multi-objective search algorithm behaves the same with and without surrogate modeling, thus showing the accuracy of our approach. We evaluate our approach by applying it to an industrial ADAS system. Our experiment shows that our approach automatically identifies test cases indicating critical ADAS behaviors. Further, we show that combining our search algorithm with surrogate modeling improves the quality of the generated test cases, especially under tight and realistic computational resources.", "title": "" }, { "docid": "9e7fc71def2afc58025ff5e0198148d0", "text": "BACKGROUD\nWith the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasingly interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects.\n\n\nNEW METHOD\nWe have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs are represented as color coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. 
Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source) can reveal aspects of the multifold complexities of trial-to-trial EEG data variability.\n\n\nRESULTS\nThis study demonstrates new methods for computing and visualizing 'grand' ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute.", "title": "" }, { "docid": "646d097ef0b299c0f591448fd842103e", "text": "Research on brain–machine interfaces has been ongoing for at least a decade. During this period, simultaneous recordings of the extracellular electrical activity of hundreds of individual neurons have been used for direct, real-time control of various artificial devices. Brain–machine interfaces have also added greatly to our knowledge of the fundamental physiological principles governing the operation of large neural ensembles. Further understanding of these principles is likely to have a key role in the future development of neuroprosthetics for restoring mobility in severely paralysed patients.", "title": "" }, { "docid": "934dabd76c1442e091e4eb932ef5f07a", "text": "Cyber-physical systems tightly integrate physical processes and information and communication technologies. As today’s critical infrastructures, e.g., the power grid or water distribution networks, are complex cyber-physical systems, ensuring their safety and security becomes of paramount importance. Traditional safety analysis methods, such as HAZOP, are ill-suited to assess these systems. Furthermore, cybersecurity vulnerabilities are often not considered critical, because their effects on the physical processes are not fully understood. In this work, we present STPA-SafeSec, a novel analysis methodology for both safety and security. Its results show the dependencies between cybersecurity vulnerabilities and system safety. Using this information, the most effective mitigation strategies to ensure safety and security of the system can be readily identified. We apply STPA-SafeSec to a use case in the power grid domain, and highlight", "title": "" }, { "docid": "060e518af9a250c1e6a3abf49555754f", "text": "The deep learning community has proposed optimizations spanning hardware, software, and learning theory to improve the computational performance of deep learning workloads. While some of these optimizations perform the same operations faster (e.g., switching from a NVIDIA K80 to P100), many modify the semantics of the training procedure (e.g., large minibatch training, reduced precision), which can impact a model’s generalization ability. Due to a lack of standard evaluation criteria that considers these trade-offs, it has become increasingly difficult to compare these different advances. To address this shortcoming, DAWNBENCH and the upcoming MLPERF benchmarks use time-to-accuracy as the primary metric for evaluation, with the accuracy threshold set close to state-of-the-art and measured on a held-out dataset not used in training; the goal is to train to this accuracy threshold as fast as possible. In DAWNBENCH, the winning entries improved time-to-accuracy on ImageNet by two orders of magnitude over the seed entries. 
Despite this progress, it is unclear how sensitive time-to-accuracy is to the chosen threshold as well as the variance between independent training runs, and how well models optimized for time-to-accuracy generalize. In this paper, we provide evidence to suggest that time-to-accuracy has a low coefficient of variance and that the models tuned for it generalize nearly as well as pre-trained models. We additionally analyze the winning entries to understand the source of these speedups, and give recommendations for future benchmarking efforts.", "title": "" }, { "docid": "ff02ddb759f94367813324ce15f09f8d", "text": "The present work describes a website designed for remote teaching of optical measurements using lasers. It enables senior undergraduate and postgraduate students to learn theoretical aspects of the subject and also have a means to perform experiments for better understanding of the application at hand. At this stage of web development, optical methods considered are those based on refractive index changes in the material medium. The website is specially designed in order to provide remote access of expensive lasers, cameras, and other laboratory instruments by employing a commercially available web browser. The web suite integrates remote experiments, hands-on experiments and life-like optical images generated by using numerical simulation techniques based on Open Foam software package. The remote experiments are real time experiments running in the physical laboratory but can be accessed remotely from anywhere in the world and at any time. Numerical simulation of problems enhances learning, visualization of problems and interpretation of results. In the present work hand-on experimental results are discussed with respect to simulated results. A reasonable amount of resource material, specifically theoretical background of interferometry is available on the website along with computer programs image processing and analysis of results obtained in an experiment.", "title": "" }, { "docid": "587c6f30cda5f45a6b43d55197d2ed40", "text": "We present a mechanism that puts users in the center of control and empowers them to dictate the access to their collections of data. Revisiting the fundamental mechanisms in security for providing protection, our solution uses capabilities, access lists, and access rights following well-understood formal notions for reasoning about access. This contribution presents a practical, correct, auditable, transparent, distributed, and decentralized mechanism that is well-matched to the current emerging environments including Internet of Things, smart city, precision medicine, and autonomous cars. It is based on well-tested principles and practices used in distributed authorization, cryptocurrencies, and scalable computing.", "title": "" }, { "docid": "5b55b1c913aa9ec461c6c51c3d00b11b", "text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. 
Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.", "title": "" }, { "docid": "b82b46fc0d886e3e87b757a6ca14d4bb", "text": "Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.", "title": "" }, { "docid": "b7dbf710a191e51dc24619b2a520cf31", "text": "This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central part to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of3D shape from a single color image.", "title": "" }, { "docid": "258f246b97bba091e521cd265126191a", "text": "This paper presents a method of electric tunability using varactor diodes installed on SIR coaxial resonators and associated filters. 
Using varactor diodes connected in parallel, in combination with the SIR coaxial resonator, makes it possible, by increasing the number of varactor diodes, to expand the tuning range and maintain the unloaded quality factor of the resonator. A second order filter, tunable in center frequency, was built with these resonators, providing a very large tuning range.", "title": "" }, { "docid": "91f9aae59d659b2bf7ea67e6bb5ed6b8", "text": "Due to standardization and connectivity to the Internet, Supervisory Control and Data Acquisition (SCADA) systems now face the threat of cyber attacks. SCADA systems were designed without cyber security in mind and hence the problem of how to modify conventional Information Technology (IT) intrusion detection techniques to suit the needs of SCADA is a big challenge. We explain the nuance associated with the task of SCADA-specific intrusion detection and frame it in the domain interest of control engineers and researchers to illuminate the problem space. We present a taxonomy and a set of metrics for SCADA-specific intrusion detection techniques by heightening their possible use in SCADA systems. In particular, we enumerate Intrusion Detection Systems (IDS) that have been proposed to undertake this endeavor. We draw upon the discussion to identify the deficits and voids in current research. Finally, we offer recommendations and future research venues based upon our taxonomy and analysis on which SCADAspecific IDS strategies are most likely to succeed, in part through presenting a prototype of our efforts towards this goal.", "title": "" }, { "docid": "d3765112295d9a4591b438130df59a25", "text": "This paper presents the design and mathematical model of a lower extremity exoskeleton device used to make paralyzed people walk again. The design takes into account the anatomy of standard human leg with a total of 11 Degrees of freedom (DoF). A CAD model in SolidWorks is presented along with its fabrication and a mathematical model in MATLAB.", "title": "" }, { "docid": "e31ea6b8c4a5df049782b463abc602ea", "text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.", "title": "" } ]
scidocsrr
a59a2d57e6e47df9688eb0ba79bd6e9f
Nethammer: Inducing Rowhammer Faults through Network Requests
[ { "docid": "5a18a7f42ab40cd238c92e19d23e0550", "text": "As memory scales down to smaller technology nodes, new failure mechanisms emerge that threaten its correct operation. If such failure mechanisms are not anticipated and corrected, they can not only degrade system reliability and availability but also, perhaps even more importantly, open up security vulnerabilities: a malicious attacker can exploit the exposed failure mechanism to take over the entire system. As such, new failure mechanisms in memory can become practical and significant threats to system security. In this work, we discuss the RowHammer problem in DRAM, which is a prime (and perhaps the first) example of how a circuit-level failure mechanism in DRAM can cause a practical and widespread system security vulnerability. RowHammer, as it is popularly referred to, is the phenomenon that repeatedly accessing a row in a modern DRAM chip causes bit flips in physically-adjacent rows at consistently predictable bit locations. It is caused by a hardware failure mechanism called DRAM disturbance errors, which is a manifestation of circuit-level cell-to-cell interference in a scaled memory technology. Researchers from Google Project Zero recently demonstrated that this hardware failure mechanism can be effectively exploited by user-level programs to gain kernel privileges on real systems. Several other recent works demonstrated other practical attacks exploiting RowHammer. These include remote takeover of a server vulnerable to RowHammer, takeover of a victim virtual machine by another virtual machine running on the same system, and takeover of a mobile device by a malicious user-level application that requires no permissions. We analyze the root causes of the RowHammer problem and examine various solutions. We also discuss what other vulnerabilities may be lurking in DRAM and other types of memories, e.g., NAND flash memory or Phase Change Memory, that can potentially threaten the foundations of secure systems, as the memory technologies scale to higher densities. We conclude by describing and advocating a principled approach to memory reliability and security research that can enable us to better anticipate and prevent such vulnerabilities.", "title": "" } ]
[ { "docid": "cd224f035982a669dcd8eb0c086a1be0", "text": "In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.", "title": "" }, { "docid": "f48bcd934ae9e410d6b980e8e868e7f5", "text": "An experiment was conducted in a Cave-like environment to explore the relationship between physiological responses and breaks in presence and utterances by virtual characters towards the participants. Twenty people explored a virtual environment (VE) that depicted a virtual bar scenario. The experiment was divided into a training and an experimental phase. During the experimental phase breaks in presence (BIPs) in the form of whiteouts of the VE scenario were induced for 2 s at four equally spaced times during the approximately 5 min in the bar scenario. Additionally, five virtual characters addressed remarks to the subjects. Physiological measures including electrocardiagram (ECG) and galvanic skin response (GSR) were recorded throughout the whole experiment. The heart rate, the heart rate variability, and the event-related heart rate changes were calculated from the acquired ECG data. The frequency response of the GSR signal was calculated with a wavelet analysis. The study shows that the heart rate and heart rate variability parameters vary significantly between the training and experimental phase. GSR parameters and event-related heart rate changes show the occurrence of breaks in presence. Event-related heart rate changes also signified the virtual character utterances. There were also differences in response between participants who report more or less socially anxious.", "title": "" }, { "docid": "3d5e2e0f0b9cefd240de2fd952eaf961", "text": "This paper focuses on detecting anomalies in a digital video broadcasting (DVB) system from providers’ perspective. We learn a probabilistic deterministic real timed automaton profiling benign behavior of encryption control in the DVB control access system. This profile is used as a one-class classifier. Anomalous items in a testing sequence are detected when the sequence is not accepted by the learned model.", "title": "" }, { "docid": "7bc8be5766eeb11b15ea0aa1d91f4969", "text": "A coplanar waveguide (CPW)-fed planar monopole antenna with triple-band operation for WiMAX and WLAN applications is presented. The antenna, which occupies a small size of 25(L) × 25(W) × 0.8(H) mm3, is simply composed of a pentagonal radiating patch with two bent slots. By carefully selecting the positions and lengths of these slots, good dual stopband rejection characteristic of the antenna can be obtained so that three operating bands covering 2.14-2.85, 3.29-4.08, and 5.02-6.09 GHz can be achieved. 
The measured results also demonstrate that the proposed antenna has good omnidirectional radiation patterns with appreciable gain across the operating bands and is thus suitable to be integrated within the portable devices for WiMAX/WLAN applications.", "title": "" }, { "docid": "d7b638eae20bc28e2042f4666ec1c97f", "text": "Finding informative genes from microarray data is an important research problem in bioinformatics research and applications. Most of the existing methods rank features according to their discriminative capability and then find a subset of discriminative genes (usually top k genes). In particular, t-statistic criterion and its variants have been adopted extensively. This kind of methods rely on the statistics principle of t-test, which requires that the data follows a normal distribution. However, according to our investigation, the normality condition often cannot be met in real data sets.To avoid the assumption of the normality condition, in this paper, we propose a rank sum test method for informative gene discovery. The method uses a rank-sum statistic as the ranking criterion. Moreover, we propose using the significance level threshold, instead of the number of informative genes, as the parameter. The significance level threshold as a parameter carries the quality specification in statistics. We follow the Pitman efficiency theory to show that the rank sum method is more accurate and more robust than the t-statistic method in theory.To verify the effectiveness of the rank sum method, we use support vector machine (SVM) to construct classifiers based on the identified informative genes on two well known data sets, namely colon data and leukemia data. The prediction accuracy reaches 96.2% on the colon data and 100% on the leukemia data. The results are clearly better than those from the previous feature ranking methods. By experiments, we also verify that using significance level threshold is more effective than directly specifying an arbitrary k.", "title": "" }, { "docid": "caaab1ca0175a6387b1a0c7be7803513", "text": "Probably the most promising breakthroughs in vehicular safety will emerge from intelligent, Advanced Driving Assistance Systems (i-ADAS). Influential research institutions and large vehicle manufacturers work in lockstep to create advanced, on-board safety systems by means of integrating the functionality of existing systems and developing innovative sensing technologies. In this contribution, we describe a portable and scalable vehicular instrumentation designed for on-road experimentation and hypothesis verification in the context of designing i-ADAS prototypes.", "title": "" }, { "docid": "786a70f221a70038f930352e8022ae29", "text": "We present IndoNet, a multilingual lexical knowledge base for Indian languages. It is a linked structure of wordnets of 18 different Indian languages, Universal Word dictionary and the Suggested Upper Merged Ontology (SUMO). We discuss various benefits of the network and challenges involved in the development. The system is encoded in Lexical Markup Framework (LMF) and we propose modifications in LMF to accommodate Universal Word Dictionary and SUMO. This standardized version of lexical knowledge base of Indian Languages can now easily be linked to similar global resources.", "title": "" }, { "docid": "71573bc8f5be1025837d5c72393b4fa6", "text": "This paper describes our initial work in developing a real-time audio-visual Chinese speech synthesizer with a 3D expressive avatar. 
The avatar model is parameterized according to the MPEG-4 facial animation standard [1]. This standard offers a compact set of facial animation parameters (FAPs) and feature points (FPs) to enable realization of 20 Chinese visemes and 7 facial expressions (i.e. 27 target facial configurations). The Xface [2] open source toolkit enables us to define the influence zone for each FP and the deformation function that relates them. Hence we can easily animate a large number of coordinates in the 3D model by specifying values for a small set of FAPs and their FPs. FAP values for 27 target facial configurations were estimated from available corpora. We extended the dominance blending approach to effect animations for coarticulated visemes superposed with expression changes. We selected six sentiment-carrying text messages and synthesized expressive visual speech (for all expressions, in randomized order) with neutral audio speech. A perceptual experiment involving 11 subjects shows that they can identify the facial expression that matches the text message’s sentiment 85% of the time.", "title": "" }, { "docid": "67d5858b803f47870e36a7821feaa38d", "text": "Online social networks (OSNs) are becoming extremely popular among Internet users as they spend significant amount of time on popular social networking sites like Facebook, Twitter and Google+. These sites are turning out to be fundamentally pervasive and are developing a communication channel for billions of users. Online community use them to find new friends, update their existing friends list with their latest thoughts and activities. Huge information available on these sites attracts the interest of cyber criminals who misuse these sites to exploit vulnerabilities for their illicit benefits such as advertising some product or to attract victims to click on malicious links or infecting users system just for the purpose of making money. Spam detection is one of the major problems these days in social networking sites such as twitter. Most previous techniques use different set of features to classify spam and non-spam users. In this paper, we proposed a hybrid technique which uses content-based as well as graph-based features for identification of spammers on twitter platform. We have analysed the proposed technique on real Twitter dataset with 11k uses and more than 400k tweets approximately. Our results show that the detection rate of our proposed technique is much higher than any of the existing techniques.", "title": "" }, { "docid": "cfc3d8ee024928151edb5ee2a1d28c13", "text": "Objective: In this paper, we present a systematic literature review of motivation in Software Engineering. The objective of this review is to plot the landscape of current reported knowledge in terms of what motivates developers, what de-motivates them and how existing models address motivation. Methods: We perform a systematic literature review of peer reviewed published studies that focus on motivation in Software Engineering. Systematic reviews are well established in medical research and are used to systematically analyse the literature addressing specific research questions. Results: We found 92 papers related to motivation in Software Engineering. Fifty-six percent of the studies reported that Software Engineers are distinguishable from other occupational groups. 
Our findings suggest that Software Engineers are likely to be motivated according to three related factors: their ‘characteristics’ (for example, their need for variety); internal ‘controls’ (for example, their personality) and external ‘moderators’ (for example, their career stage). The literature indicates that de-motivated engineers may leave the organisation or take more sick-leave, while motivated engineers will increase their productivity and remain longer in the organisation. Aspects of the job that motivate Software Engineers include problem solving, working to benefit others and technical challenge. Our key finding is that the published models of motivation in Software Engineering are disparate and do not reflect the complex needs of Software Engineers in their career stages, cultural and environmental settings. Conclusions: The literature on motivation in Software Engineering presents a conflicting and partial picture of the area. It is clear that motivation is context dependent and varies from one engineer to another. The most commonly cited motivator is the job itself, yet we found very little work on what it is about that job that Software Engineers find motivating. Furthermore, surveys are often aimed at how Software Engineers feel about ‘the organisation’, rather than ‘the profession’. Although models of motivation in Software Engineering are reported in the literature, they do not account for the changing roles and environment in which Software Engineers operate. Overall, our findings indicate that there is no clear understanding of the Software Engineers’ job, what motivates Software Engineers, how they are motivated, or the outcome and benefits of motivating Software Engineers. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "138ee58ce9d2bcfa14b44642cf9af08b", "text": "This research is a partial test of Park et al.’s (2008) model to assess the impact of flow and brand equity in 3D virtual worlds. It draws on flow theory as its main theoretical foundation to understand and empirically assess the impact of flow on brand equity and behavioral intention in 3D virtual worlds. The findings suggest that the balance of skills and challenges in 3D virtual worlds influences users’ flow experience, which in turn influences brand equity. Brand equity then increases behavioral intention. The authors also found that the impact of flow on behavioral intention in 3D virtual worlds is indirect because the relationship between them is mediated by brand equity. This research highlights the importance of balancing the challenges posed by 3D virtual world branding sites with the users’ skills to maximize their flow experience and brand equity to increase the behavioral intention associated with the brand.", "title": "" }, { "docid": "15881d5448e348c6e1a63e195daa68eb", "text": "Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, we observed that bottleneck autoencoders produce subjectively low quality reconstructed images. In this work, we explore the ability of sparse coding to improve reconstructed image quality for the same degree of compression. We observe that sparse image compression produces visually superior reconstructed images and yields higher values of pixel-wise measures of reconstruction quality (PSNR and SSIM) compared to bottleneck autoencoders. 
In addition, we find that using alternative metrics that correlate better with human perception, such as feature perceptual loss and the classification accuracy, sparse image compression scores up to 18.06% and 2.7% higher, respectively, compared to bottleneck autoencoders. Although computationally much more intensive, we find that sparse coding is otherwise superior to bottleneck autoencoders for the same degree of compression.", "title": "" }, { "docid": "b44df1268804e966734ea404b8c29360", "text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.", "title": "" }, { "docid": "b89259a915856b309a02e6e7aa6c957f", "text": "The paper proposes a comprehensive information security maturity model (ISMM) that addresses both technical and socio/non-technical security aspects. The model is intended for securing e-government services (implementation and service delivery) in an emerging and increasing security risk environment. The paper utilizes extensive literature review and survey study approaches. A total of eight existing ISMMs were selected and critically analyzed. Models were then categorized into security awareness, evaluation and management orientations. Based on the model’s strengths – three models were selected to undergo further analyses and then synthesized. Each of the three selected models was either from the security awareness, evaluation or management orientations category. To affirm the findings – a survey study was conducted into six government organizations located in Tanzania. The study was structured to a large extent by the security controls adopted from the Security By Consensus (SBC) model. Finally, an ISMM with five critical maturity levels was proposed. The maturity levels were: undefined, defined, managed, controlled and optimized. The papers main contribution is the proposed model that addresses both technical and non-technical security services within the critical maturity levels. Additionally, the paper enhances awareness and understanding on the needs for security in e-government services to stakeholders.", "title": "" }, { "docid": "2ee1f7a56eba17b75217cca609452f20", "text": "We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. 
We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.", "title": "" }, { "docid": "8b62238fc7c436030810be0792b59239", "text": "We interpret meta-reinforcement learning as the problem of learning how to quickly find a good sampling distribution in a new environment. This interpretation leads to the development of two new meta-reinforcement learning algorithms: E-MAML and E-RL. Results are presented on a new environment we call ‘Krazy World’: a difficult high-dimensional gridworld which is designed to highlight the importance of correctly differentiating through sampling distributions in meta-reinforcement learning. Further results are presented on a set of maze environments. We show E-MAML and E-RL deliver better performance than baseline algorithms on both tasks.", "title": "" }, { "docid": "88cb13565f66a5d20b7b5ee1c01ee730", "text": "We present four new reinforcement learning algorithms based on actor-critic and natural-gradient ideas, and provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients, and they extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms.", "title": "" }, { "docid": "bf9e44e81e37b0aefb12250202d59111", "text": "There are many clustering tasks which are closely related in the real world, e.g. clustering the web pages of different universities. However, existing clustering approaches neglect the underlying relation and treat these clustering tasks either individually or simply together. In this paper, we will study a novel clustering paradigm, namely multi-task clustering, which performs multiple related clustering tasks together and utilizes the relation of these tasks to enhance the clustering performance. 
We aim to learn a subspace shared by all the tasks, through which the knowledge of the tasks can be transferred to each other. The objective of our approach consists of two parts: (1) Within-task clustering: clustering the data of each task in its input space individually; and (2) Cross-task clustering: simultaneous learning the shared subspace and clustering the data of all the tasks together. We will show that it can be solved by alternating minimization, and its convergence is theoretically guaranteed. Furthermore, we will show that given the labels of one task, our multi-task clustering method can be extended to transductive transfer classification (a.k.a. cross-domain classification, domain adaption). Experiments on several cross-domain text data sets demonstrate that the proposed multi-task clustering outperforms traditional single-task clustering methods greatly. And the transductive transfer classification method is comparable to or even better than several existing transductive transfer classification approaches.", "title": "" }, { "docid": "c77b2092daceab26611e427facd8e6fb", "text": "Transactional Memory (TM) is on its way to becoming the programming API of choice for writing correct, concurrent, and scalable programs. Hardware TM (HTM) implementations are expected to be significantly faster than pure software TM (STM); however, full hardware support for true closed and open nested transactions is unlikely to be practical.\n This paper presents a novel mechanism, the split hardware transaction (SpHT), that uses minimal software support to combine multiple segments of an atomic block, each executed using a separate hardware transaction, into one atomic operation. The idea of segmenting transactions can be used for many purposes, including nesting, local retry, orElse, and user-level thread scheduling; in this paper we focus on how it allows linear closed and open nesting of transactions. SpHT overcomes the limited expressive power of best-effort HTM while imposing overheads dramatically lower than STM and preserving useful guarantees such as strong atomicity provided by the underlying HTM.", "title": "" }, { "docid": "4ed47f48df37717148d985ad927b813f", "text": "Given an incorrect value produced during a failed program run (e.g., a wrong output value or a value that causes the program to crash), the backward dynamic slice of the value very frequently captures the faulty code responsible for producing the incorrect value. Although the dynamic slice often contains only a small percentage of the statements executed during the failed program run, the dynamic slice can still be large and thus considerable effort may be required by the programmer to locate the faulty code.In this paper we develop a strategy for pruning the dynamic slice to identify a subset of statements in the dynamic slice that are likely responsible for producing the incorrect value. We observe that some of the statements used in computing the incorrect value may also have been involved in computing correct values (e.g., a value produced by a statement in the dynamic slice of the incorrect value may also have been used in computing a correct output value prior to the incorrect value). For each such executed statement in the dynamic slice, using the value profiles of the executed statements, we compute a confidence value ranging from 0 to 1 - a higher confidence value corresponds to greater likelihood that the execution of the statement produced a correct value. 
Given a failed run involving execution of a single error, we demonstrate that the pruning of a dynamic slice by excluding only the statements with a confidence value of 1 is highly effective in reducing the size of the dynamic slice while retaining the faulty code in the slice. Our experiments show that the number of distinct statements in a pruned dynamic slice is 1.79 to 190.57 times smaller than in the full dynamic slice. Confidence values also prioritize the statements in the dynamic slice according to the likelihood of their being faulty. We show that examining the statements in the order of increasing confidence values is an effective strategy for reducing the effort of fault location.", "title": "" } ]
scidocsrr
1842d15bfd1eb13834bff3680a5da929
Learning, memory, and synesthesia.
[ { "docid": "03277ef81159827a097c73cd24f8b5c0", "text": "It is generally accepted that there is something special about reasoning by using mental images. The question of how it is special, however, has never been satisfactorily spelled out, despite more than thirty years of research in the post-behaviorist tradition. This article considers some of the general motivation for the assumption that entertaining mental images involves inspecting a picture-like object. It sets out a distinction between phenomena attributable to the nature of mind to what is called the cognitive architecture, and ones that are attributable to tacit knowledge used to simulate what would happen in a visual situation. With this distinction in mind, the paper then considers in detail the widely held assumption that in some important sense images are spatially displayed or are depictive, and that examining images uses the same mechanisms that are deployed in visual perception. I argue that the assumption of the spatial or depictive nature of images is only explanatory if taken literally, as a claim about how images are physically instantiated in the brain, and that the literal view fails for a number of empirical reasons--for example, because of the cognitive penetrability of the phenomena cited in its favor. Similarly, while it is arguably the case that imagery and vision involve some of the same mechanisms, this tells us very little about the nature of mental imagery and does not support claims about the pictorial nature of mental images. Finally, I consider whether recent neuroscience evidence clarifies the debate over the nature of mental images. I claim that when such questions as whether images are depictive or spatial are formulated more clearly, the evidence does not provide support for the picture-theory over a symbol-structure theory of mental imagery. Even if all the empirical claims were true, they do not warrant the conclusion that many people have drawn from them: that mental images are depictive or are displayed in some (possibly cortical) space. Such a conclusion is incompatible with what is known about how images function in thought. We are then left with the provisional counterintuitive conclusion that the available evidence does not support rejection of what I call the \"null hypothesis\"; namely, that reasoning with mental images involves the same form of representation and the same processes as that of reasoning in general, except that the content or subject matter of thoughts experienced as images includes information about how things would look.", "title": "" }, { "docid": "8422d4f3f1b18a4bcee29342ce0bf0e3", "text": "Synesthesia is an unusual condition in which stimulation of one modality evokes sensation or experience in another modality. Although discussed in the literature well over a century ago, synesthesia slipped out of the scientific spotlight for decades because of the difficulty in verifying and quantifying private perceptual experiences. In recent years, the study of synesthesia has enjoyed a renaissance due to the introduction of tests that demonstrate the reality of the condition, its automatic and involuntary nature, and its measurable perceptual consequences. However, while several research groups now study synesthesia, there is no single protocol for comparing, contrasting and pooling synesthetic subjects across these groups. There is no standard battery of tests, no quantifiable scoring system, and no standard phrasing of questions. 
Additionally, the tests that exist offer no means for data comparison. To remedy this deficit we have devised the Synesthesia Battery. This unified collection of tests is freely accessible online (http://www.synesthete.org). It consists of a questionnaire and several online software programs, and test results are immediately available for use by synesthetes and invited researchers. Performance on the tests is quantified with a standard scoring system. We introduce several novel tests here, and offer the software for running the tests. By presenting standardized procedures for testing and comparing subjects, this endeavor hopes to speed scientific progress in synesthesia research.", "title": "" } ]
[ { "docid": "7278df43944050e85a7917c26b0fac56", "text": "─ A novel broadband 3-dB directional coupler design method utilizing HFSS and realization are given in this paper. It is realized in stripline, showing great agreement with the simulation and design format. The unique property of this design method is that it unnecessitates both a feedback from the realization for broadbanding and an additional smoothing of transition between coupled sections of the whole directional coupler. There is also no need for either a specialised CAD tool or a computer program. Key-words:digital frequency discriminator, HFSS, APLAC, broadside coupling", "title": "" }, { "docid": "1544bcda2c29bb4e2fc21357d73856a8", "text": "The ability to give precise and fast prediction for the price movement of stocks is the key to profitability in High Frequency Trading. The main objective of this paper is to propose a novel way of modeling the high frequency trading problem using Deep Neural Networks at its heart and to argue why Deep Learning methods can have a lot of potential in the field of High Frequency Trading. The paper goes on to analyze the model’s performance based on it’s prediction accuracy as well as prediction speed across full-day trading simulations.", "title": "" }, { "docid": "5dac4a5d6adcb75742344268bb717e11", "text": "System logs are widely used in various tasks of software system management. It is crucial to avoid logging too little or too much. To achieve so, developers need to make informed decisions on where to log and what to log in their logging practices during development. However, there exists no work on studying such logging practices in industry or helping developers make informed decisions. To fill this significant gap, in this paper, we systematically study the logging practices of developers in industry, with focus on where developers log. We obtain six valuable findings by conducting source code analysis on two large industrial systems (2.5M and 10.4M LOC, respectively) at Microsoft. We further validate these findings via a questionnaire survey with 54 experienced developers in Microsoft. In addition, our study demonstrates the high accuracy of up to 90% F-Score in predicting where to log.", "title": "" }, { "docid": "4d2460ea467745a6e93af8f448a64b68", "text": "In this paper, we propose the use of a novel fixed-wing vertical take-off and landing (VTOL) aerobot. A mission profile to investigate the Isidis Planitia region of Mars is proposed based on the knowledge of the planet's geophysical characteristics, its atmosphere and terrain. The aerobot design is described from the aspects of vehicle selection, its propulsion system, power system, payload, thermal management, structure, mass budget, and control strategy and sensor suite. The aerobot proposed in this paper is believed to be a practical and realistic solution to the problem of investigating the Martian surface. A six-degree-of-freedom flight simulator has been created to support the aerobot design process by providing performance evaluations. The nonlinear dynamics is then linearized to a state-space formulation at a certain trimmed equilibrium point. Basic autopilot modes are developed for the aerobot based on the linearized state-space model. The results of the simulation show the aerobot is stable and controllable.", "title": "" }, { "docid": "007c0b3a0ca691ceef937c23d34ce5c2", "text": "With the increasing complexity of modern Systems-on-Chip, the possibility of functional errors escaping design verification is growing. 
Post-silicon validation targets the discovery of these errors in early hardware prototypes. Due to limited visibility and observability, dedicated design-for-debug (DFD) hardware such as trace buffers are inserted to aid post-silicon validation. In spite of its benefit, such hardware incurs area overheads, which impose size limitations. However, the overhead could be overcome if the area dedicated to DFD could be reused in-field. In this work, we present a novel method for reusing an existing trace buffer as a victim cache of a processor to enhance performance. The trace buffer storage space is reused for the victim cache, with a small additional controller logic. Experimental results on several benchmarks and trace buffer sizes show that the proposed approach can enhance the average performance by up to 8.3% over a baseline architecture. We also propose a strategy for dynamic power management of the structure, to enable saving energy with negligible impact on performance.", "title": "" }, { "docid": "d2146f1821812ca65cfd56f557252200", "text": "This paper presents an automatic annotation tool AATOS for providing documents with semantic annotations. The tool links entities found from the texts to ontologies defined by the user. The application is highly configurable and can be used with different natural language Finnish texts. The application was developed as a part of the WarSampo and Semantic Finlex projects and tested using Kansa Taisteli magazine articles and consolidated Finnish legislation of Semantic Finlex. The quality of the automatic annotation was evaluated by measuring precision and recall against existing manual annotations. The results showed that the quality of the input text, as well as the selection and configuration of the ontologies impacted the results.", "title": "" }, { "docid": "6729cd1a3627510cc03b8e3e475a017e", "text": "In this paper we consider the mobile robot parking problem, i.e., the stabilization of a wheeled vehicle to a given position and orientation, using only visual feedback from low-cost cameras. We take into account the practically most relevant problem of keeping the tracked features in sight of the camera while maneuvering to park the vehicle. This constraint, often neglected in the literature, combines with the non-holonomic nature of the vehicle kinematics in a challenging controller design problem. We provide an effective solution to such a problem by using a combination of previous results on non-smooth control synthesis and recently developed hybrid control techniques. Simulations and experimental results on a laboratory vehicle are reported, showing the practicality of the proposed approach. KEY WORDS—parking of wheeled robots, visual servoing, hybrid control, non-holonomic systems", "title": "" }, { "docid": "1e1cb143f26f43f6d44994459ec46eb6", "text": "Examination of motivational dynamics in academic contexts within self-determination theory has centered primarily around both the motives (initially intrinsic vs. extrinsic, later autonomous vs. controlled) that regulate learners’study behavior and the contexts that promote or hinder these regulations. Less attention has been paid to the goal contents (intrinsic vs. extrinsic) that learners hold and to the different goal contents that are communicated in schools to increase the perceived relevance of the learning. 
Recent field experiments are reviewed showing that intrinsic goal framing (relative to extrinsic goal framing and no-goal framing) produces deeper engagement in learning activities, better conceptual learning, and higher persistence at learning activities. These effects occur for both intrinsically and extrinsically oriented individuals. Results are discussed in terms of self-determination theory’s concept of basic psychological needs for autonomy, competence, and relatedness.", "title": "" }, { "docid": "9bafd07082066235a6b99f00e360b0d2", "text": "Mobile devices have become a significant part of people’s lives, leading to an increasing number of users involved with such technology. The rising number of users invites hackers to generate malicious applications. Besides, the security of sensitive data available on mobile devices is taken lightly. Relying on currently developed approaches is not sufficient, given that intelligent malware keeps modifying rapidly and as a result becomes more difficult to detect. In this paper, we propose an alternative solution to evaluating malware detection using the anomaly-based approach with machine learning classifiers. Among the various network traffic features, the four categories selected are basic information, content based, time based and connection based. The evaluation utilizes two datasets: public (i.e. MalGenome) and private (i.e. self-collected). Based on the evaluation results, both the Bayes network and random forest classifiers produced more accurate readings, with a 99.97 % true-positive rate (TPR) as opposed to the multi-layer perceptron with only 93.03 % on the MalGenome dataset. However, this experiment revealed that the k-nearest neighbor classifier efficiently detected the latest Android malware with an 84.57 % true-positive rate, higher than the other classifiers.", "title": "" }, { "docid": "4a989671768dee7428612adfc6c3f8cc", "text": "We developed computational models to predict the emergence of depression and Post-Traumatic Stress Disorder in Twitter users. Twitter data and details of depression history were collected from 204 individuals (105 depressed, 99 healthy). We extracted predictive features measuring affect, linguistic style, and context from participant tweets (N = 279,951) and built models using these features with supervised learning algorithms. Resulting models successfully discriminated between depressed and healthy content, and compared favorably to general practitioners’ average success rates in diagnosing depression, albeit in a separate population. Results held even when the analysis was restricted to content posted before first depression diagnosis. State-space temporal analysis suggests that onset of depression may be detectable from Twitter data several months prior to diagnosis. Predictive results were replicated with a separate sample of individuals diagnosed with PTSD (Nusers = 174, Ntweets = 243,775). A state-space time series model revealed indicators of PTSD almost immediately post-trauma, often many months prior to clinical diagnosis.
These methods suggest a data-driven, predictive approach for early screening and detection of mental illness.", "title": "" }, { "docid": "c58fc1a572d5120e14eb6e501a50b8aa", "text": "In this paper, a dc-dc buck-boost converter is modeled and controlled using a sliding mode technique. First, the buck-boost converter is modeled, the dynamic equations describing the converter are derived, and a sliding mode controller is designed. The robustness of the converter system is tested against step load changes and input voltage variations. Matlab/Simulink is used for the simulations. The simulation results are presented.", "title": "" }, { "docid": "b5f0c24ad49a8e4b6b0e4640c57367eb", "text": "Vehicle detection is important for advanced driver assistance systems (ADAS). Both LiDAR and cameras are often used. LiDAR provides excellent range information but with limits to object identification; on the other hand, the camera allows for better recognition but with limits to the high resolution range information. This paper presents a sensor fusion based vehicle detection approach by fusing information from both LiDAR and cameras. The proposed approach is based on two components: a hypothesis generation phase to generate positions that potentially represent vehicles and a hypothesis verification phase to classify the corresponding objects. Hypothesis generation is achieved using the stereo camera while verification is achieved using the LiDAR. The main contribution is that the complementary advantages of the two sensors are utilized, with the goal of vehicle detection. The proposed approach leads to an enhanced detection performance; in addition, it maintains tolerable false alarm rates compared to vision based classifiers. Experimental results suggest a performance which is broadly comparable to the current state of the art, albeit with a reduced false alarm rate.", "title": "" }, { "docid": "04647771810ac62b27ee8da12833a02d", "text": "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.", "title": "" }, { "docid": "1f6e92bc8239e358e8278d13ced4a0a9", "text": "This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves performance of pose estimation from RGB images during testing.
We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depth-based network of the paired depth images to constrain mid-level RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress the responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle layer features that mimic those of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.", "title": "" }, { "docid": "1180eda496b8b3f7f41de6ce79b0b9b1", "text": "Biomaterial development is currently the most active research area in the field of biomedical engineering. The bioglasses possess immense potential for being the ideal biomaterials due to their high adaptiveness to the biological environment as well as tunable properties. Bioglasses like 45S5 have shown great clinical success over the past 10 years. The bioglasses like 45S5 were prepared using melt-quenching techniques, but recently porous bioactive glasses have been derived through the sol-gel process. The synthesis route exhibits a marked effect on the specific surface area, as well as the degradability of the material. This article is an attempt to provide the state of the art of the sol-gel and melt-quenched bioactive bioglasses for tissue regeneration. Fabrication routes for bioglasses suitable for bone tissue engineering are highlighted and the effect of these fabrication techniques on the porosity, pore-volume, mechanical properties, cytocompatibility and especially apatite layer formation on the surface of bioglasses is analyzed in detail. Drug delivery capability of bioglasses is addressed briefly along with the bioactivity of mesoporous glasses. © 2015 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 104B: 1248-1275, 2016.", "title": "" }, { "docid": "e44f67fec39390f215b5267c892d1a26", "text": "Primary progressive aphasia (PPA) may be the onset of several neurodegenerative diseases. This study evaluates a cohort of patients with PPA to assess their progression to different clinical syndromes, associated factors that modulate this progression, and patterns of cerebral metabolism linked to different clinical evolutionary forms. Thirty-five patients meeting PPA criteria underwent a clinical and neuroimaging 18F-Fluorodeoxyglucose PET evaluation. Survival analysis was performed using time from clinical onset to the development of a non-language symptom or deficit (PPA-plus). Cerebral metabolism was analyzed using Statistical Parametric Mapping. Patients classified into three PPA variants evolved to atypical parkinsonism, behavioral disorder and motor neuron disease in the agrammatic variant; to behavioral disorder in the semantic; and to memory impairment in the logopenic. Median time from the onset of symptoms to PPA-plus was 36 months (31–40, 95 % confidence interval). Right laterality and years of education were associated with a lower risk of progression, while the logopenic variant was associated with a higher risk.
Different regions of hypometabolism were identified in agrammatic PPA with parkinsonism, motor neuron disease and logopenic PPA-plus. Clinical course of PPA differs according to each variant. Left anterior temporal and frontal medial hypometabolism in agrammatic variant is linked to motor neuron disease and atypical parkinsonism, respectively. PPA variant, laterality and education may be associated to the risk of progression. These results suggest the possibility that clinical and imaging data could help to predict the clinical course of PPA.", "title": "" }, { "docid": "9e865969535469357f2600985750d78e", "text": "Patients with pathological laughter and crying (PLC) are subject to relatively uncontrollable episodes of laughter, crying or both. The episodes occur either without an apparent triggering stimulus or following a stimulus that would not have led the subject to laugh or cry prior to the onset of the condition. PLC is a disorder of emotional expression rather than a primary disturbance of feelings, and is thus distinct from mood disorders in which laughter and crying are associated with feelings of happiness or sadness. The traditional and currently accepted view is that PLC is due to the damage of pathways that arise in the motor areas of the cerebral cortex and descend to the brainstem to inhibit a putative centre for laughter and crying. In that view, the lesions 'disinhibit' or 'release' the laughter and crying centre. The neuroanatomical findings in a recently studied patient with PLC, along with new knowledge on the neurobiology of emotion and feeling, gave us an opportunity to revisit the traditional view and propose an alternative. Here we suggest that the critical PLC lesions occur in the cerebro-ponto-cerebellar pathways and that, as a consequence, the cerebellar structures that automatically adjust the execution of laughter or crying to the cognitive and situational context of a potential stimulus, operate on the basis of incomplete information about that context, resulting in inadequate and even chaotic behaviour.", "title": "" }, { "docid": "2364fc795ff8e449a557eda4b498b42d", "text": "With the increasing utilization and popularity of the cloud infrastructure, more and more data are moved to the cloud storage systems. This makes the availability of cloud storage services critically important, particularly given the fact that outages of cloud storage services have indeed happened from time to time. Thus, solely depending on a single cloud storage provider for storage services can risk violating the service-level agreement (SLA) due to the weakening of service availability. This has led to the notion of Cloud-of-Clouds, where data redundancy is introduced to distribute data among multiple independent cloud storage providers, to address the problem. The key in the effectiveness of the Cloud-of-Clouds approaches lies in how the data redundancy is incorporated and distributed among the clouds. However, the existing Cloud-of-Clouds approaches utilize either replication or erasure codes to redundantly distribute data across multiple clouds, thus incurring either high space or high performance overheads. In this paper, we propose a hybrid redundant data distribution approach, called HyRD, to improve the cloud storage availability in Cloud-of-Clouds by exploiting the workload characteristics and the diversity of cloud providers. 
In HyRD, large files are distributed in multiple cost-efficient cloud storage providers with erasure-coded data redundancy while small files and file system metadata are replicated on multiple high-performance cloud storage providers. The experiments conducted on our lightweight prototype implementation of HyRD show that HyRD improves the cost efficiency by 33.4 and 20.4 percent, and reduces the access latency by 58.7 and 34.8 percent than the DuraCloud and RACS schemes, respectively.", "title": "" }, { "docid": "1c005124e2014b1d2eaaa178eda3e4d0", "text": "BACKGROUND\nThere is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low-risk of bias.\n\n\nMETHODS\nInformation size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis.\n\n\nRESULTS\nWe devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate considering the required information size. D2 is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I2), which may underestimate the required information size. Thus, D2 and I2 are compared and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 >or= I2, for all meta-analyses.\n\n\nCONCLUSION\nWe conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects meta-analysis despite the choice of the between trial variance estimator that constitutes the model. Furthermore, D2 can readily adjust the required information size in any random-effects model meta-analysis.", "title": "" } ]
scidocsrr
338ece7a572b698d226dd8c322205a7d
The pharmacology of psilocybin.
[ { "docid": "e50320cfddc32a918389fbf8707d599f", "text": "Psilocybin, an indoleamine hallucinogen, produces a psychosis-like syndrome in humans that resembles first episodes of schizophrenia. In healthy human volunteers, the psychotomimetic effects of psilocybin were blocked dose-dependently by the serotonin-2A antagonist ketanserin or the atypical antipsychotic risperidone, but were increased by the dopamine antagonist and typical antipsychotic haloperidol. These data are consistent with animal studies and provide the first evidence in humans that psilocybin-induced psychosis is due to serotonin-2A receptor activation, independently of dopamine stimulation. Thus, serotonin-2A overactivity may be involved in the pathophysiology of schizophrenia and serotonin-2A antagonism may contribute to therapeutic effects of antipsychotics.", "title": "" }, { "docid": "a67390291a84c641e57e4e8ac04d30f4", "text": "1. Reactions induced by LSD, mescaline, psilocin, and psilocybin are qualitatively similar. 2. The time course of the psilocin and psilocybin reactions are shorter than those of LSD or mescaline reactions. li 4. Psilocin is approximately 1.4 times as potent as psilocybin. This ratio is the same as that of the molecular weights of the two drugs. Reactions induced by LSD, mescaline, psilocin, and psilocybin are qualitatively similar. The time course of the psilocin and psilocybin reactions are shorter than those of LSD or mescaline reactions. li Psilocin is approximately 1.4 times as potent as psilocybin. This ratio is the same as that of the molecular weights of the two drugs.", "title": "" } ]
[ { "docid": "ebb4bf38c87364cdad5764d3d5f5713e", "text": "IMPORTANCE\nAlthough several longitudinal studies have demonstrated an effect of violent video game play on later aggressive behavior, little is known about the psychological mediators and moderators of the effect.\n\n\nOBJECTIVE\nTo determine whether cognitive and/or emotional variables mediate the effect of violent video game play on aggression and whether the effect is moderated by age, sex, prior aggressiveness, or parental monitoring.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThree-year longitudinal panel study. A total of 3034 children and adolescents from 6 primary and 6 secondary schools in Singapore (73% male) were surveyed annually. Children were eligible for inclusion if they attended one of the 12 selected schools, 3 of which were boys' schools. At the beginning of the study, participants were in third, fourth, seventh, and eighth grades, with a mean (SD) age of 11.2 (2.1) years (range, 8-17 years). Study participation was 99% in year 1.\n\n\nMAIN OUTCOMES AND MEASURES\nThe final outcome measure was aggressive behavior, with aggressive cognitions (normative beliefs about aggression, hostile attribution bias, aggressive fantasizing) and empathy as potential mediators.\n\n\nRESULTS\nLongitudinal latent growth curve modeling demonstrated that the effects of violent video game play are mediated primarily by aggressive cognitions. This effect is not moderated by sex, prior aggressiveness, or parental monitoring and is only slightly moderated by age, as younger children had a larger increase in initial aggressive cognition related to initial violent game play at the beginning of the study than older children. Model fit was excellent for all models.\n\n\nCONCLUSIONS AND RELEVANCE\nGiven that more than 90% of youths play video games, understanding the psychological mechanisms by which they can influence behaviors is important for parents and pediatricians and for designing interventions to enhance or mitigate the effects.", "title": "" }, { "docid": "f1635351e7d3c308eeca5df314b18b8f", "text": "The vertex cover problem Find a set of vertices that cover the graph LP rounding is a 4 step scheme to approximate combinatorial problems with theoretical guarantees on solution quality. Several problems in machine learning, computer vision and data analysis can be formulated using NP-­‐hard combinatorial optimization problems. In many of these applications, approximate solutions for these NP-­‐hard problems are 'good enough'.", "title": "" }, { "docid": "f148a914c6be989a6e4ca41f073b32de", "text": "Hashing-based semantic similarity search is becoming increasingly important for building large-scale content-based retrieval system. The state-of-the-art supervised hashing techniques use flexible two-step strategy to learn hash functions. The first step learns binary codes for training data by solving binary optimization problems with millions of variables, thus usually requiring intensive computations. Despite simplicity and efficiency, locality-sensitive hashing (LSH) has never been recognized as a good way to generate such codes due to its poor performance in traditional approximate neighbor search. We claim in this paper that the true merit of LSH lies in transforming the semantic labels to obtain the binary codes, resulting in an effective and efficient two-step hashing framework. 
Specifically, we developed the locality-sensitive two-step hashing (LS-TSH) that generates the binary codes through LSH rather than any complex optimization technique. Theoretically, with proper assumption, LS-TSH is actually a useful LSH scheme, so that it preserves the label-based semantic similarity and possesses sublinear query complexity for hash lookup. Experimentally, LS-TSH could obtain comparable retrieval accuracy with state of the arts with two to three orders of magnitudes faster training speed.", "title": "" }, { "docid": "debcc046323ffbd9a093c8e07d37960e", "text": "This review discusses the theory and practical application of independent component analysis (ICA) to multi-channel EEG data. We use examples from an audiovisual attention-shifting task performed by young and old subjects to illustrate the power of ICA to resolve subtle differences between evoked responses in the two age groups. Preliminary analysis of these data using ICA suggests a loss of task specificity in independent component (IC) processes in frontal and somatomotor cortex during post-response periods in older as compared to younger subjects, trends not detected during examination of scalp-channel event-related potential (ERP) averages. We discuss possible approaches to component clustering across subjects and new ways to visualize mean and trial-by-trial variations in the data, including ERP-image plots of dynamics within and across trials as well as plots of event-related spectral perturbations in component power, phase locking, and coherence. We believe that widespread application of these and related analysis methods should bring EEG once again to the forefront of brain imaging, merging its high time and frequency resolution with enhanced cm-scale spatial resolution of its cortical sources.", "title": "" }, { "docid": "4f37a3931989c89104910bcf4c45d5ec", "text": "Internet of vehicles is a promising area related to D2D communication and the Internet of Things. We present a novel perspective on vehicular communications and social vehicle swarms, to study and analyze a socially aware Internet of vehicles with the assistance of an agent-based model intended to reveal hidden patterns behind superficial data. After discussing its components (its agents, environments, and rules), we introduce supportive technology and methods, deep reinforcement learning, privacy preserving data mining, and sub-cloud computing in order to detect the most significant and interesting information for each individual effectively, which is the key desire. Finally, several relevant research topics and challenges are discussed.", "title": "" }, { "docid": "1e4a74d8d4ae131467e12911fd6ac281", "text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. 
The detection of these papers by Google Scholar caused an outburst in the citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such an outburst and how it could affect the future development of such products not only at the individual level but also at the journal level, especially if Google Scholar persists with its lack of transparency.", "title": "" }, { "docid": "d56968c0512526ea891f0f031b99db04", "text": "Naive-Bayes and k-NN classifiers are two machine learning approaches for text classification. Rocchio is the classic method for text classification in information retrieval. Based on these three approaches and using classifier fusion methods, we propose a novel approach in text classification. Our approach is a supervised method, meaning that the list of categories should be defined and a set of training data should be provided for training the system. In this approach, documents are represented as vectors where each component is associated with a particular word. We proposed voting methods, the OWA operator, and the decision template method for combining classifiers. Experimental results show that these methods decrease the classification error by 15 percent as measured on 2000 training documents from the 20 newsgroups dataset.", "title": "" }, { "docid": "c733ea4c565325aa64aa82d15d791675", "text": "Korean dramas have played an influential role in Taiwanese society since they were first introduced into Taiwan. One of the most dominant themes in most Korean dramas is the theme of love. As a story topic, love accounts for about ninety percent of the themes dealt with by these dramas. By applying the theoretical idea of cultural proximity, and by using content analysis to analyze the underlying values contained in the dramas, this study examines the theme of love in these dramas. The data pool includes 10 popular Korean dramas aired between the years of 2008 and 2012. Using these 10 dramas as a sample, I examine whether contemporary feminist attitudes about women's autonomy play a role in how Taiwanese audiences identify with stories about love in Korean dramas. Through interviews with four television station managers from companies including LTV, ETTV, Videoland Drama and ELTA, I also gathered information about the process of localization within Korean dramas. In addition to the above strategies, my study incorporates secondary data to analyze related reports and statistical data about Korean dramas.", "title": "" }, { "docid": "29816f0358cfff1c1dddce203003ad41", "text": "Increasing volumes of trajectory data require analysis methods which go beyond the visual. Methods for computing trajectory analysis typically assume linear interpolation between quasi-regular sampling points. This assumption, however, is often not realistic, and can lead to a meaningless analysis for sparsely and/or irregularly sampled data. We propose to use the space-time prism model instead, allowing us to represent the influence of speed on possible trajectories within a volume. We give definitions for the similarity of trajectories in this model and describe algorithms for its computation using the Fréchet and the equal time distance.", "title": "" }, { "docid": "e69ecf0d4d04a956b53f34673e353de3", "text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient.
Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other public- and private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel-related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may be able to enhance users’ involvement and their overall experience.", "title": "" }, { "docid": "597a3b52fd5114228d74398756d3359f", "text": "The authors report a meta-analysis of individual differences in detecting deception, confining attention to occasions when people judge strangers' veracity in real time with no special aids. The authors have developed a statistical technique to correct nominal individual differences for differences introduced by random measurement error. Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others' statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar's credibility than on any other individual difference.", "title": "" }, { "docid": "e679a2d77d45ce6a74893d8bcc189a82", "text": "We present a novel approach to real-time structured light range scanning. After an analysis of the underlying assumptions of existing structured light techniques, we derive a new set of illumination patterns based on coding the boundaries between projected stripes. These stripe boundary codes allow range scanning of moving objects, with only modest assumptions about scene continuity and reflectance. We describe an implementation that integrates these new codes with real-time algorithms for tracking stripe boundaries and determining depths. Our system uses a standard video camera and DLP projector, and produces dense range images at 60 Hz with 100 μm accuracy over a 10 cm working volume.
As an application, we demonstrate the creation of complete models of rigid objects: the objects are rotated in front of the scanner by hand, and successive range images are automatically aligned.", "title": "" }, { "docid": "f7ba998d8f4eb51619673edb66f7b3e3", "text": "We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation defined from inferred graph translations. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.", "title": "" }, { "docid": "12229c2940f66bd7d8db63d542436062", "text": "We develop some versions of quantum devices simulators such as NEMO-VN, NEMO-VN1 and NEMO-VN2. The quantum device simulator – NEMO-VN2 focuses on carbon nanotube FET (CNTFET). CNTFETs have been studied in recent years as potential alternatives to CMOS devices because of their compelling properties. Studies of phonon scattering in CNTs and its influence in CNTFET have focused on metallic tubes or on long semiconducting tubes. Phonon scattering in short channel CNTFETs, which is important for nanoelectronic applications, remains unexplored. In this work the non-equilibrium Green function (NEGF) is used to perform a comprehensive study of CNT transistors. The program has been written by using graphic user interface (GUI) of Matlab. We find that the effect of scattering on current-voltage characteristics of CNTFET is significant. The degradation of drain current due to scattering has been observed. Some typical simulation results have been presented for illustration.", "title": "" }, { "docid": "d64b3b68f094ade7881f2bb0f2572990", "text": "Large-scale transactional systems still suffer from not viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears as interesting from this perspective. A semantic layer built upon a basic blockchain infrastructure would join the benefits of flexible resource/service discovery and validation by consensus. This paper proposes a novel Service-oriented Architecture (SOA) based on a semantic blockchain. Registration, discovery, selection and payment operations are implemented as smart contracts, allowing decentralized execution and trust. Potential applications include material and immaterial resource marketplaces and trustless collaboration among autonomous entities, spanning many areas of interest for smart cities and communities.", "title": "" }, { "docid": "cddf4197bce8a4d965907d9c8f384e35", "text": "This paper examines the contemporary relationship between fashion brands and celebrities. Noting the historic role of celebrities in fashion and their current prevalence in the industry, the paper moves beyond discussion of the motives and effectiveness of celebrity endorsement, and instead explores its nature and practice in the fashion sector. The paper proposes a new definition of celebrity endorsement in fashion, offers a classification of celebrities involved in fashion brand endorsement, and presents a typology examining the contemporary means by which a fashion brand may collaborate with celebrities. The typology is defined in context of the nature, length and cost to the brand of the relationship between it and the celebrity. 
The methodology uses secondary sources and qualitative primary research in an exploratory agenda in order to propose conclusions and suggest ideas for further research.", "title": "" }, { "docid": "4f40700ccdc1b6a8a306389f1d7ea107", "text": "Skin cancer exists in different forms like Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most dangerous and unpredictable. In this paper, we implement an image processing technique for the detection of Melanoma Skin Cancer using the software MATLAB which is easy for implementation as well as detection of Melanoma skin cancer. The input to the system is the skin lesion image. This image proceeds with the image pre-processing methods such as conversion of RGB image to Grayscale image, noise removal and so on. Further Otsu thresholding is used to segment the images followed by feature extraction that includes parameters like Asymmetry, Border Irregularity, Color and Diameter (ABCD) and then Total Dermatoscopy Score (TDS) is calculated. The calculation of TDS determines the presence of Melanoma skin cancer by classifying it as benign, suspicious or highly suspicious skin lesion.", "title": "" }, { "docid": "96c10ca887c0210615d16655f62665e0", "text": "The two key challenges in hierarchical classification are to leverage the hierarchical dependencies between the class-labels for improving performance, and, at the same time maintaining scalability across large hierarchies. In this paper we propose a regularization framework for large-scale hierarchical classification that addresses both the problems. Specifically, we incorporate the hierarchical dependencies between the class-labels into the regularization structure of the parameters thereby encouraging classes nearby in the hierarchy to share similar model parameters. Furthermore, we extend our approach to scenarios where the dependencies between the class-labels are encoded in the form of a graph rather than a hierarchy. To enable large-scale training, we develop a parallel-iterative optimization scheme that can handle datasets with hundreds of thousands of classes and millions of instances and learning terabytes of parameters. Our experiments showed a consistent improvement over other competing approaches and achieved state-of-the-art results on benchmark datasets.", "title": "" }, { "docid": "1255c63b8fc0406b1f3a0161f59ebfb1", "text": "This paper proposes an EMI filter design software which can serve as an aid to the designer to quickly arrive at optimal filter sizes based on off-line measurement data or simulation results. The software covers different operating conditions-such as: different switching devices, different types of switching techniques, different load conditions and layout of the test setup. The proposed software design works for both silicon based and WBG based power converters.", "title": "" }, { "docid": "da72d905e403552106d04ca5b86a0845", "text": "The reconstitution of lost bone is a subject that is germane to many orthopedic conditions including fractures and non-unions, infection, inflammatory arthritis, osteoporosis, osteonecrosis, metabolic bone disease, tumors, and periprosthetic particle-associated osteolysis. In this regard, the processes of acute and chronic inflammation play an integral role. Acute inflammation is initiated by endogenous or exogenous adverse stimuli, and can become chronic in nature if not resolved by normal homeostatic mechanisms. Dysregulated inflammation leads to increased bone resorption and suppressed bone formation. 
Crosstalk among inflammatory cells (polymorphonuclear leukocytes and cells of the monocyte-macrophage-osteoclast lineage) and cells related to bone healing (cells of the mesenchymal stem cell-osteoblast lineage and vascular lineage) is essential to the formation, repair and remodeling of bone. In this review, the authors provide a comprehensive summary of the literature related to inflammation and bone repair. Special emphasis is placed on the underlying cellular and molecular mechanisms, and potential interventions that can favorably modulate the outcome of clinical conditions that involve bone repair.", "title": "" } ]
scidocsrr
d365c393d9a4dafe5cafa0a7cbe7a523
Using hidden Markov models for topic segmentation of meeting transcripts
[ { "docid": "0b0614f88f849aa5ecf135dcee55528a", "text": "This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms.", "title": "" }, { "docid": "f4380a5acaba5b534d13e1a4f09afe4f", "text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.", "title": "" } ]
[ { "docid": "579333c5b2532b0ad04d0e3d14968a54", "text": "We present a learning to rank approach to classify folktales, such as fairy tales and urban legends, according to their story type, a concept that is widely used by folktale researchers to organize and classify folktales. A story type represents a collection of similar stories often with recurring plot and themes. Our work is guided by two frequently used story type classification schemes. Contrary to most information retrieval problems, the text similarity in this problem goes beyond topical similarity. We experiment with approaches inspired by distributed information retrieval and features that compare subject-verb-object triplets. Our system was found to be highly effective compared with a baseline system.", "title": "" }, { "docid": "869f52723b215ba8dc5c4c614b2c79a6", "text": "Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.", "title": "" }, { "docid": "482ff6c78f7b203125781f5947990845", "text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. 
These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.", "title": "" }, { "docid": "5b88a7f862eab6fc632a506bbb99be70", "text": "In this paper we propose a methodology to control a novel class of actuators that we called passive noise rejection variable stiffness actuators (pnrVSA). Differently from nowadays classical VSA designs, this novel class of actuators mimics the human musculoskeletal ability to increase noise rejection without relying on feedback. To fully highlight the potentialities behind these actuators we consider movement planning under two constraints: (1) absence of feedback, i.e. purely open-loop planning1; (2) uncertain dynamic model. Under these constraints, movement planning can be formalized as an open-loop stochastic optimal control. Due to the lack of classical methods forcing the open-loop nature of the computed solution, we used here a slight modification of available methodologies based on importance sampling of trajectories using forward diffusion processes. Simulations show that the proposed algorithm can be effectively used to plan open-loop movements with pnrVSA. In particular, two different scenarios are considered: the control of a single joint pnrVSA and the control of a two degrees of freedom planar arm equipped with antagonist pnrVSAs at each joint. In both cases, movement has to be planned in presence of uncertain dynamics for unstable tasks. It is shown that open-loop stochastic optimal control can modulate the intrinsic stiffness of the system to cope with both instability and noise.", "title": "" }, { "docid": "33f86056827e1e8958ab17e11d7e4136", "text": "The successful integration of Information and Communications Technology (ICT) into the teaching and learning of English Language is largely dependent on the level of teacher’s ICT competence, the actual utilization of ICT in the language classroom and factors that challenge teachers to use it in language teaching. The study therefore assessed the Secondary School English language teachers’ ICT literacy, the extent of ICT utilization in English language teaching and the challenges that prevent language teachers to integrate ICT in teaching. To answer the problems, three sets of survey questionnaires were distributed to 30 English teachers in the 11 schools of Cluster 1 (CarCanMadCarLan). Data gathered were analyzed using descriptive statistics and frequency count. The results revealed that the teachers’ ICT literacy was moderate. The findings provided evidence that there was only a limited use of ICT in language teaching. Feedback gathered from questionnaires show that teachers faced many challenges that demotivate them from using ICT in language activities. Based on these findings, it is recommended the teachers must be provided with intensive ICT-based trainings to equip them with knowledge of ICT and its utilization in language teaching. School administrators as well as stakeholders may look for interventions to upgrade school’s ICTbased resources for its optimum use in teaching and learning. Most importantly, a larger school-wide ICT development plan may be implemented to ensure coherence of ICT implementation in the teaching-learning activities. 
‘ICT & Innovations in Education’ International Journal International Electronic Journal | ISSN 2321 – 7189 | www.ictejournal.com Volume 2, Issue 1 | February 2014", "title": "" }, { "docid": "34d6b5908b68bcba17edac3abaa1fe8e", "text": "This paper provides a survey of modern LIght Detection And Ranging (LIDAR) sensors from a perspective of how they can be used for spacecraft relative navigation. In addition to LIDAR technology commonly used in space applications today (e.g. scanning, flash), this paper reviews emerging LIDAR technologies gaining traction in other non-aerospace fields. The discussion will include an overview of sensor operating principles and specific pros/cons for each type of LIDAR. This paper provides a comprehensive review of LIDAR technology as applied specifically to spacecraft relative navigation.", "title": "" }, { "docid": "e2b3001513059a02cf053cadab6abb85", "text": "Data mining is the process of discovering meaningful new correlation, patterns and trends by sifting through large amounts of data, using pattern recognition technologies as well as statistical and mathematical techniques. Cluster analysis is often used as one of the major data analysis technique widely applied for many practical applications in emerging areas of data mining. Two of the most delegated, partition based clustering algorithms namely k-Means and Fuzzy C-Means are analyzed in this research work. These algorithms are implemented by means of practical approach to analyze its performance, based on their computational time. The telecommunication data is the source data for this analysis. The connection oriented broad band data is used to find the performance of the chosen algorithms. The distance (Euclidian distance) between the server locations and their connections are rearranged after processing the data. The computational complexity (execution time) of each algorithm is analyzed and the results are compared with one another. By comparing the result of this practical approach, it was found that the results obtained are more accurate, easy to understand and above all the time taken to process the data was substantially high in Fuzzy C-Means algorithm than the k-Means. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2dde5d26ab14ee6be365b23402cc13e1", "text": "Compressive sensing is a revolutionary idea proposed recently to achieve much lower sampling rate for sparse signals. For large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraint, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem for sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to the similar level of the number of sparse events, which is much smaller than the total number of sources. Second, we suppose the event has the binary nature, and employ the Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under the Gaussian noise. From the simulation results, we show that the sampling rate can reduce to 25% without sacrificing performance. With further decreasing the sampling rate, the performance is gradually reduced until 10% of sampling rate. 
Our proposed detection algorithm has much better performance than the l1-magic algorithm proposed in the literature.", "title": "" }, { "docid": "20ecae219ecf21429fb7c2697339fe50", "text": "Massively multiplayer game holds a huge market in the digital entertainment industry. Companies invest heavily in the game and graphics development since a successful online game can attract million of users, and this translates to a huge investment payoff. However, multiplayer online game is also subjected to various forms of hacks and cheats. Hackers can alter the graphic rendering to reveal information otherwise be hidden in a normal game, or cheaters can use software robot to play the game automatically and gain an unfair advantage. Currently, some popular online games release software patches or incorporate anti-cheating software to detect known cheats. This not only creates deployment difficulty but new cheats will still be able to breach the normal game logic until software patches are available. Moreover, the anti-cheating software themselves are also vulnerable to hacks. In this paper, we propose a scalable and efficient method to detect whether a player is cheating or not. The methodology is based on the dynamic Bayesian network approach. The detection framework relies solely on the game states and runs in the game server only. Therefore it is invulnerable to hacks and it is a much more deployable solution. To demonstrate the effectiveness of the propose method, we implement a prototype multiplayer game system and to detect whether a player is using the “aiming robot” for cheating or not. Experiments show that not only we can effectively detect cheaters, but the false positive rate is extremely low. We believe the proposed methodology and the prototype system provide a first step toward a systematic study of cheating detection and security research in the area of online multiplayer games.", "title": "" }, { "docid": "d60deca88b46171ad940b9ee8964dc77", "text": "Established in 1987, the EuroQol Group initially comprised a network of international, multilingual and multidisciplinary researchers from seven centres in Finland, the Netherlands, Norway, Sweden and the UK. Nowadays, the Group comprises researchers from Canada, Denmark, Germany, Greece, Japan, New Zealand, Slovenia, Spain, the USA and Zimbabwe. The process of shared development and local experimentation resulted in EQ-5D, a generic measure of health status that provides a simple descriptive profile and a single index value that can be used in the clinical and economic evaluation of health care and in population health surveys. Currently, EQ-5D is being widely used in different countries by clinical researchers in a variety of clinical areas. EQ-5D is also being used by eight out of the first 10 of the top 50 pharmaceutical companies listed in the annual report of Pharma Business (November/December 1999). Furthermore, EQ-5D is one of the handful of measures recommended for use in cost-effectiveness analyses by the Washington Panel on Cost Effectiveness in Health and Medicine. EQ-5D has now been translated into most major languages with the EuroQol Group closely monitoring the process.", "title": "" }, { "docid": "1c17535a4f1edc36b698295136e9711a", "text": "Massive digital acquisition and preservation of deteriorating historical and artistic documents is of particular importance due to their value and fragile condition. 
The study and browsing of such digital libraries is invaluable for scholars in the Cultural Heritage field but requires automatic tools for analyzing and indexing these datasets. We present two completely automatic methods requiring no human intervention: text height estimation and text line extraction. Our proposed methods have been evaluated on a huge heterogeneous corpus of illuminated medieval manuscripts of different writing styles and with various problematic attributes, such as holes, spots, ink bleed-through, ornamentation, background noise, and overlapping text lines. Our experimental results demonstrate that these two new methods are efficient and reliable, even when applied to very noisy and damaged old handwritten manuscripts.", "title": "" }, { "docid": "cdc1e3b629659bf342def1f262d7aa0b", "text": "In educational contexts, understanding the student’s learning must take account of the student’s construction of reality. Reality as experienced by the student has an important additional value. This assumption also applies to a student’s perception of evaluation and assessment. Students’ study behaviour is not only determined by the examination or assessment modes that are used. Students’ perceptions about evaluation methods also play a significant role. This review aims to examine evaluation and assessment from the student’s point of view. Research findings reveal that students’ perceptions about assessment significantly influence their approaches to learning and studying. Conversely, students’ approaches to study influence the ways in which they perceive evaluation and assessment. Findings suggest that students hold strong views about different assessment and evaluation formats. In this respect students favour multiple-choice format exams to essay type questions. However, when compared with more innovative assessment methods, students call the ‘fairness’ of these well-known evaluation modes into question.", "title": "" }, { "docid": "645f49ff21d31bb99cce9f05449df0d7", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. 
We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" }, { "docid": "25346cdef3e97173dab5b5499c4d4567", "text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.", "title": "" }, { "docid": "14276adf4f5b3538f95cfd10902825ef", "text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.", "title": "" }, { "docid": "4d8cc4d8a79f3d35ccc800c9f4f3dfdc", "text": "Many common events in our daily life affect us in positive and negative ways. For example, going on vacation is typically an enjoyable event, while being rushed to the hospital is an undesirable event. In narrative stories and personal conversations, recognizing that some events have a strong affective polarity is essential to understand the discourse and the emotional states of the affected people. However, current NLP systems mainly depend on sentiment analysis tools, which fail to recognize many events that are implicitly affective based on human knowledge about the event itself and cultural norms. Our goal is to automatically acquire knowledge of stereotypically positive and negative events from personal blogs. Our research creates an event context graph from a large collection of blog posts and uses a sentiment classifier and semi-supervised label propagation algorithm to discover affective events. We explore several graph configurations that propagate affective polarity across edges using local context, discourse proximity, and event-event co-occurrence. We then harvest highly affective events from the graph and evaluate the agreement of the polarities with human judgements.", "title": "" }, { "docid": "4c563b09a10ce0b444edb645ce411d42", "text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. 
On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic", "title": "" }, { "docid": "28d1e4683ea4a3261f6a8a24f2870479", "text": "Memetic computation is a paradigm that uses the notion of meme(s) as units of information encoded in computational representations for the purpose of problem-solving. It covers a plethora of potentially rich meme-inspired computing methodologies, frameworks and operational algorithms including simple hybrids, adaptive hybrids and memetic automaton. In this paper, a comprehensive multi-facet survey of recent research in memetic computation is presented.", "title": "" }, { "docid": "13cbca0e2780a95c1e9d4928dc9d236c", "text": "Matching user accounts can help us build better users’ profiles and benefit many applications. It has attracted much attention from both industry and academia. Most of existing works are mainly based on rich user profile attributes. However, in many cases, user profile attributes are unavailable, incomplete or unreliable, either due to the privacy settings or just because users decline to share their information. This makes the existing schemes quite fragile. Users often share their activities on different social networks. This provides an opportunity to overcome the above problem. We aim to address the problem of user identification based on User Generated Content (UGC). We first formulate the problem of user identification based on UGCs and then propose a UGC-based user identification model. A supervised machine learning based solution is presented. It has three steps: firstly, we propose several algorithms to measure the spatial similarity, temporal similarity and content similarity of two UGCs; secondly, we extract the spatial, temporal and content features to exploit these similarities; afterwards, we employ the machine learning method to match user accounts, and conduct the experiments on three ground truth datasets. The results show that the proposed method has given excellent performance with F1 values reaching 89.79%, 86.78% and 86.24% on three ground truth datasets, respectively. This work presents the possibility of matching user accounts with high accessible online data. © 2018 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2c3bdb3dc3bf4aedc36a49e82a2dca50", "text": "We report the implementation of a text input application (speller) based on the P300 event related potential. We obtain high accuracies by using an SVM classifier and a novel feature. These techniques enable us to maintain fast performance without sacrificing the accuracy, thus making the speller usable in an online mode. 
In order to further improve the usability, we perform various studies on the data with a view to minimizing the training time required. We present data collected from nine healthy subjects, along with the high accuracies (of the order of 95% or more) measured online. We show that the training time can be further reduced by a factor of two from its current value of about 20 min. High accuracy, fast learning, and online performance make this P300 speller a potential communication tool for severely disabled individuals, who have lost all other means of communication and are otherwise cut off from the world, provided their disability does not interfere with the performance of the speller.", "title": "" } ]
scidocsrr
f799f16acca62915586c7f31513f16d3
Big data technologies and Management: What conceptual modeling can do
[ { "docid": "c41efa28806b3ac3d2b23d9e52b85193", "text": "The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.", "title": "" }, { "docid": "659f362b1f30c32cdaca90e3141596fb", "text": "Purpose – The paper aims to focus on so-called NoSQL databases in the context of cloud computing. Design/methodology/approach – Architectures and basic features of these databases are studied, particularly their horizontal scalability and concurrency model, that is mostly weaker than ACID transactions in relational SQL-like database systems. Findings – Some characteristics like a data model and querying capabilities of NoSQL databases are discussed in more detail. Originality/value – The paper shows vary different data models and query possibilities in a common terminology enabling comparison and categorization of NoSQL databases.", "title": "" } ]
[ { "docid": "104cf54cfa4bc540b17176593cdb77d8", "text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.", "title": "" }, { "docid": "30bad49dc45651010b49e78951827f6a", "text": "In this paper we present a case study of frequent surges of unusually high rail-to-earth potential values at Taipei Rapid Transit System. The rail potential values observed and the resulting stray current flow associated with the diode-ground DC traction system during operation are contradictory to the moderate values on which the grounding of the DC traction system design was based. Thus we conducted both theoretical study and field measurements to obtain better understanding of the phenomenon, and to develop a more accurate algorithm for computing the rail-to-earth potential of the diode-ground DC traction systems.", "title": "" }, { "docid": "3bfc24c80cc7ba261ef6817a21ff5803", "text": "There is a concerted understanding of the ability of root exudates to influence the structure of rhizosphere microbial communities. However, our knowledge of the connection between plant development, root exudation and microbiome assemblage is limited. Here, we analyzed the structure of the rhizospheric bacterial community associated with Arabidopsis at four time points corresponding to distinct stages of plant development: seedling, vegetative, bolting and flowering. Overall, there were no significant differences in bacterial community structure, but we observed that the microbial community at the seedling stage was distinct from the other developmental time points. At a closer level, phylum such as Acidobacteria, Actinobacteria, Bacteroidetes, Cyanobacteria and specific genera within those phyla followed distinct patterns associated with plant development and root exudation. These results suggested that the plant can select a subset of microbes at different stages of development, presumably for specific functions. Accordingly, metatranscriptomics analysis of the rhizosphere microbiome revealed that 81 unique transcripts were significantly (P<0.05) expressed at different stages of plant development. For instance, genes involved in streptomycin synthesis were significantly induced at bolting and flowering stages, presumably for disease suppression. 
We surmise that plants secrete blends of compounds and specific phytochemicals in the root exudates that are differentially produced at distinct stages of development to help orchestrate rhizosphere microbiome assemblage.", "title": "" }, { "docid": "b6c306106133d23fc992fd5e88289204", "text": "Direct instruction approaches, as well as the design processes that support them, have been criticized for failing to reflect contemporary research and theory in teaching, learning, and technology. Learning systems are needed that encourage divergent reasoning, problem solving, and critical thinking. Student-centered learning environments have been touted as a means to support such processes. With the emergence of technology, many barriers to implementing innovative alternatives may be overcome. The purposes of this paper are to review and critically analyze research and theory related to technology-enhanced studentcentered learning environments and to identify their foundations and assumptions.", "title": "" }, { "docid": "061ac4487fba7837f44293a2d20b8dd9", "text": "This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and, then, use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information that explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.", "title": "" }, { "docid": "172e7d3c18a1b6f2025f3f13719067d5", "text": "Investigating the nature of system intrusions in large distributed systems remains a notoriously difficult challenge. While monitoring tools (e.g., Firewalls, IDS) provide preliminary alerts through easy-to-use administrative interfaces, attack reconstruction still requires that administrators sift through gigabytes of system audit logs stored locally on hundreds of machines. At present, two fundamental obstacles prevent synergy between system-layer auditing and modern cluster monitoring tools: 1) the sheer volume of audit data generated in a data center is prohibitively costly to transmit to a central node, and 2) systemlayer auditing poses a “needle-in-a-haystack” problem, such that hundreds of employee hours may be required to diagnose a single intrusion. This paper presents Winnower, a scalable system for auditbased cluster monitoring that addresses these challenges. Our key insight is that, for tasks that are replicated across nodes in a distributed application, a model can be defined over audit logs to succinctly summarize the behavior of many nodes, thus eliminating the need to transmit redundant audit records to a central monitoring node. Specifically, Winnower parses audit records into provenance graphs that describe the actions of individual nodes, then performs grammatical inference over individual graphs using a novel adaptation of Deterministic Finite Automata (DFA) Learning to produce a behavioral model of many nodes at once. This provenance model can be efficiently transmitted to a central node and used to identify anomalous events in the cluster. We have implemented Winnower for Docker Swarm container clusters and evaluate our system against real-world applications and attacks. 
We show that Winnower dramatically reduces storage and network overhead associated with aggregating system audit logs, by as much as 98%, without sacrificing the important information needed for attack investigation. Winnower thus represents a significant step forward for security monitoring in distributed systems.", "title": "" }, { "docid": "344d10f48d2d40c66e2160df6ffe035a", "text": "Trichostasis spinulosa is a common disorder of follicular hyperkeratosis that is often confused clinically with similar disorders, such as keratosis pilaris and eruptive vellus hair cysts. Six patients from the UTMB dermatology clinic who had trichostasis spinulosa are presented. Two of the six also had keratosis pilaris and one had eruptive vellus hair cysts. The present study was undertaken to compare and contrast the clinical presentation and histopathologic appearance of these three disorders. The results of the study and review of the literature revealed differences in distribution of lesions and microscopic appearance of follicular and histopathologic material.", "title": "" }, { "docid": "406b1d13ecc9c9097079c8a24c15a332", "text": "We propose an automated breast cancer triage CAD system using machine vision on low-cost, portable ultrasound imaging devices. We demonstrate that the triage CAD software can effectively analyze images captured by minimally-trained operators and output one of three assessments - benign, probably benign (6-month follow-up recommended) and suspicious (biopsy recommended). This system opens up the possibility of offering practical, cost-effective breast cancer diagnosis for symptomatic women in economically developing countries.", "title": "" }, { "docid": "1d483a47ff5c735fd0ee78dfdb9bd4f0", "text": "This paper is concerned with graphical criteria that can be used to solve the problem of identifying causal effects from nonexperimental data in a causal Bayesian network structure, i.e., a directed acyclic graph that represents causal relationships. We first review Pearl’s work on this topic [Pearl, 1995], in which several useful graphical criteria are presented. Then we present a complete algorithm [Huang and Valtorta, 2006b] for the identifiability problem. By exploiting the completeness of this algorithm, we prove that the three basic do-calculus rules that Pearl presents are complete, in the sense that, if a causal effect is identifiable, there exists a sequence of applications of the rules of the do-calculus that transforms the causal effect formula into a formula that only includes observational quantities.", "title": "" }, { "docid": "1a23c0ed6aea7ba2cf4d3021de4cfa8b", "text": "This article focuses on the traffic coordination problem at traffic intersections. We present a decentralized coordination approach, combining optimal control with model-based heuristics. We show how model-based heuristics can lead to low-complexity solutions that are suitable for a fast online implementation, and analyze its properties in terms of efficiency, feasibility and optimality. Finally, simulation results for different scenarios are also presented.", "title": "" }, { "docid": "f3f3aec72786299f3ef885e4b862ca2b", "text": "This paper presents the method that underlies our submission to the untrimmed video classification task of ActivityNet Challenge 2016. We follow the basic pipeline of temporal segment networks [16] and further raise the performance via a number of other techniques.
Specifically, we use the latest deep model architecture, e.g., ResNet and Inception V3, and introduce new aggregation schemes (top-k and attention-weighted pooling). Additionally, we incorporate the audio as a complementary channel, extracting relevant information via a CNN applied to the spectrograms. With these techniques, we derive an ensemble of deep models, which, together, attains a high classification accuracy (mAP 93.23%) on the testing set and secured the first place in the challenge.", "title": "" }, { "docid": "1f5708382f0c4f70f500253554a8b3cb", "text": "The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors.", "title": "" }, { "docid": "b79536c9e2207ffc82e700d24ea27682", "text": "Active learning strategies respond to the costly labelling task in a supervised classification by selecting the most useful unlabelled examples in training a predictive model. Many conventional active learning algorithms focus on refining the decision boundary, rather than exploring new regions that can be more informative. In this setting, we propose a sequential algorithm named EG-Active that can improve any Active learning algorithm by an optimal random exploration. Experimental results show a statistically significant and appreciable improvement in the performance of our new approach over the existing active feedback methods.", "title": "" }, { "docid": "0ce57a66924192a50728fb67023e0ed2", "text": "Most studies on TCP over multi-hop wireless ad hoc networks have only addressed the issue of performance degradation due to temporarily broken routes, which results in TCP inability to distinguish between losses due to link failures or congestion. This problem tends to become more serious as network mobility increases. In this work, we tackle the equally important capture problem to which there has been little or no solution, and is present mostly in static and low mobility multihop wireless networks. This is a result of the interplay between the MAC layer and TCP backoff policies, which causes nodes to unfairly capture the wireless shared medium, hence preventing neighboring nodes to access the channel.
This has been shown to have major negative effects on TCP performance comparable to the impact of mobility. We propose a novel algorithm, called COPAS (COntention-based PAth Selection), which incorporates two mechanisms to enhance TCP performance by avoiding capture conditions. First, it uses disjoint forward (sender to receiver for TCP data) and reverse (receiver to sender for TCP ACKs) paths in order to minimize the conflicts of TCP data and ACK packets. Second, COPAS employs a dynamic contentionbalancing scheme where it continuously monitors and changes forward and reverse paths according to the level of MAC layer contention, hence minimizing the likelihood of capture. Through extensive simulation, COPAS is shown to improve TCP throughput by up to 90% while keeping routing overhead low.", "title": "" }, { "docid": "ca94b1bb1f4102ed6b4506441b2431fc", "text": "It is often a difficult task to accurately segment images with intensity inhomogeneity, because most of representative algorithms are region-based that depend on intensity homogeneity of the interested object. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances in which a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3 and 7T magnetic resonance images. Extensive evaluation on synthetic and real-images demonstrate the superiority of the proposed method over other representative algorithms.", "title": "" }, { "docid": "68cb8836a07846d19118d21383f6361a", "text": "Background: Dental rehabilitation of partially or totally edentulous patients with oral implants has become a routine treatment modality in the last decades, with reliable long-term results. However, unfavorable local conditions of the alveolar ridge, due to atrophy, periodontal disease, and trauma sequelae may provide insufficient bone volume or unfavorable vertical, horizontal, and sagittal intermaxillary relationships, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint. The aim of the current review is to discuss the different strategies for reconstruction of the alveolar ridge defect for implant placement. Study design: The study design includes a literature review of the articles that address the association between Reconstruction of Mandibular Alveolar Ridge Defects and Implant Placement. Results: Yet, despite an increasing number of publications related to the correction of deficient alveolar ridges, much controversy still exists concerning which is the more suitable and reliable technique. This is often because the publications are of insufficient methodological quality (inadequate sample size, lack of well-defined exclusion and inclusion criteria, insufficient follow-up, lack of well-defined success criteria, etc.). 
Conclusion: On the basis of available data it is difficult to conclude that a particular surgical procedure offered better outcome as compared to another. Hence the practical use of the available bone augmentation procedures for dental implants depends on the clinician’s preference in general and the clinical findings in the patient in particular. Surgical techniques that reduce trauma, preserve and augment the alveolar ridge represent key areas in the goal to optimize implant results.", "title": "" }, { "docid": "bc83ea7c70a901d4b22c3aa13386e522", "text": "Code-switching (CS) refers to a linguistic phenomenon where a speaker uses different languages in an utterance or between alternating utterances. In this work, we study end-to-end (E2E) approaches to the Mandarin-English code-switching speech recognition (CSSR) task. We first examine the effectiveness of using data augmentation and byte-pair encoding (BPE) subword units. More importantly, we propose a multitask learning recipe, where a language identification task is explicitly learned in addition to the E2E speech recognition task. Furthermore, we introduce an efficient word vocabulary expansion method for language modeling to alleviate data sparsity issues under the code-switching scenario. Experimental results on the SEAME data, a Mandarin-English CS corpus, demonstrate the effectiveness of the proposed methods.", "title": "" }, { "docid": "05afb7a1f2ae89344c02f80e23a0398e", "text": "References 1. Guan, P., Weiss, A., Balan, A., Black, M.J.: Estimating human shape and pose from a single image. ICCV 2009 2. Pishchulin, L., Insafutdinov, E., Tang, S., Andres, B., Andriluka, M., Gehler, P., Schiele, B.: DeepCut: Joint subset partition and labeling for multi person pose estimation. CVPR 2016 3. Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M.J.: SMPL: A skinned multi-person linear model. SIGGRAPH Asia 2015 4. Akhter, I., Black, M.J.: Pose-conditioned joint angle limits for 3D human pose reconstruction. CVPR 2015 5. Ramakrishna, V., Kanade, T., Sheikh, Y.: Reconstructing 3D Human Pose from 2D Image Landmarks. ECCV 2012 6. Zhou, X., Zhu, M., Leonardos, S., Derpanis, K., Daniilidis, K.: Sparse representation for 3D shape estimation: A convex relaxation approach. CVPR. 2015 Data: Projected joints from 1000 synthetic 3D models + noise.", "title": "" }, { "docid": "2d774ec62cdac08997cb8b86e73fe015", "text": "This paper focuses on modeling resolving and simulations of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.", "title": "" }, { "docid": "171b5d589cc8751cb0516a5f6898724e", "text": "Mumps Update [October 2017]: The Healthcare Infection Control Practices Advisory Committee (HICPAC) voted to change the recommendation of isolation for persons with mumps from 9 days to 5 days based on a 2008 MMWR report. (https://www.cdc.gov/mmwr/preview/mmwrhtml/mm5740a3.htm accessed September 2018). Ebola Virus Disease Update [August 2014]: The recommendations in this guideline for Ebola has been superseded by these CDC documents: • Infection Prevention and Control Recommendations for Hospitalized Patients with Known or Suspected Ebola Virus Disease in U.S. 
Hospitals (https://www.cdc.gov/vhf/ebola/clinicians/evd/infection-control.html accessed September 2018) • Interim Guidance for Environmental Infection Control in Hospitals for Ebola Virus (https://www.cdc.gov/vhf/ebola/clinicians/cleaning/hospitals.html accessed September 2018) See CDC’s Ebola Virus Disease website (https://www.cdc.gov/vhf/ebola/ accessed September 2018) for current information on how Ebola virus is transmitted.", "title": "" } ]
scidocsrr
ae408b6340eee0c0a75498379482cc1a
Land Use Classification in Remote Sensing Images by Convolutional Neural Networks
[ { "docid": "698fb992c5ff7ecc8d2e153f6b385522", "text": "We investigate bag-of-visual-words (BOVW) approaches to land-use classification in high-resolution overhead imagery. We consider a standard non-spatial representation in which the frequencies but not the locations of quantized image features are used to discriminate between classes analogous to how words are used for text document classification without regard to their order of occurrence. We also consider two spatial extensions, the established spatial pyramid match kernel which considers the absolute spatial arrangement of the image features, as well as a novel method which we term the spatial co-occurrence kernel that considers the relative arrangement. These extensions are motivated by the importance of spatial structure in geographic data.\n The methods are evaluated using a large ground truth image dataset of 21 land-use classes. In addition to comparisons with standard approaches, we perform extensive evaluation of different configurations such as the size of the visual dictionaries used to derive the BOVW representations and the scale at which the spatial relationships are considered.\n We show that even though BOVW approaches do not necessarily perform better than the best standard approaches overall, they represent a robust alternative that is more effective for certain land-use classes. We also show that extending the BOVW approach with our proposed spatial co-occurrence kernel consistently improves performance.", "title": "" }, { "docid": "b6da971f13c1075ce1b4aca303e7393f", "text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.", "title": "" } ]
[ { "docid": "d02e87a00aaf29a86cf94ad0c539fd0d", "text": "Future advanced driver assistance systems will contain multiple sensors that are used for several applications, such as highly automated driving on freeways. The problem is that the sensors are usually asynchronous and their data possibly out-of-sequence, making fusion of the sensor data non-trivial. This paper presents a novel approach to track-to-track fusion for automotive applications with asynchronous and out-of-sequence sensors using information matrix fusion. This approach solves the problem of correlation between sensor data due to the common process noise and common track history, which eliminates the need to replace the global track estimate with the fused local estimate at each fusion cycle. The information matrix fusion approach is evaluated in simulation and its performance demonstrated using real sensor data on a test vehicle designed for highly automated driving on freeways.", "title": "" }, { "docid": "1971e12a6792991f77f59cbb42dedb32", "text": "The use of deep learning to solve the problems in literary arts has been a recent trend that gained a lot of attention and automated generation of music has been an active area. This project deals with the generation of music using raw audio files in the frequency domain relying on various LSTM architectures. Fully connected and convolutional layers are used along with LSTM’s to capture rich features in the frequency domain and increase the quality of music generated. The work is focused on unconstrained music generation and uses no information about musical structure(notes or chords) to aid learning.The music generated from various architectures are compared using blind fold tests. Using the raw audio to train models is the direction to tapping the enormous amount of mp3 files that exist over the internet without requiring the manual effort to make structured MIDI files. Moreover, not all audio files can be represented with MIDI files making the study of these models an interesting prospect to the future of such models.", "title": "" }, { "docid": "f071a3d699ba4b3452043b6efb14b508", "text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. 
Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.", "title": "" }, { "docid": "bb72e4d6f967fb88473756cdcbb04252", "text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.", "title": "" }, { "docid": "1c415034b3e9e0e2013624c69c386f13", "text": "For a microgrid (MG) to participate in a real-time and demand-side bidding market, high-level control strategies aiming at optimizing the operation of the MG are necessary. One of the difficulties for research of a competitive MG power market is the absence of efficient computational tools. Although many commercial power system simulators are available, these power system simulators are usually not directly applicable to solve the optimal power dispatch problem for an MG power market and to perform MG power-flow study. This paper analyzes the typical MG market policies and investigates how these policies can be converted in such a way that one can use commercial power system software for MG power market study. The paper also develops a mechanism suitable for the power-flow study of an MG containing inverter-interfaced distributed energy sources. The extensive simulation analyses are conducted for grid-tied and islanded operations of a benchmark MG network.", "title": "" }, { "docid": "409f3b2768a8adf488eaa6486d1025a2", "text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. 
Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.", "title": "" }, { "docid": "a014644ccccb2a06d820ee975cfdfa88", "text": "Analyzing customer feedback is the best way to channelize the data into new marketing strategies that benefit entrepreneurs as well as customers. Therefore an automated system which can analyze the customer behavior is in great demand. Users may write feedbacks in any language, and hence mining appropriate information often becomes intractable. Especially in a traditional feature-based supervised model, it is difficult to build a generic system as one has to understand the concerned language for finding the relevant features. In order to overcome this, we propose deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based approaches that do not require handcrafting of features. We evaluate these techniques for analyzing customer feedback sentences on four languages, namely English, French, Japanese and Spanish. Our empirical analysis shows that our models perform well in all the four languages on the setups of IJCNLP Shared Task on Customer Feedback Analysis. Our model achieved the second rank in French, with an accuracy of 71.75% and third ranks for all the other languages.", "title": "" }, { "docid": "23eb979ec3e17db2b162b659e296a10e", "text": "The authors would like to thank the Marketing Science Institute for their generous assistance in funding this research. We would also like to thank Claritas for providing us with data. We are indebted to Vincent Bastien, former CEO of Louis Vuitton, for the time he has spent with us critiquing our framework.", "title": "" }, { "docid": "31d055afdf6d40a5a2e897e9a78a0867", "text": "Photoluminescent graphene quantum dots (GQDs) have received enormous attention because of their unique chemical, electronic and optical properties. Here a series of GQDs were synthesized under hydrothermal processes in order to investigate the formation process and optical properties of N-doped GQDs. Citric acid (CA) was used as a carbon precursor and self-assembled into sheet structure in a basic condition and formed N-free GQD graphite framework through intermolecular dehydrolysis reaction. N-doped GQDs were prepared using a series of N-containing bases such as urea. Detailed structural and property studies demonstrated the formation mechanism of N-doped GQDs for tunable optical emissions. Hydrothermal conditions promote formation of amide between -NH₂ and -COOH with the presence of amine in the reaction. The intramoleculur dehydrolysis between neighbour amide and COOH groups led to formation of pyrrolic N in the graphene framework. Further, the pyrrolic N transformed to graphite N under hydrothermal conditions. N-doping results in a great improvement of PL quantum yield (QY) of GQDs. 
By optimized reaction conditions, the highest PL QY (94%) of N-doped GQDs was obtained using CA as a carbon source and ethylene diamine as a N source. The obtained N-doped GQDs exhibit an excitation-independent blue emission with single exponential lifetime decay.", "title": "" }, { "docid": "45712feb68b83cc054027807c1a30130", "text": "A solar energy semiconductor cooling box is presented in the paper. The cooling box is compact and easy to carry, can be made a special refrigeration unit which is smaller according to user needs. The characteristics of the cooling box are its simple use and maintenance, safe performance, decentralized power supply, convenient energy storage, no environmental pollution, and so on. In addition, compared with the normal mechanical refrigeration, the semiconductor refrigeration system which makes use of Peltier effect does not require pumps, compressors and other moving parts, and so there is no wear and noise. It does not require refrigerant so it will not produce environmental pollution, and it also eliminates the complex transmission pipeline. The concrete realization form of power are “heat - electric - cold”, “light - electric - cold”, “light - heat - electric - cold”. In order to achieve the purpose of cooling, solar cells generate electricity to drive the semiconductor cooling devices. The working principle is mainly photovoltaic effect and the Peltier effect.", "title": "" }, { "docid": "288ce84b9dd3244cce2044d53f35cd4b", "text": "Margaret-Anne Storey University of Victoria Victoria, BC, Canada [email protected] Abstract Modern software developers rely on an extensive set of social media tools and communication channels. The adoption of team communication platforms has led to the emergence of conversation-based tools and integrations, many of which are chatbots. Understanding how software developers manage their complex constellation of collaborators in conjunction with the practices and tools they use can bring valuable insights into socio-technical collaborative work in software development and other knowledge work domains.", "title": "" }, { "docid": "2afb992058eb720ff0baf4216e3a22c2", "text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.", "title": "" }, { "docid": "e5f2e7b7dfdfaee33a2187a0a7183cfb", "text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. 
We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.", "title": "" }, { "docid": "4ef20b58ce1418e25e503d929798b0e4", "text": "The findings of 54 research studies were integrated through meta-analysis to determine the effects of calculators on student achievement and attitude levels. Effect sizes were generated through Glassian techniques of meta-analysis, and Hedges and Olkin’s (1985) inferential statistical methods were used to test the significance of effect size data. Results revealed that students’ operational skills and problem-solving skills improved when calculators were an integral part of testing and instruction. The results for both skill types were mixed when calculators were not part of assessment, but in all cases, calculator use did not hinder the development of mathematical skills. Students using calculators had better attitudes toward mathematics than their noncalculator counterparts. Further research is needed in the retention of mathematics skills after instruction and transfer of skills to other mathematics-related subjects.", "title": "" }, { "docid": "04c34a13eecc8f652e3231fcc8cb9aaa", "text": "C. Midgley et al. (2001) raised important questions about the effects of performance-approach goals. The present authors disagree with their characterization of the research findings and implications for theory. They discuss 3 reasons to revise goal theory: (a) the importance of separating approach from avoidance strivings, (b) the positive potential of performance-approach goals, and (c) identification of the ways performance-approach goals can combine with mastery goals to promote optimal motivation. 
The authors review theory and research to substantiate their claim that goal theory is in need of revision, and they endorse a multiple goal perspective. The revision of goal theory is underway and offers a more complex, but necessary, perspective on important issues of motivation, learning, and achievement.", "title": "" }, { "docid": "6b6285cd8512a2376ae331fda3fedf20", "text": "The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.", "title": "" }, { "docid": "598f73160eae35c94d2f77a7b9c0ecb3", "text": "Homocysteine (HCY) is a degradation product of the methionine pathway. The B vitamins, in particular vitamin B12 and folate, are the primary nutritional determinant of HCY levels and therefore their deficiencies result in hyperhomocysteinaemia (HHCY). Prevalence of hyperhomocysteinemia (HHCY) and related dietary deficiencies in B vitamins and folate increase with age and have been related to osteoporosis and abnormal development of epiphyseal cartilage and bone in rodents. Here we provide a review of experimental and population studies. The negative effects of HHCY and/or B vitamins and folate deficiencies on bone formation and remodeling are documented by cell models, including primary osteoblasts, osteoclast and bone progenitor cells as well as by animal and human studies. However, underlying pathophysiological mechanisms are complex and remain poorly understood. Whether these associations are the direct consequences of impaired one carbon metabolism is not clarified and more studies are still needed to translate these findings to human population. To date, the evidence is limited and somewhat conflicting, however further trials in groups most vulnerable to impaired one carbon metabolism are required.", "title": "" }, { "docid": "ffc521b597ab5332c3541a06a01c5531", "text": "This research deals with a vital and important issue in computer world. It is concerned with the software management processes that examine the area of software development through the development models, which are known as software development life cycle. It represents five of the development models namely, waterfall, Iteration, V-shaped, spiral and Extreme programming. These models have advantages and disadvantages as well. Therefore, the main objective of this research is to represent different models of software development and make a comparison between them to show the features and defects of each model.", "title": "" }, { "docid": "f57fbb53b069fe60d7dcd3d450fd3783", "text": "Host-based security tools such as anti-virus and intrusion detection systems are not adequately protected on today's computers. 
Malware is often designed to immediately disable any security tools upon installation, rendering them useless. While current research has focused on moving these vulnerable security tools into an isolated virtual machine, this approach cripples security tools by preventing them from doing active monitoring. This paper describes an architecture that takes a hybrid approach, giving security tools the ability to do active monitoring while still benefiting from the increased security of an isolated virtual machine. We discuss the architecture and a prototype implementation that can process hooks from a virtual machine running Windows XP on Xen. We conclude with a security analysis and show the performance of a single hook to be 28 μsecs in the best case.", "title": "" } ]
scidocsrr
d1c7dc76c0dbaff5997a6593a952d6de
Multi-label hypothesis reuse
[ { "docid": "bcaa7d61466f21757226ef0239f14b5b", "text": "Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named Mlknn is presented, which is derived from the traditional k-Nearest Neighbor (kNN) algorithm. In detail, for each unseen instance, its k nearest neighbors in the training set are firstly identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that Ml-knn achieves superior performance to some well-established multi-label learning algorithms.", "title": "" }, { "docid": "0f10aa71d58858ea1d8d7571a7cbfe22", "text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.", "title": "" }, { "docid": "49c7d088e4122831eddfe864a44b69ca", "text": "Common approaches to multi-label classification learn independent classifiers for each category, and employ ranking or thresholding schemes for classification. Because they do not exploit dependencies between labels, such techniques are only well-suited to problems in which categories are independent. However, in many domains labels are highly interdependent. This paper explores multi-label conditional random field (CRF)classification models that directly parameterize label co-occurrences in multi-label classification. Experiments show that the models outperform their single-label counterparts on standard text corpora. Even when multi-labels are sparse, the models improve subset classification error by as much as 40%.", "title": "" } ]
[ { "docid": "e84ca42f96cca0fe3ed7c70d90554a8d", "text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. Experiment results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for cross-language citation recommendation task.", "title": "" }, { "docid": "600d04e1d78084b36c9fb573fb9d699a", "text": "A mobile robot is designed to pick and place the objects through voice commands. This work would be practically useful to wheelchair bound persons. The pick and place robot is designed in a way that it is able to help the user to pick up an item that is placed at two different levels using an extendable arm. The robot would move around to pick up an item and then pass it back to the user or to a desired location as told by the user. The robot control is achieved through voice commands such as left, right, straight, etc. in order to help the robot to navigate around. Raspberry Pi 2 controls the overall design with 5 DOF servo motor arm. The webcam is used to navigate around which provides live streaming using a mobile application for the user to look into. Results show the ability of the robot to pick and place the objects up to a height of 23.5cm through proper voice commands.", "title": "" }, { "docid": "a47d9d5ddcd605755eb60d5499ad7f7a", "text": "This paper presents a 14MHz Class-E power amplifier to be used for wireless power transmission. The Class-E power amplifier was built to consider the VSWR and the frequency bandwidth. Tw o kinds of circuits were designed: the high and low quality factor amplifiers. The low quality factor amplifier is confirmed to have larger bandwidth than the high quality factor amplifier. It has also possessed less sensitive characteristics. Therefore, the low quality factor amplifier circuit was adopted and tested. The effect of gate driving input source is studied. The efficiency of the Class-E amplifier reaches 85.5% at 63W.", "title": "" }, { "docid": "5a3b8a2ec8df71956c10b2eb10eabb99", "text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. 
We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.", "title": "" }, { "docid": "a0129e90268bd59895d3de66f5b04d7b", "text": "There is an emerging trend in higher education for the adoption of massive open online courses (MOOCs). However, despite this interest in learning at scale, there has been limited work investigating the impact MOOCs can play on student learning. In this study, we adopt a novel approach, using language and discourse as a tool to explore its association with two established measures related to learning: traditional academic performance and social centrality. We demonstrate how characteristics of language diagnostically reveal the performance and social position of learners as they interact in a MOOC. We use CohMetrix, a theoretically grounded, computational linguistic modeling tool, to explore students’ forum postings across five potent discourse dimensions. Using a Social Network Analysis (SNA) methodology, we determine learners’ social centrality. Linear mixed-effect modeling is used for all other analyses to control for individual learner and text characteristics. The results indicate that learners performed significantly better when they engaged in more expository style discourse, with surface and deep level cohesive integration, abstract language, and simple syntactic structures. However, measures of social centrality revealed a different picture. Learners garnered a more significant and central position in their social network when they engaged with more narrative style discourse with less overlap between words and ideas, simpler syntactic structures and abstract words. Implications for further research and practice are discussed regarding the misalignment between these two learning-related outcomes.", "title": "" }, { "docid": "532463ff1e5e91a2f9054cb86dcfa654", "text": "During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-di↵erence time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied for various application contexts including those requiring to model light/matter interactions on the nanoscale. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.", "title": "" }, { "docid": "7dfbb5e01383b5f50dbeb87d55ceb719", "text": "In recent years, a number of network forensics techniques have been proposed to investigate the increasing number of cybercrimes. Network forensics techniques assist in tracking internal and external network attacks by focusing on inherent network vulnerabilities and communication mechanisms. However, investigation of cybercrime becomes more challenging when cyber criminals erase the traces in order to avoid detection. Therefore, network forensics techniques employ mechanisms to facilitate investigation by recording every single packet and event that is disseminated into the network. 
As a result, it allows identification of the origin of the attack through reconstruction of the recorded data. In the current literature, network forensics techniques are studied on the basis of forensic tools, process models and framework implementations. However, a comprehensive study of cybercrime investigation using network forensics frameworks along with a critical review of present network forensics techniques is lacking. In other words, our study is motivated by the diversity of digital evidence and the difficulty of addressing numerous attacks in the network using network forensics techniques. Therefore, this paper reviews the fundamental mechanism of network forensics techniques to determine how network attacks are identified in the network. Through an extensive review of related literature, a thematic taxonomy is proposed for the classification of current network forensics techniques based on its implementation as well as target data sets involved in the conducting of forensic investigations. The critical aspects and significant features of the current network forensics techniques are investigated using qualitative analysis technique. We derive significant parameters from the literature for discussing the similarities and differences in existing network forensics techniques. The parameters include framework nature, mechanism, target dataset, target instance, forensic processing, time of investigation, execution definition, and objective function. Finally, open research challenges are discussed in network forensics to assist researchers in selecting the appropriate domains for further research and obtain ideas for exploring optimal techniques for investigating cyber-crimes. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f133afb99d9d1f44c03e542db05b3d1e", "text": "Recently popularized graph neural networks achieve the state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches. These architectures alternate between a propagation layer that aggregates the hidden states of the local neighborhood and a fully-connected layer. Perhaps surprisingly, we show that a linear model, that removes all the intermediate fullyconnected layers, is still able to achieve a performance comparable to the state-of-the-art models. This significantly reduces the number of parameters, which is critical for semi-supervised learning where number of labeled examples are small. This in turn allows a room for designing more innovative propagation layers. Based on this insight, we propose a novel graph neural network that removes all the intermediate fully-connected layers, and replaces the propagation layers with attention mechanisms that respect the structure of the graph. The attention mechanism allows us to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. In a number of experiments on benchmark citation networks datasets, we demonstrate that our approach outperforms competing methods. By examining the attention weights among neighbors, we show that our model provides some interesting insights on how neighbors influence each other.", "title": "" }, { "docid": "6c4c56fcc697512105571bbe5103f7ab", "text": "Surgical anaesthesia with haemodynamic stability and opioid-free analgesia in fragile patients can theoretically be provided with lumbosacral plexus blockade. 
We compared a novel ultrasound-guided suprasacral technique for blockade of the lumbar plexus and the lumbosacral trunk with ultrasound-guided blockade of the lumbar plexus. The objective was to investigate whether the suprasacral technique is equally effective for anaesthesia of the terminal lumbar plexus nerves compared with a lumbar plexus block, and more effective for anaesthesia of the lumbosacral trunk. Twenty volunteers were included in a randomised crossover trial comparing the new suprasacral with a lumbar plexus block. The primary outcome was sensory dermatome anaesthesia of L2-S1. Secondary outcomes were peri-neural analgesic spread estimated with magnetic resonance imaging, sensory blockade of dermatomes L2-S3, motor blockade, volunteer discomfort, arterial blood pressure change, block performance time, lidocaine pharmacokinetics and complications. Only one volunteer in the suprasacral group had sensory blockade of all dermatomes L2-S1. Epidural spread was verified by magnetic resonance imaging in seven of the 34 trials (two suprasacral and five lumbar plexus blocks). Success rates of the sensory and motor blockade were 88-100% for the major lumbar plexus nerves with the suprasacral technique, and 59-88% with the lumbar plexus block (p > 0.05). Success rate of motor blockade was 50% for the lumbosacral trunk with the suprasacral technique and zero with the lumbar plexus block (p < 0.05). Both techniques are effective for blockade of the terminal nerves of the lumbar plexus. The suprasacral parallel shift technique is 50% effective for blockade of the lumbosacral trunk.", "title": "" }, { "docid": "aee250663a05106c4c0fad9d0f72828c", "text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.", "title": "" }, { "docid": "6ecca3e76a4c04db9a77f695d24ae141", "text": "Cette thèse aborde de façon générale les algorithmes d'apprentissage, avec un intérêt tout particulier pour les grandes bases de données. 
Summary. This thesis aims to address machine learning in general, with a particular focus on large models and large databases. After introducing the learning problem in a formal way, we first review several important machine learning algorithms, particularly Multi Layer Perceptrons, Mixture of Experts and Support Vector Machines. We then present a training method for Support Vector Machines, adapted to reasonably large datasets. However, the training of such a model is still intractable on very large databases. We thus propose a divide and conquer approach based on a kind of Mixture of Experts in order to break up the training problem into small pieces, while keeping good generalization performance. This mixture model can be applied to any kind of existing machine learning algorithm. Even though it performs well in practice, the major drawback of this algorithm is the number of hyper-parameters to tune, which makes it difficult to use. For this reason, we then focus on improving the training of Multi Layer Perceptrons, which are much easier to use and better suited to large databases than Support Vector Machines. Finally, we show that the margin idea that gives Support Vector Machines their strength can be applied to a certain class of Multi Layer Perceptrons, leading to a very fast algorithm with very good generalization performance.", "title": "" }, { "docid": "90ef67a5bff849d7abf8a473ef4cbf62", "text": "In this paper, we propose a semi-supervised learning method where we train two neural networks in a multi-task fashion: a target network and a confidence network. The target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated. We propose to weight the gradient updates to the target network using the scores provided by the second confidence network, which is trained on a small amount of supervised data. Thus we prevent the weight updates computed from noisy labels from harming the quality of the target network model. We evaluate our learning strategy on two different tasks: document ranking and sentiment classification.
The results demonstrate that our approach not only enhances the performance compared to the baselines but also speeds up the learning process from weak labels.", "title": "" }, { "docid": "642b43cea0f417cf24fccf33c658279f", "text": "Harlequin ichthyosis (HI) is an extremely rare genetic skin disorder and the most severe form of a group of disorders, which includes lamellar ichthyosis and congenital ichthyosiform erythroderma. It consists in an autosomal recessive disorder with the majority of affected individuals being homozygous for mutation in the ABCA12 gene. This condition presents a wide range of severity and symptoms. Affected neonates often do not survive beyond the first few days of life and it was usually considered as being fatal in the past, but, with the improvement of neonatal intensive care, the survival of these patients also improved. Our report is about a harlequin baby with new variants, which have not been previously described. He presents two variants in heterozygosity in the ABCA12 gene: c.3067del (p.Tyr1023Ilefs * 22) and c.318-2A>G p(.?), inherited from the father and mother. Several aspects concerning genetics, physiopathology, diagnosis, treatment and prognosis are discussed. An intensive neonatal care and early introduction of oral retinoids improve survival rates in this kind of disorder.", "title": "" }, { "docid": "01f25dcc13efd4c3a168b8acd9f0f2f7", "text": "This paper describes an approach for the problem of face pose discrimination using Support Vector Machines (SVM). Face pose discrimination means that one can label the face image as one of several known poses. Face images are drawn from the standard FERET data base. The training set consists of 150 images equally distributed among frontal, approximately 33.75 rotated left and right poses, respectively, and the test set consists of 450 images again equally distributed among the three different types of poses. SVM achieved perfect accuracy 100% discriminating between the three possible face poses on unseen test data, using either polynomials of degree 3 or Radial Basis Functions (RBFs) as kernel approximation functions.", "title": "" }, { "docid": "fdfb71f5905b2af2c01c6b4d1fe23d7e", "text": "Many believe the electric power system is undergoing a profound change driven by a number of needs. There's the need for environmental compliance and energy conservation. We need better grid reliability while dealing with an aging infrastructure. And we need improved operational effi ciencies and customer service. The changes that are happening are particularly signifi cant for the electricity distribution grid, where \"blind\" and manual operations, along with the electromechanical components, will need to be transformed into a \"smart grid.\" This transformation will be necessary to meet environmental targets, to accommodate a greater emphasis on demand response (DR), and to support plug-in hybrid electric vehicles (PHEVs) as well as distributed generation and storage capabilities. It is safe to say that these needs and changes present the power industry with the biggest challenge it has ever faced. 
On one hand, the transition to a smart grid has to be evolutionary to keep the lights on; on the other hand, the issues surrounding the smart grid are signifi cant enough to demand major changes in power systems operating philosophy.", "title": "" }, { "docid": "b250ac830e1662252069cc85128358a7", "text": "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.", "title": "" }, { "docid": "1707a7d04c479c211a2b01b946625628", "text": "Property-based Features Given a sentencerepresentation pair, for each property listed in Table 2, we compute if it holds for the representation. For each property that holds and for each n-gram in the sentence we trigger a feature. Consider the first example in Table 1. The features triggered for this example include touches-wall#two-boxes-have and touches-wall#touching-the-side computed from the property touches-wall and the tri-grams two boxes have and touching the side. We observe that the MaxEnt model learns a higher weight for features which combine similar properties of the world and the sentence, such as touches-wall#touching-the-side.", "title": "" }, { "docid": "eaf7b6b0cc18453538087cc90254dbd8", "text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.", "title": "" } ]
scidocsrr
3449683b5db379fdac9c1a5e6b76fb2c
Friends FTW! friendship and competition in halo: reach
[ { "docid": "e994243e3124e4c0849eeb2b733c2a78", "text": "This article explores the ways social interaction plays an integral role in the game EverQuest. Through our research we argue that social networks form a powerful component of the gameplay and the gaming experience, one that must be seriously considered to understand the nature of massively multiplayer online games. We discuss the discrepancy between how the game is portrayed and how it is actually played. By examining the role of social networks and interactions we seek to explore how the friendships between the players could be considered the ultimate exploit of the game.", "title": "" } ]
[ { "docid": "37342f65a722eaca7359aacbfbe61091", "text": "Video-surveillance and traffic analysis systems can be heavily improved using vision-based techniques to extract, manage and track objects in the scene. However, problems arise due to shadows. In particular, moving shadows can affect the correct localization, measurements and detection of moving objects. This work aims to present a technique for shadow detection and suppression used in a system for moving visual object detection and tracking. The major novelty of the shadow detection technique is the analysis carried out in the HSV color space to improve the accuracy in detecting shadows. This paper exploits comparison of shadow suppression using RGB and HSV color space in moving object detection and results in this paper are more encouraging using HSV colour space over RGB colour space. Keywords— Shadow detection; HSV color space; RGB color space.", "title": "" }, { "docid": "950fc4239ced87fef76ac687af3b09ac", "text": "Software developers’ activities are in general recorded in software repositories such as version control systems, bug trackers and mail archives. While abundant information is usually present in such repositories, successful information extraction is often challenged by the necessity to simultaneously analyze different repositories and to combine the information obtained. We propose to apply process mining techniques, originally developed for business process analysis, to address this challenge. However, in order for process mining to become applicable, different software repositories should be combined, and “related” software development events should be matched: e.g., mails sent about a file, modifications of the file and bug reports that can be traced back to it. The combination and matching of events has been implemented in FRASR (Framework for Analyzing Software Repositories), augmenting the process mining framework ProM. FRASR has been successfully applied in a series of case studies addressing such aspects of the development process as roles of different developers and the way bug reports are handled.", "title": "" }, { "docid": "ab9ea123c5884e8bc744fcb71855f0b5", "text": "In this paper, we consider a typical image blind denoising problem, which is to remove unknown noise from noisy images. As we all know, discriminative learning based methods, such as DnCNN, can achieve state-of-the-art denoising results, but they are not applicable to this problem due to the lack of paired training data. To tackle the barrier, we propose a novel two-step framework. First, a Generative Adversarial Network (GAN) is trained to estimate the noise distribution over the input noisy images and to generate noise samples. Second, the noise patches sampled from the first step are utilized to construct a paired training dataset, which is used, in turn, to train a deep Convolutional Neural Network (CNN) for denoising. Extensive experiments have been done to demonstrate the superiority of our approach in image blind denoising.", "title": "" }, { "docid": "65dfecb5e0f4f658a19cd87fb94ff0ae", "text": "Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most trainings are with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. 
This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improve performance. Specifically, this report shows how to examine the training validation/test loss function for subtle clues of underfitting and overfitting and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums.", "title": "" }, { "docid": "ed4178ec9be6f4f8e87a50f0bf1b9a41", "text": "PURPOSE\nTo report a case of central retinal artery occlusion (CRAO) in a patient with biopsy-verified Wegener's granulomatosis (WG) with positive C-ANCA.\n\n\nMETHODS\nA 55-year-old woman presented with a 3-day history of acute painless bilateral loss of vision; she also complained of fever and weight loss. Examination showed a CRAO in the left eye and angiographically documented choroidal ischemia in both eyes.\n\n\nRESULTS\nThe possibility of systemic vasculitis was not kept in mind until further studies were carried out; methylprednisolone pulse therapy was then started. Renal biopsy disclosed focal and segmental necrotizing vasculitis of the medium-sized arteries, supporting the diagnosis of WG, and cyclophosphamide pulse therapy was administered with gradual improvement, but there was no visual recovery.\n\n\nCONCLUSION\nCRAO as presenting manifestation of WG, in the context of retinal vasculitis, is very uncommon, but we should be aware of WG in the etiology of CRAO. This report shows the difficulty of diagnosing Wegener's granulomatosis; it requires a high index of suspicion, and we should obtain an accurate medical history and repeat serological and histopathological examinations. It emphasizes that inflammation of arteries leads to irreversible retinal infarction, and visual loss may occur.", "title": "" }, { "docid": "fd5d56ccb3a95cdac0d1aca67519b09b", "text": "The tragedy of the digital commons does not prevent the copious voluntary production of content that one witnesses in the web. We show through an analysis of a massive data set from YouTube that the productivity exhibited in crowdsourcing exhibits a strong positive dependence on attention, measured by the number of downloads. Conversely, a lack of attention leads to a decrease in the number of videos uploaded and the consequent drop in productivity, which in many cases asymptotes to no uploads whatsoever. Moreover, uploaders compare themselves to others when having low productivity and to themselves when exceeding a threshold. We are witnessing an inversion of the traditional way by which content has been generated and consumed over the centuries. From photography to news and encyclopedic knowledge, the centuries-old pattern has been one in which a relatively few people and organizations produce content and most people consume it. With the advent of the web and the ease with which one can migrate content to it, that pattern has reversed, leading to a situation whereby millions create content in the form of blogs, news, videos, music, etc. and relatively few can attend to it all.
This phenomenon, which goes under the name of crowdsourcing, is exemplified by websites such as Digg, Flicker, YouTube, and Wikipedia, where content creation without the traditional quality filters manages to produce sought out movies, news and even knowledge that rivals the best encyclopedias. That such content is valued is confirmed by the fact that access to these sites accounts for a sizable percentage of internet traffic. For example, as of June, 2007 YouTube alone comprised approximately 20% of all HTTP traffic, or nearly 10% of all traffic on the Internet [2]. What makes crowdsourcing both interesting and puzzling is the underlying dilemma facing every contributor, which is best exemplified by the well-known tragedy of the commons. In such dilemmas, a group of people attempts to provide a common good in the absence of a central authority. In the case of crowdsourcing, the common good is in the form or videos, music, or encyclopedic knowledge that can be freely accessed by anyone. Furthermore, the good has jointness of supply, which means that its consumption by others does not affect the amounts that other users can use. And since it is nearly impossible to exclude non contributors from using the common good, it is rational for individuals not to upload content and free ride on the production of others. The dilemma ensues when every individual can reason this way and free ride on the efforts of others, making everyone worse off—thus the tragedy of the digital commons [1, 3, 7, 5, 10]. And yet paradoxically, there is ample evidence that while the ratio of contributions to downloads is indeed small, the growth in content provision persists at levels that are hard to understand if analyzed from a public goods point of view. One possible explanation for this puzzling behavior, which we explore in this paper, is that those contributing to the digital commons", "title": "" }, { "docid": "12363093cb0441e0817d4c92ab88e7fb", "text": "Imperforate hymen, a condition in which the hymen has no aperture, usually occurs congenitally, secondary to failure of development of a lumen. A case of a documented simulated \"acquired\" imperforate hymen is presented in this article. The patient, a 5-year-old girl, was the victim of sexual abuse. Initial examination showed tears, scars, and distortion of the hymen, laceration of the perineal body, and loss of normal anal tone. Follow-up evaluations over the next year showed progressive healing. By 7 months after the injury, the hymen was replaced by a thick, opaque scar with no orifice. Patients with an apparent imperforate hymen require a sensitive interview and careful visual inspection of the genital and anal areas to delineate signs of injury. The finding of an apparent imperforate hymen on physical examination does not eliminate the possibility of antecedent vaginal penetration and sexual abuse.", "title": "" }, { "docid": "f86fdc743f665e5f6fe13696f4502de4", "text": "The Web is rapidly transforming from a pure document collection to the largest connected public data space. Semantic annotations of web pages make it notably easier to extract and reuse data and are increasingly used by both search engines and social media sites to provide better search experiences through rich snippets, faceted search, task completion, etc. In our work, we study the novel problem of crawling structured data embedded inside HTML pages. We describe Anthelion, the first focused crawler addressing this task. 
We propose new methods of focused crawling specifically designed for collecting data-rich pages with greater efficiency. In particular, we propose a novel combination of online learning and bandit-based explore/exploit approaches to predict data-rich web pages based on the context of the page as well as using feedback from the extraction of metadata from previously seen pages. We show that these techniques significantly outperform state-of-the-art approaches for focused crawling, measured as the ratio of relevant pages and non-relevant pages collected within a given budget.", "title": "" }, { "docid": "a701b681b5fb570cf8c0668fe691ee15", "text": "Coagulation-flocculation is a relatively simple physical-chemical technique in treatment of old and stabilized leachate which has been practiced using a variety of conventional coagulants. Polymeric forms of metal coagulants which are increasingly applied in water treatment are not well documented in leachate treatment. In this research, capability of poly-aluminum chloride (PAC) in the treatment of stabilized leachate from Pulau Burung Landfill Site (PBLS), Penang, Malaysia was studied. The removal efficiencies for chemical oxygen demand (COD), turbidity, color and total suspended solid (TSS) obtained using PAC were compared with those obtained using alum as a conventional coagulant. Central composite design (CCD) and response surface method (RSM) were applied to optimize the operating variables viz. coagulant dosage and pH. Quadratic models developed for the four responses (COD, turbidity, color and TSS) studied indicated the optimum conditions to be PAC dosage of 2g/L at pH 7.5 and alum dosage of 9.5 g/L at pH 7. The experimental data and model predictions agreed well. COD, turbidity, color and TSS removal efficiencies of 43.1, 94.0, 90.7, and 92.2% for PAC, and 62.8, 88.4, 86.4, and 90.1% for alum were demonstrated.", "title": "" }, { "docid": "0d7816bde9b27e9b82797653d3e068b1", "text": "We introduce an ultrasonic sensor system that measures artificial potential fields (APF’s) directly. The APF is derived from the traveling-times of the transmitted pulses. Advantages of the sensor are that it needs only three transducers, that its design is simple, and that it measures a quantity that can be used directly for simple navigation, such as collision avoidance.", "title": "" }, { "docid": "00e5acdfb1e388b149bc729a7af108ee", "text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages. The proposed method reached up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low. 
", "title": "" }, { "docid": "6cc7205ad19d3de8fab076a752d82284", "text": "Visual odometry and mapping methods can provide accurate navigation and comprehensive environment (obstacle) information for autonomous flights of Unmanned Aerial Vehicle (UAV) in GPS-denied cluttered environments. This work presents a new light small-scale low-cost ARM-based stereo vision pre-processing system, which not only is used as onboard sensor to continuously estimate 6-DOF UAV pose, but also as onboard assistant computer to pre-process visual information, thereby saving more computational capability for the onboard host computer of the UAV to conduct other tasks. The visual odometry is done by one plugin specifically developed for this new system with a fixed baseline (12cm). In addition, the pre-processed information from this new system is sent via a Gigabit Ethernet cable to the onboard host computer of UAV for real-time environment reconstruction and obstacle detection with an octree-based 3D occupancy grid mapping approach, i.e. OctoMap. The visual algorithm is evaluated with the stereo video datasets from EuRoC Challenge III in terms of efficiency, accuracy and robustness. Finally, the new system is mounted and tested on a real quadrotor UAV to carry out the visual odometry and mapping task.", "title": "" }, { "docid": "97f2e2ceeb4c1e2b8d8fbc8a46159730", "text": "Novel scientific knowledge is constantly produced by the scientific community. Understanding the level of novelty characterized by scientific literature is key for modeling scientific dynamics and analyzing the growth mechanisms of scientific knowledge. Metrics derived from bibliometrics and citation analysis were effectively used to characterize the novelty in scientific development. However, time is required before we can observe links between documents such as citation links or patterns derived from the links, which makes these techniques more effective for retrospective analysis than predictive analysis. In this study, we present a new approach to measuring the novelty of a research topic in a scientific community over a specific period by tracking semantic changes of the terms and characterizing the research topic in their usage context. The semantic changes are derived from the text data of scientific literature by temporal embedding learning techniques. We validated the effects of the proposed novelty metric on predicting the future growth of scientific publications and investigated the relations between novelty and growth by panel data analysis applied in a large-scale publication dataset (MEDLINE/PubMed). Key findings based on the statistical investigation indicate that the novelty metric has significant predictive effects on the growth of scientific literature and the predictive effects may last for more than ten years. We demonstrated the effectiveness and practical implications of the novelty metric in three case studies.", "title": "" }, { "docid": "fe383fbca6d67d968807fb3b23489ad1", "text": "In this project, we attempt to apply machine-learning algorithms to predict Bitcoin price. For the first phase of our investigation, we aimed to understand and better identify daily trends in the Bitcoin market while gaining insight into optimal features surrounding Bitcoin price.
Our data set consists of over 25 features relating to the Bitcoin price and payment network over the course of five years, recorded daily. Using this information we were able to predict the sign of the daily price change with an accuracy of 98.7%. For the second phase of our investigation, we focused on the Bitcoin price data alone and leveraged data at 10-minute and 10-second interval timepoints, as we saw an opportunity to evaluate price predictions at varying levels of granularity and noisiness. By predicting the sign of the future change in price, we are modeling the price prediction problem as a binomial classification task, experimenting with a custom algorithm that leverages both random forests and generalized linear models. These results had 50-55% accuracy in predicting the sign of future price change using 10 minute time intervals.", "title": "" }, { "docid": "b9a214ad1b6a97eccf6c14d3d778b2ff", "text": "In this paper a morphological tagging approach for document image invoice analysis is described. Tokens close by their morphology and confirmed in their location within different similar contexts make apparent some parts of speech representative of the structure elements. This bottom up approach avoids the use of an priori knowledge provided that there are redundant and frequent contexts in the text. The approach is applied on the invoice body text roughly recognized by OCR and automatically segmented. The method makes possible the detection of the invoice articles and their different fields. The regularity of the article composition and its redundancy in the invoice is a good help for its structure. The recognition rate of 276 invoices and 1704 articles, is over than 91.02% for articles and 92.56% for fields.", "title": "" }, { "docid": "783bc3d13f2ff4178b59df08076db67e", "text": "Gripping and holding of objects are key tasks for robotic manipulators. The development of universal grippers able to pick up unfamiliar objects of widely varying shape and surface properties remains, however, challenging. Most current designs are based on the multifingered hand, but this approach introduces hardware and software complexities. These include large numbers of controllable joints, the need for force sensing if objects are to be handled securely without crushing them, and the computational overhead to decide how much stress each finger should apply and where. Here we demonstrate a completely different approach to a universal gripper. Individual fingers are replaced by a single mass of granular material that, when pressed onto a target object, flows around it and conforms to its shape. Upon application of a vacuum the granular material contracts and hardens quickly to pinch and hold the object without requiring sensory feedback. We find that volume changes of less than 0.5% suffice to grip objects reliably and hold them with forces exceeding many times their weight. We show that the operating principle is the ability of granular materials to transition between an unjammed, deformable state and a jammed state with solid-like rigidity. We delineate three separate mechanisms, friction, suction, and interlocking, that contribute to the gripping force. Using a simple model we relate each of them to the mechanical strength of the jammed state. 
This advance opens up new possibilities for the design of simple, yet highly adaptive systems that excel at fast gripping of complex objects.", "title": "" }, { "docid": "bea6b12875b90dea9489d85002abb4ec", "text": "This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips for accessing FPGA configuration. The backdoor was found amongst additional JTAG functionality and exists on the silicon itself; it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), our pioneered technique, we were able to extract the secret key to activate the backdoor, as well as other security keys such as the AES and the Passkey. This way an attacker can extract all the configuration data from the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property (IP) theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact they can be easily compromised or will have to be physically replaced after a redesign of the silicon itself.", "title": "" }, { "docid": "7080c996c4ff59ec50069187c93d7106", "text": "Magnesium and magnesium based alloys are lightweight metallic materials that are extremely biocompatible and have similar mechanical properties to natural bone. These materials have the potential to function as an osteoconductive and biodegradable substitute in load bearing applications in the field of hard tissue engineering. However, the effects of corrosion and degradation in the physiological environment of the body have prevented their widespread application to date. The aim of this review is to examine the properties, chemical stability, degradation in situ and methods of improving the corrosion resistance of magnesium and its alloys for potential application in the orthopaedic field. To be an effective implant, the surface and sub-surface properties of the material need to be carefully selected so that the degradation kinetics of the implant can be efficiently controlled. Several surface modification techniques are presented and their effectiveness in reducing the corrosion rate and methods of controlling the degradation period are discussed. Ideally, balancing the gradual loss of material and mechanical strength during degradation, with the increasing strength and stability of the newly forming bone tissue is the ultimate goal. If this goal can be achieved, then orthopaedic implants manufactured from magnesium based alloys have the potential to deliver successful clinical outcomes without the need for revision surgery.", "title": "" }, { "docid": "98ecbb9ca778967b81f27dcf8e78f6c3", "text": "Influence Maximization is the problem of finding a certain amount of people in a social network such that their aggregation influence through the network is maximized. In the past this problem has been widely studied under a number of different models.
In 2003, Kempe \emph{et al.} gave a $(1-{1 \over e})$-approximation algorithm for the \emph{linear threshold model} and the \emph{independent cascade model}, which are the two main models in social network analysis. In addition, Chen \emph{et al.} proved that the problem of exactly computing the influence given a seed set in the two models is $\#$P-hard. Both the \emph{linear threshold model} and the \emph{independent cascade model} are based on randomized propagation. However, such information might be obtained by surveys or data mining techniques, which makes a great difference in the properties of the problem. In this paper, we study the Influence Maximization problem in the \emph{deterministic linear threshold model}. As a contrast, we show that in the \emph{deterministic linear threshold model}, there is no polynomial time $n^{1-\epsilon}$-approximation unless P=NP, even in the simple case that one person needs at most two active neighbors to become active. This inapproximability result is derived with self-contained proofs without using the PCP theorem. In the case that a person can be activated when one of its neighbors becomes active, there is a polynomial time ${e\over e-1}$-approximation, and we prove it is the best possible approximation under a reasonable assumption in complexity theory, $NP \not\subset DTIME(n^{\log\log n})$. We also show that the exact computation of the final influence given a seed set can be solved in linear time in the \emph{deterministic linear threshold model}. The Least Seed Set problem, which aims to find a seed set with the least number of people to activate all the required people in a given social network, is discussed. Using an analysis framework based on Set Cover, we show an $O(\log n)$-approximation in the case that a person becomes active when one of its neighbors is activated.", "title": "" }, { "docid": "41265cb36df924d32a029f0183c13f8a", "text": "Engineering employers say publicly at national level that they need more engineering graduates, with surveys by, for example, the Engineering Employers Federation, proving there is demand. This project investigated the apparent contradiction between this high demand for engineering graduates and an unemployment rate of about 13% amongst UK engineering graduates (HESA data, July 2010). Employability has received huge attention but there remains a distinct issue of why some engineers do not get graduate level work within a short time of graduation. This National HE STEM Programme project interviewed a selection of unemployed graduates, identified from the Destinations of Leavers from HE (DLHE) survey six months after graduation, in order to investigate their experiences and gain an understanding of factors impeding their entry into graduate engineering employment. Questions ranged from whether the graduate decided to put off looking for a graduate level job until after graduation (and therefore ‘missed the boat’), through to academic and personal skills attributes, motivation and regional location. The data was analysed in the context both of previous research, and of data from interviews with engineering employers and employed graduates.
Emerging from this study is the finding that there is no single reason for unemployment amongst engineering graduates, with key findings centring on the importance of: students’ early engagement with career planning and the final year application process; relevant work experience; the distinction between the MEng and the BEng in employers’ recruitment criteria; and the ability of graduates to articulate their skills and competences effectively.", "title": "" } ]
scidocsrr
f3b6400eabd04e985719980fe78b86b5
Accelerating Scientific Data Exploration via Visual Query Systems
[ { "docid": "d95cd76008dd65d5d7f00c82bad013d3", "text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.", "title": "" } ]
[ { "docid": "0e27a00b36626b0454b11f4f8b1fb522", "text": "Although active islanding detection techniques have smaller non-detection zones than passive techniques, active methods could degrade the system power quality and are not as simple and easy to implement as passive methods. The islanding detection strategy, proposed in this paper, combines the advantages of both active and passive islanding detection methods. The distributed generation (DG) interface was designed so that the DG maintains stable operation while being grid connected and loses its stability once islanded. Thus, the over/under voltage and variation in the reactive power method be sufficient to detect islanding. The main advantage of the proposed technique is that it relies on a simple approach for islanding detection and has negligible non-detection zone. The proposed system was simulated on MATLAB/SIMULINK and simulation results are presented to highlight the effectiveness of the proposed technique.", "title": "" }, { "docid": "f1559798e0338074f28ca4aaf953b6a1", "text": "Example classifications (test set) [And09] Andriluka et al. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009 [Eic09] Eichner et al. Articulated Human Pose Estimation and Search in (Almost) Unconstrained Still Images. In IJCV, 2012 [Sap10] Sapp et al. Cascaded models for articulated pose estimation. In ECCV, 2010 [Yan11] Yang and Ramanan. Articulated pose estimation with flexible mixturesof-parts. In CVPR, 2011. References Human Pose Estimation (HPE) Algorithm Input", "title": "" }, { "docid": "a2a3d94a44d14ea01fd9a66a645af1c0", "text": "The age of big data opens new opportunities in various fields. While the availability of a big dataset can be helpful in some scenarios, it introduces new challenges in digital forensics investigations. The existing tools and infrastructures cannot meet the expected response time when we investigate on a big dataset. Forensics investigators will face challenges while identifying necessary pieces of evidence from a big dataset, and collecting and analyzing those evidence. In this article, we propose the first working definition of big data forensics and systematically analyze the big data forensics domain to explore the challenges and issues in this forensics paradigm. We propose a conceptual model for supporting big data forensics investigations and present several use cases, where big data forensics can provide new insights to determine facts about criminal incidents.", "title": "" }, { "docid": "b0892ff39abac8a35c88a3b6aa6a9045", "text": "Video-based fire detection is currently a fairly common application with the growth in the number of installed surveillance video systems. Moreover, the related processing units are becoming more powerful. Smoke is an early sign of most fires; therefore, selecting an appropriate smoke-detection method is essential. However, detecting smoke without creating a false alarm remains a challenging problem for open or large spaces with the disturbances of common moving objects, such as pedestrians and vehicles. This study proposes a novel video-based smoke-detection method that can be incorporated into a surveillance system to provide early alerts. In this study, the process of extracting smoke features from candidate regions was accomplished by analyzing the spatial and temporal characteristics of video sequences for three important features: edge blurring, gradual energy changes, and gradual chromatic configuration changes. 
The proposed spatial-temporal analysis technique improves the feature extraction of gradual energy changes. In order to make the video smoke-detection results more reliable, these three features were combined using a support vector machine (SVM) technique and a temporal-based alarm decision unit (ADU) was also introduced. The effectiveness of the proposed algorithm was evaluated on a PC with an Intel® Core2 Duo CPU (2.2 GHz) and 2 GB RAM. The average processing time was 32.27 ms per frame; i.e., the proposed algorithm can process 30.98 frames per second. Experimental results showed that the proposed system can detect smoke effectively with a low false-alarm rate and a short reaction time in many real-world scenarios.", "title": "" }, { "docid": "09e4658387efcf28d376c923351706d5", "text": "This study compares the EPID dosimetry algorithms of two commercial systems for pretreatment QA, and analyzes dosimetric measurements made with each system alongside the results obtained with a standard diode array. 126 IMRT fields are examined with both EPID dosimetry systems (EPIDose by Sun Nuclear Corporation, Melbourne FL, and Portal Dosimetry by Varian Medical Systems, Palo Alto CA) and the diode array, MapCHECK (also by Sun Nuclear Corporation). Twenty-six VMAT arcs of varying modulation complexity are examined with the EPIDose and MapCHECK systems. Optimization and commissioning testing of the EPIDose physics model is detailed. Each EPID IMRT QA system is tested for sensitivity to critical TPS beam model errors. Absolute dose gamma evaluation (3%, 3 mm, 10% threshold, global normalization to the maximum measured dose) yields similar results (within 1%-2%) for all three dosimetry modalities, except in the case of off-axis breast tangents. For these off-axis fields, the Portal Dosimetry system does not adequately model EPID response, though a previously-published correction algorithm improves performance. Both MapCHECK and EPIDose are found to yield good results for VMAT QA, though limitations are discussed. Both the Portal Dosimetry and EPIDose algorithms, though distinctly different, yield similar results for the majority of clinical IMRT cases, in close agreement with a standard diode array. Portal dose image prediction may overlook errors in beam modeling beyond the calculation of the actual fluence, while MapCHECK and EPIDose include verification of the dose calculation algorithm, albeit in simplified phantom conditions (and with limited data density in the case of the MapCHECK detector). Unlike the commercial Portal Dosimetry package, the EPIDose algorithm (when sufficiently optimized) allows accurate analysis of EPID response for off-axis, asymmetric fields, and for orthogonal VMAT QA. Other forms of QA are necessary to supplement the limitations of the Portal Vision Dosimetry system.", "title": "" }, { "docid": "2fbd1ba2f656e3c32839032754992974", "text": "We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data.
We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate (for a uniform file popularity) and the minimum peak rate required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination.", "title": "" }, { "docid": "e519d705cd52b4eb24e4e936b849b3ce", "text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how well these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.", "title": "" }, { "docid": "6e9ba961906276190f56831f702d433c", "text": "Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide images of extreme digital resolution (100,000 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge.
We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.", "title": "" }, { "docid": "91ec5e5551054b287d1311027b3181d3", "text": "Position and orientation profiles are two principal descriptions of shape in space. We describe how a structured light system, coupled with the illumination of a pseudorandom pattern and a suitable choice of feature points, can allow not only the position but also the orientation of individual surface elements to be determined independently. Unlike traditional designs which use the centroids of the illuminated pattern elements as the feature points, the proposed design uses the grid points between the pattern elements instead. The grid points have the essences that their positions in the image data are inert to the effect of perspective distortion, their individual extractions are not directly dependent on one another, and the grid points possess strong symmetry that can be exploited for their precise localization in the image data. Most importantly, the grid lines of the illuminated pattern that form the grid points can aid in determining surface normals. In this paper, we describe how each of the grid points can be labeled with a unique color code, what symmetry they possess and how the symmetry can be exploited for their precise localization at subpixel accuracy in the image data, and how 3D orientation in addition to 3D position can be determined at each of them. Both the position and orientation profiles can be determined with only a single pattern illumination and a single image capture.", "title": "" }, { "docid": "b6da9901abb01572b631085f97fdd1d4", "text": "Protection against high voltage-standing-wave-ratios (VSWR) is of great importance in many power amplifier applications. Despite excellent thermal and voltage breakdown properties even gallium nitride devices may need such measures. This work focuses on the timing aspect when using barium-strontium-titanate (BST) varactors to limit power dissipation and gate current. A power amplifier was designed and fabricated, implementing a varactor and a GaN-based voltage switch as varactor modulator for VSWR protection. The response time until the protection is effective was measured by switching the voltages at varactor, gate and drain of the transistor, respectively. It was found that it takes a minimum of 50 μs for the power amplifier to reach a safe condition. Pure gate pinch-off or drain voltage reduction solutions were slower and bias-network dependent. For a thick-film BST MIM varactor, optimized for speed and power, a switching time of 160 ns was achieved.", "title": "" }, { "docid": "1ba1b3bb1ef0fb0b6b10b8f4dcaa6716", "text": "Lichen sclerosus et atrophicus (LSA) is a chronic inflammatory scarring disease with a predilection for the anogenital area; however, 15%-20% of LSA cases are extragenital. The folliculocentric variant is rarely reported and less well understood. The authors report a rare case of extragenital, folliculocentric LSA in a 10-year-old girl. The patient presented to the dermatology clinic for evaluation of an asymptomatic eruption of the arms and legs, with no vaginal or vulvar involvement. Physical examination revealed the presence of numerous 2-4 mm, mostly perifollicular, hypopigmented, slightly atrophic papules and plaques. Many of the lesions had a central keratotic plug. 
Cutaneous histopathological examination showed features of LSA. Based on clinical and histological findings, folliculocentric extragenital LSA was diagnosed.", "title": "" }, { "docid": "e4f648d12495a2d7615fe13c84f35bbe", "text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-the-art performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.", "title": "" }, { "docid": "987024b9cca47797813f27da08d9a7c6", "text": "Image segmentation plays a crucial role in many medical imaging applications by automating or facilitating the delineation of anatomical structures and other regions of interest. We present herein a critical appraisal of the current status of semi-automated and automated methods for the segmentation of anatomical medical images. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described along with the difficulties encountered in each modality. We conclude with a discussion on the future of image segmentation methods in biomedical research.", "title": "" }, { "docid": "2a45084e4260aa8ab0d6589b60f97372", "text": "While the focus of electronic commerce has often been on “dot coms” or pure Internet based companies, a major transformation is under way in many traditional “bricks-and-mortar” organizations. The latter are investing heavily in Internet based technologies and applications in order to attain new heights of efficiency, productivity and business value. While anecdotes in the business press suggest that some firms have achieved unprecedented performance gains by leveraging the Internet, there is no systematic evidence in the Information Technology (IT) productivity or business value literature regarding the payoffs from Internet enabled business initiatives. We propose an exploratory model of electronic business value involving IT applications, processes, business partner readiness, and operational and financial performance measures. This model is rooted in IT business value and productivity research, and is empirically tested with data from over 1000 firms in manufacturing, retail, distribution and wholesale sectors. We find that electronic business initiatives involving customer-facing technologies lead to operational excellence in customer interactions and improved financial performance.
Further, supplier related operational excellence is a key determinant of customer excellence, suggesting the related nature of customer and supplier related performance. Customer and supplier readiness to engage in online business have strong positive impacts on customer and supplier related operational excellence respectively, indicating the need for all entities in a value chain to simultaneously adopt Internet applications and business practices. To the best of our knowledge, this is the first study to address the business value of Internet initiatives.", "title": "" }, { "docid": "bb9d60abf3b8d6e88d5079366b3a0f91", "text": "Dynamic network analysis (DNA) varies from traditional social network analysis in that it can handle large dynamic multi-mode, multi-link networks with varying levels of uncertainty. DNA, like quantum mechanics, would be a theory in which relations are probabilistic, the measurement of a node changes its properties, movement in one part of the system propagates through the system, and so on. However, unlike quantum mechanics, the nodes in the DNA, the atoms, can learn. An approach to DNA is described that builds DNA theory through the combined use of multi-agent modeling, machine learning, and a meta-matrix approach to network representation. A set of candidate metrics for describing the DNA is defined. Then, a model built using this approach is presented. Results concerning the evolution and destabilization of networks are described.", "title": "" }, { "docid": "8cea62bdb8b4ce82a8b2d931ef20b0f2", "text": "This paper addresses the Volume dimension of Big Data. It presents a preliminary work on finding segments of retailers from a large amount of Electronic Funds Transfer at Point Of Sale (EFTPOS) transaction data. To the best of our knowledge, this is the first time a work on the Big EFTPOS Data problem has been reported. A data reduction technique using the RFM (Recency, Frequency, Monetary) analysis as applied to a large data set is presented. Ways to optimise clustering techniques used to segment the big data set through data partitioning and parallelization are explained. Preliminary analysis on the segments of the retailers output from the clustering experiments demonstrates that further drilling down into the retailer segments to find more insights into their business behaviours is warranted.", "title": "" }, { "docid": "0e68fa08edfc2dcb52585b13d0117bf1", "text": "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying.
We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE’s code is available on GitHub at https://github.com/Mehran-k/SimplE.", "title": "" }, { "docid": "51fc49d6196702f87e7dae215fa93108", "text": "Automatic classification of cancer lesions in tissues observed using gastroenterology imaging is a non-trivial pattern recognition task involving filtering, segmentation, feature extraction and classification. In this paper we measure the impact of a variety of segmentation algorithms (mean shift, normalized cuts, level-sets) on the automatic classification performance of gastric tissue into three classes: cancerous, pre-cancerous and normal. Classification uses a combination of color (hue-saturation histograms) and texture (local binary patterns) features, applied to two distinct imaging modalities: chromoendoscopy and narrow-band imaging. Results show that mean-shift obtains an interesting performance for both scenarios producing low classification degradations (6%), full image classification is highly inaccurate reinforcing the importance of segmentation research for Gastroenterology, and confirm that Patch Index is an interesting measure of the classification potential of small to medium segmented regions.", "title": "" }, { "docid": "716f22e6f26cd43f41a57395d85974bd", "text": "One characteristic attribute of mobile platforms equipped with a set of independent steering wheels is their omnidirectionality and the ability to realize complex translational and rotational trajectories. An accurate coordination of steering angle and spinning rate of each wheel is necessary for a consistent motion. Since the orientations of the wheels must align to the Instantaneous Center of Rotation (ICR), the current location and velocity of this specific point is essential for describing the state of the platform. However, singular configurations of the controlled system exist depending on the ICR, leading to unfeasible control inputs, i.e., infinite steering rates. Within this work we address and analyze this problem in general. Furthermore, we propose a solution for mobile platforms with variable footprint. An existing controller based on dynamic feedback linearization is augmented by a new potential field-based algorithm for singularity avoidance which uses the tunable leg lengths as an additional control input to minimize deviations from the nominal motion trajectory. 
Simulations and experimental results on the mobile platform of DLR's humanoid manipulator Justin support our approach.", "title": "" }, { "docid": "810dd7b98f55ac6ccd4040f1e6c8f10d", "text": "This report describes simple mechanisms that allow autonomous software agents to engage in bargaining behaviors in market-based environments. Groups of agents with such mechanisms could be used in applications including market-based control, internet commerce and economic modelling. After an introductory discussion of the rationale for this work and a brief overview of key concepts from economics, work in market-based control is reviewed to highlight the need for bargaining agents. Following this, the early experimental economics work of Smith and the recent results of Gode and Sunder are described. Gode and Sunder's work, using zero-intelligence (zi) traders that act randomly within a structured market, appears to imply that convergence to the theoretical equilibrium price is determined more by market structure than by the intelligence of the traders in that market; if this is true, developing mechanisms for bargaining agents is of very limited relevance. However, it is demonstrated here that the average transaction prices of zi traders can vary significantly from the theoretical equilibrium level when supply and demand are asymmetric, and that the degree of difference from equilibrium is predictable from a priori statistical analysis. In this sense it is shown here that Gode and Sunder's results are artefacts of their experimental regime. Following this, zero-intelligence-plus (zip) traders are introduced: like zi traders, these simple agents make stochastic bids. Unlike zi traders, they employ an elementary form of machine learning. Groups of zip traders interacting in experimental markets similar to those used by Smith and Gode and Sunder are demonstrated, and it is shown that the performance of zip traders is significantly closer to the human data than is the performance of Gode and Sunder's zi traders. This document reports on work done during February to September while the author held a Visiting Academic post at Hewlett Packard Laboratories, Bristol, Filton Road, Bristol BS QZ, U.K.", "title": "" } ]
scidocsrr
662f9740a651f471831e93654284877c
Illumination Invariant Imaging: Applications in Robust Vision-based Localisation, Mapping and Classification for Autonomous Vehicles
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" }, { "docid": "36e72fe58858b4caf4860a3bba5fced4", "text": "When operating over extended periods of time, an autonomous system will inevitably be faced with severe changes in the appearance of its environment. Coping with such changes is more and more in the focus of current robotics research. In this paper, we foster the development of robust place recognition algorithms in changing environments by describing a new dataset that was recorded during a 728 km long journey in spring, summer, fall, and winter. Approximately 40 hours of full-HD video cover extreme seasonal changes over almost 3000 km in both natural and man-made environments. Furthermore, accurate ground truth information are provided. To our knowledge, this is by far the largest SLAM dataset available at the moment. In addition, we introduce an open source Matlab implementation of the recently published SeqSLAM algorithm and make it available to the community. We benchmark SeqSLAM using the novel dataset and analyse the influence of important parameters and algorithmic steps.", "title": "" }, { "docid": "61a9bc06d96eb213ed5142bfa47920b9", "text": "This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.", "title": "" } ]
[ { "docid": "7d0559e06a360acc96153b38d8c01501", "text": "Wearable health tech provides doctors with the ability to remotely supervise their patients' wellness. It also makes it much easier to authorize someone else to take appropriate actions to ensure the person's wellness than ever before. Information Technology may soon change the way medicine is practiced, improving the performance, while reducing the price of healthcare. We analyzed the secrecy demands of wearable devices, including Smartphone, smart watch and their computing techniques, that can soon change the way healthcare is provided. However, before this is adopted in practice, all devices must be equipped with sufficient privacy capabilities related to healthcare service. In this paper, we formulated a new improved conceptual framework for wearable healthcare systems. This framework consists of ten principles and nine checklists, capable of providing complete privacy protection package to wearable device owners. We constructed this framework based on the analysis of existing mobile technology, the results of which are combined with the existing security standards. The approach also incorporates the market share percentage level of every app and its respective OS. This framework is evaluated based on the stringent CIA and HIPAA principles for information security. This evaluation is followed by testing the capability to revoke rights of subjects to access objects and ability to determine the set of available permissions for a particular subject for all models Finally, as the last step, we examine the complexity of the required initial setup.", "title": "" }, { "docid": "2923d1776422a1f44395f169f0d61995", "text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.", "title": "" }, { "docid": "a1b7f477c339f30587a2f767327b4b41", "text": "Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisciplinary nature of the game development processes that combine sound, art, control systems, artificial intelligence (AI), and human factors, makes the software game development practice different from traditional software development. However, the underline software engineering techniques help game development to achieve maintainability, flexibility, lower effort and cost, and better design. 
The purpose of this study is to assess the state-of-the-art research on the game development software engineering process and highlight areas that need further consideration by researchers. In the study, we used a systematic literature review methodology based on well-known digital libraries. The largest number of studies has been reported in the production phase of the game development software engineering process life cycle, followed by the pre-production phase. By contrast, the post-production phase has received much less research activity than the pre-production and production phases. The results of this study suggest that the game development software engineering process has many aspects that need further attention from researchers; that especially includes the post-production phase.", "title": "" }, { "docid": "f717225fa7518383e0db362e673b9af4", "text": "The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by the user in the form of access logs, cookies, and user sessions data. Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (the process of discovering knowledge from the interactions generated by users in the form of access logs, browser logs, proxy-server logs, user session data, and cookies). Accurate web log mining results and efficient online navigational pattern prediction are undeniably crucial for tuning up websites and consequently helping in visitors’ retention. Like any other data mining task, web log mining starts with data cleaning and preparation, and it ends up discovering some hidden knowledge which cannot be extracted using conventional methods. After applying web mining on web sessions we get navigation patterns which are important for web users such that appropriate actions can be adopted. Due to the huge amount of data on the web, the discovery of patterns and their analysis for further improvement of websites becomes a real-time necessity. The main focus of this paper is the use of a hybrid prediction engine to classify users on the basis of discovered patterns from web logs. Our proposed framework aims to overcome the problems that arise from using any single algorithm; we give results based on a comparison of two different algorithms, the Longest Common Sequence (LCS) algorithm and the Frequent Pattern (Growth) algorithm. Keywords— Web Usage Mining, Navigation Pattern, Frequent Pattern (Growth) Algorithm.", "title": "" }, { "docid": "bde4436370b1d5e1423d1b9c710a47ad", "text": "This paper provides a review of the literature addressing sensorless operation methods of PM brushless machines. The methods explained represent the state of the art in open- and closed-loop control strategies. The closed loop review includes those methods based on voltage and current measurements, those methods based on back emf measurements, and those methods based on novel techniques not included in the previous categories. The paper concludes with a comparison table including all main features for all control strategies.", "title": "" }, { "docid": "87b23719131fc8ab0bd60949be1595e8", "text": "To understand how implicit and explicit biofeedback work in games, we developed a first-person shooter (FPS) game to experiment with different biofeedback techniques.
While this area has seen plenty of discussion, there is little rigorous experimentation addressing how biofeedback can enhance human-computer interaction. In our two-part study (N=36), subjects first played eight different game stages with two implicit biofeedback conditions, with two simulation-based comparison and repetition rounds, then repeated the two biofeedback stages when given explicit information on the biofeedback. The biofeedback conditions were respiration and skin-conductance (EDA) adaptations. Adaptation targets were four balanced player avatar attributes. We collected data with psychophysiological measures (electromyography, respiration, and EDA), a game experience questionnaire, and game-play measures. According to our experiment, implicit biofeedback does not produce significant effects in player experience in an FPS game. In the explicit biofeedback conditions, players were more immersed and positively affected, and they were able to manipulate the game play with the biosignal interface. We recommend exploring the possibilities of using explicit biofeedback interaction in commercial games.", "title": "" }, { "docid": "197ad51ef4b33978903a2ece4a64c350", "text": "It has been suggested that Brain-Computer Interfaces (BCI) may one day be suitable for controlling a neuroprosthesis. For closed-loop operation of BCI, a tactile feedback channel that is compatible with neuroprosthetic applications is desired. Operation of an EEG-based BCI using only vibrotactile feedback, a commonly used method to convey haptic senses of contact and pressure, is demonstrated with a high level of accuracy. A Mu-rhythm based BCI using a motor imagery paradigm was used to control the position of a virtual cursor. The cursor position was shown visually as well as transmitted haptically by modulating the intensity of a vibrotactile stimulus to the upper limb. A total of six subjects operated the BCI in a two-stage targeting task, receiving only vibrotactile biofeedback of performance. The location of the vibration was also systematically varied between the left and right arms to investigate location-dependent effects on performance. Subjects are able to control the BCI using only vibrotactile feedback with an average accuracy of 56% and as high as 72%. These accuracies are significantly higher than the 15% predicted by random chance if the subject had no voluntary control of their Mu-rhythm. The results of this study demonstrate that vibrotactile feedback is an effective biofeedback modality to operate a BCI using motor imagery. In addition, the study shows that placement of the vibrotactile stimulation on the biceps ipsilateral or contralateral to the motor imagery introduces a significant bias in the BCI accuracy. This bias is consistent with a drop in performance generated by stimulation of the contralateral limb. Users demonstrated the capability to overcome this bias with training.", "title": "" }, { "docid": "7182dfe75bc09df526da51cd5c8c8d20", "text": "Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bidirectional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer’s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths.
We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.", "title": "" }, { "docid": "803a5dbedf309cec97d130438e687002", "text": "Affective computing is a new trend whose main goal is exploring human emotion. Human emotion is a key behavioral clue, and hence it should be included within the model when an intelligent system aims to simulate or forecast human responses. This research utilizes the decision tree, a data mining model, to classify emotion. This research integrates Thayer's emotion model and color theory into the C4.5 decision tree model for an innovative emotion-detection system. This paper uses 320 data samples in four emotion groups to train and build the decision tree and to verify the accuracy of the system. The result reveals that the C4.5 decision tree model can effectively classify emotion from the color feedback given by humans. In further research, color will not be the only human behavioral clue considered among the factors of human interaction.", "title": "" }, { "docid": "8d30afbccfa76492b765f69d34cd6634", "text": "Commonsense knowledge is vital to many natural language processing tasks. In this paper, we present a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. Given a user post, the model retrieves relevant knowledge graphs from a knowledge base and then encodes the graphs with a static graph attention mechanism, which augments the semantic information of the post and thus supports better understanding of the post. Then, during word generation, the model attentively reads the retrieved knowledge graphs and the knowledge triples within each graph to facilitate better generation through a dynamic graph attention mechanism. This is the first attempt that uses large-scale commonsense knowledge in conversation generation. Furthermore, unlike existing models that use knowledge triples (entities) separately and independently, our model treats each knowledge graph as a whole, which encodes more structured, connected semantic information in the graphs. Experiments show that the proposed model can generate more appropriate and informative responses than state-of-the-art baselines.", "title": "" }, { "docid": "692207fdd7e27a04924000648f8b1bbf", "text": "Many animals, on air, water, or land, navigate in three-dimensional (3D) environments, yet it remains unclear how brain circuits encode the animal's 3D position. We recorded single neurons in freely flying bats, using a wireless neural-telemetry system, and studied how hippocampal place cells encode 3D volumetric space during flight. Individual place cells were active in confined 3D volumes, and in >90% of the neurons, all three axes were encoded with similar resolution.
The 3D place fields from different neurons spanned different locations and collectively represented uniformly the available space in the room. Theta rhythmicity was absent in the firing patterns of 3D place cells. These results suggest that the bat hippocampus represents 3D volumetric space by a uniform and nearly isotropic rate code.", "title": "" }, { "docid": "3bbb7d9e7ec90a4d9ab28dad1727fe70", "text": "Space-frequency (SF) codes that exploit both spatial and frequency diversity can be designed using orthogonal frequency division multiplexing (OFDM). However, OFDM is sensitive to frequency offset (FO), which generates intercarrier interference (ICI) among subcarriers. We investigate the pair-wise error probability (PEP) performance of SF codes over quasistatic, frequency selective Rayleigh fading channels with FO. We prove that the conventional SF code design criteria remain valid. The negligible performance loss for small FOs (less than 1%), however, increases with FO and with signal to noise ratio (SNR). While diversity can be used to mitigate ICI, as FO increases, the PEP does not rapidly decay with SNR. Therefore, we propose a new class of SF codes called ICI self-cancellation SF (ISC-SF) codes to combat ICI effectively even with high FO (10%). ISC-SF codes are constructed from existing full diversity space-time codes. Importantly, our code design provide a satisfactory tradeoff among error correction ability, ICI reduction and spectral efficiency. Furthermore, we demonstrate that ISC-SF codes can also mitigate the ICI caused by phase noise and time varying channels. Simulation results affirm the theoretical analysis.", "title": "" }, { "docid": "6ad90319d07abce021eda6f3a1d3886e", "text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.", "title": "" }, { "docid": "a898f3e513b2c738c476cfb9a519d4dd", "text": "In addition to training our policy on the goals that were generated in the current iteration, we also save a list (“regularized replay buffer”) of goals that were generated during previous iterations (update replay). These goals are also used to train our policy, so that our policy does not forget how to achieve goals that it has previously learned. When we generate goals for our policy to train on, we sample two thirds of the goals from the Goal GAN and we sample the one third of the goals uniformly from the replay buffer. 
To prevent the replay buffer from concentrating in a small portion of goal space, we only insert new goals that are further away than from the goals already in the buffer, where we chose the goal-space metric and to be the same as the ones introduced in Section 3.1.", "title": "" }, { "docid": "63d9f909fe0d5d614fd13b8a6676fab3", "text": "Awareness of other vehicle's intention may help human drivers or autonomous vehicles judge the risk and avoid traffic accidents. This paper proposed an approach to predicting driver's intentions using Hidden Markov Model (HMM) which is able to access the control and the state of the vehicle. The driver performs maneuvers including stop/non-stop, change lane left/right and turn left/right in a simulator in both highway and urban environments. Moreover, the structure of the road (curved road) is also taken into account for classification. Experiments were conducted with different input sets (steering wheel data with and without vehicle state data) to compare the system performance.", "title": "" }, { "docid": "0502b30d45e6f51a7eb0eeec1f0af2e9", "text": "Identification and extraction of singing voice from within musical mixtures is a key challenge in source separation and machine audition. Recently, deep neural networks (DNN) have been used to estimate 'ideal' binary masks for carefully controlled cocktail party speech separation problems. However, it is not yet known whether these methods are capable of generalizing to the discrimination of voice and non-voice in the context of musical mixtures. Here, we trained a convolutional DNN (of around a billion parameters) to provide probabilistic estimates of the ideal binary mask for separation of vocal sounds from real-world musical mixtures. We contrast our DNN results with more traditional linear methods. Our approach may be useful for automatic removal of vocal sounds from musical mixtures for 'karaoke' type applications.", "title": "" }, { "docid": "cf020ec1d5fbaa42d4699b16d27434d0", "text": "Direct methods for restoration of images blurred by motion are analyzed and compared. The term direct means that the considered methods are performed in a one-step fashion without any iterative technique. The blurring point-spread function is assumed to be unknown, and therefore the image restoration process is called blind deconvolution. What is believed to be a new direct method, here called the whitening method, was recently developed. This method and other existing direct methods such as the homomorphic and the cepstral techniques are studied and compared for a variety of motion types. Various criteria such as quality of restoration, sensitivity to noise, and computation requirements are considered. It appears that the recently developed method shows some improvements over other older methods. The research presented here clarifies the differences among the direct methods and offers an experimental basis for choosing which blind deconvolution method to use. In addition, some improvements on the methods are suggested.", "title": "" }, { "docid": "950b2ce943f125a3b8e952f32fd45715", "text": "Domain adaptation plays an important role for speech recognition models, in particular, for domains that have low resources. We propose a novel generative model based on cycle-consistent generative adversarial network (CycleGAN) for unsupervised non-parallel speech domain adaptation. The proposed model employs multiple independent discriminators on the power spectrogram, each in charge of different frequency bands.
As a result we have 1) better discriminators that focus on fine-grained details of the frequency features, and 2) a generator that is capable of generating more realistic domain-adapted spectrograms. We demonstrate the effectiveness of our method on speech recognition with gender adaptation, where the model only has access to supervised data from one gender during training, but is evaluated on the other at test time. Our model is able to achieve an average of 7.41% on phoneme error rate, and 11.10% word error rate relative performance improvement as compared to the baseline, on TIMIT and WSJ dataset, respectively. Qualitatively, our model also generates more natural sounding speech, when conditioned on data from the other domain.", "title": "" }, { "docid": "cfc2c98e3422d32ca4c30fea1f18b74a", "text": "While it is known that academic searchers differ from typical web searchers, little is known about the search behavior of academic searchers over longer periods of time. In this study we take a look at academic searchers through a large-scale log analysis on a major academic search engine. We focus on two aspects: query reformulation patterns and topic shifts in queries. We first analyze how each of these aspects evolve over time. We identify important query reformulation patterns: revisiting and issuing new queries tend to happen more often over time. We also find that there are two distinct types of users: one type of users becomes increasingly focused on the topics they search for as time goes by, and the other becomes increasingly diversifying. After analyzing these two aspects separately, we investigate whether, and to which degree, there is a correlation between topic shifts and query reformulations. Surprisingly, users’ preferences of query reformulations correlate little with their topic shift tendency. However, certain reformulations may help predict the magnitude of the topic shift that happens in the immediate next timespan. Our results shed light on academic searchers’ information seeking behavior and may benefit search personalization.", "title": "" } ]
scidocsrr
9769f8fd969f8b42a3643e01d04ea6fc
CLUSTERGEN: a statistical parametric synthesizer using trajectory modeling
[ { "docid": "6d517b4459ee29c5554280e8339adbcc", "text": "This paper describes an HMM-based speech synthesis system (HTS), in which speech waveform is generated from HMMs themselves, and applies it to English speech synthesis using the general speech synthesis architecture of Festival. Similarly to other datadriven speech synthesis approaches, HTS has a compact language dependent module: a list of contextual factors. Thus, it could easily be extended to other languages, though the first version of HTS was implemented for Japanese. The resulting run-time engine of HTS has the advantage of being small: less than 1 M bytes, excluding text analysis part. Furthermore, HTS can easily change voice characteristics of synthesized speech by using a speaker adaptation technique developed for speech recognition. The relation between the HMM-based approach and other unit selection approaches is also discussed.", "title": "" } ]
[ { "docid": "2cea5f37c8c03fc0b6abc9e5d70bb1b3", "text": "This paper summarize our approach to author profiling task – a part of evaluation lab PAN’13. We have used ensemble-based classification on large features set. All the features are roughly described and experimental section provides evaluation of different methods and classification approaches.", "title": "" }, { "docid": "0fdc468347fc6c50767687d5364a098e", "text": "We study a generalization of the setting of regenerating codes, motivated by applications to storage systems consisting of clusters of storage nodes. There are n clusters in total, with m nodes per cluster. A data file is coded and stored across the mn nodes, with each node storing α symbols. For availability of data, we demand that the file is retrievable by downloading the entire content from any subset of k clusters. Nodes represent entities that can fail, and here we distinguish between intra-cluster and inter-cluster bandwidth-costs during node repair. Node-repair is accomplished by downloading β symbols each from any set of d other clusters. The replacement-node also downloads content from any set of ` surviving nodes in the same cluster during the repair process. We identity the optimal trade-off between storage-overhead and inter-cluster (IC) repair-bandwidth under functional repair, and also present optimal exact-repair code constructions for a class of parameters. Our results imply that it is possible to simultaneously achieve both optimal storage overhead and optimal minimum IC bandwidth, for sufficiently large values of nodes per cluster. The simultaneous optimality comes at the expense of intra-cluster bandwidth, and we obtain lower bounds on the necessary intra-cluster repair-bandwidth. Simulation results based on random linear network codes suggest optimality of the bounds on intra-cluster repair-bandwidth.", "title": "" }, { "docid": "2a5f555c00d98a87fe8dd6d10e27dc38", "text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.", "title": "" }, { "docid": "a0f4b7f3f9f2a5d430a3b8acead2b746", "text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. 
Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse", "title": "" }, { "docid": "eea288f275b0eab62dddd64a469a2d63", "text": "Glucose control serves as the primary method of diabetes management. Current digital therapeutic approaches for subjects with Type 1 diabetes mellitus (T1DM) such as the artificial pancreas and bolus calculators leverage machine learning techniques for predicting subcutaneous glucose for improved control. Deep learning has recently been applied in healthcare and medical research to achieve state-of-the-art results in a range of tasks including disease diagnosis, and patient state prediction among others. In this work, we present a deep learning model that is capable of predicting glucose levels over a 30-minute horizon with leading accuracy for simulated patient cases (RMSE = 10.02±1.28 [mg/dl] and MARD = 5.95±0.64%) and real patient cases (RMSE = 21.23±1.15 [mg/dl] and MARD = 10.53±1.28%). In addition, the model also provides competitive performance in forecasting adverse glycaemic events with minimal time lag both in a simulated patient dataset (MCChyperglycaemia = 0.82±0.06 and MCChypoglycaemia = 0.76±0.13) and in a real patient dataset (MCChyperglycaemia = 0.79±0.04 and MCChypoglycaemia = 0.28±0.11). This approach is evaluated on a dataset of 10 simulated cases generated from the UVa/Padova simulator and a clinical dataset of 5 real cases each containing glucose readings, insulin bolus, and meal (carbohydrate) data. Performance of the recurrent convolutional neural network is benchmarked against four state-of-the-art algorithms: support vector regression (SVR), latent variable (LVX) model, autoregressive model (ARX), and neural network for predicting glucose algorithm (NNPG).", "title": "" }, { "docid": "c67010d61ec7f9ea839bbf7d2dce72a1", "text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. 
More recently, there have been proposals to explore mmWave spectrum (3-300GHz) for commercial mobile applications due to its unique advantages such as spectrum availability and small component sizes. In this paper, we discuss system design aspects such as antenna array design, base station and mobile station requirements. We also provide system performance and SINR geometry results to demonstrate the feasibility of an outdoor mmWave mobile broadband communication system. We note that with adaptive antenna array beamforming, multi-Gbps data rates can be supported for mobile cellular deployments.", "title": "" }, { "docid": "61359ded391acaaaab0d4b9a0d851b8c", "text": "A laparoscopic Heller myotomy with partial fundoplication is considered today in most centers in the United States and abroad the treatment of choice for patients with esophageal achalasia. Even though the operation has initially a very high success rate, dysphagia eventually recurs in some patients. In these cases, it is important to perform a careful work-up to identify the cause of the failure and to design a tailored treatment plan by either endoscopic means or revisional surgery. The best results are obtained by a team approach, in Centers where radiologists, gastroenterologists, and surgeons have experience in the diagnosis and treatment of this disease.", "title": "" }, { "docid": "8f29de514e2a266a02be4b75d62be44f", "text": "In this work, we apply word embeddings and neural networks with Long Short-Term Memory (LSTM) to text classification problems, where the classification criteria are decided by the context of the application. We examine two applications in particular. The first is that of Actionability, where we build models to classify social media messages from customers of service providers as Actionable or Non-Actionable. We build models for over 30 different languages for actionability, and most of the models achieve accuracy around 85%, with some reaching over 90% accuracy. We also show that using LSTM neural networks with word embeddings vastly outperform traditional techniques. Second, we explore classification of messages with respect to political leaning, where social media messages are classified as Democratic or Republican. The model is able to classify messages with a high accuracy of 87.57%. As part of our experiments, we vary different hyperparameters of the neural networks, and report the effect of such variation on the accuracy. These actionability models have been deployed to production and help company agents provide customer support by prioritizing which messages to respond to. The model for political leaning has been opened and made available for wider use.", "title": "" }, { "docid": "c974e6b4031fde2b8e1de3ade33caef4", "text": "A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. 
Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. ∗We thank Torben Andersen, Tim Bollerslev, Peter Christoffersen as well as seminar participants at HEC, University of Montreal, University of Toronto, Goldman Sachs and CREATES, University of Aarhus, for helpful comments.", "title": "" }, { "docid": "247eced239dfd8c1631d80a592593471", "text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1", "title": "" }, { "docid": "049f0308869c53bbb60337874789d569", "text": "In machine learning, one of the main requirements is to build computational models with a high ability to generalize well the extracted knowledge. When training e.g. artificial neural networks, poor generalization is often characterized by over-training. A common method to avoid over-training is the hold-out crossvalidation. The basic problem of this method represents, however, appropriate data splitting. In most of the applications, simple random sampling is used. Nevertheless, there are several sophisticated statistical sampling methods suitable for various types of datasets. This paper provides a survey of existing sampling methods applicable to the data splitting problem. Supporting experiments evaluating the benefits of the selected data splitting techniques involve artificial neural networks of the back-propagation type.", "title": "" }, { "docid": "bbdd4ffd6797d00c3547626959118b92", "text": "A vision system was designed to detect multiple lanes on structured highway using an “estimate and detect” scheme. It detected the lane in which the vehicle was driving (the central lane) and estimated the possible position of two adjacent lanes. Then the detection was made based on these estimations. The vehicle was first recognized if it was driving on a straight road or in a curve using its GPS position and the OpenStreetMap digital map. The two cases were processed differently. For straight road, the central lane was detected in the original image using Hough transformation and a simplified perspective transformation was designed to make estimations. In the case of curve path, a complete perspective transformation was performed and the central lane was detected by scanning at each row in the top view image. The system was able to detected lane marks that were not distinct or even obstructed by other vehicles.", "title": "" }, { "docid": "9952748e3d86ac550a30c2e59ac1ccd3", "text": "Targeting Interleukin-1 in Heart Disease Print ISSN: 0009-7322. Online ISSN: 1524-4539 Copyright © 2013 American Heart Association, Inc. All rights reserved. 
is published by the American Heart Association, 7272 Greenville Avenue, Dallas, TX 75231 Circulation doi: 10.1161/CIRCULATIONAHA.113.003199 2013;128:1910-1923 Circulation. http://circ.ahajournals.org/content/128/17/1910 World Wide Web at: The online version of this article, along with updated information and services, is located on the", "title": "" }, { "docid": "8c46f24d8e710c5fb4e25be76fc5b060", "text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.", "title": "" }, { "docid": "432e8e346b2407cef8b6deabeea5d94e", "text": "Plant-based psychedelics, such as psilocybin, have an ancient history of medicinal use. After the first English language report on LSD in 1950, psychedelics enjoyed a short-lived relationship with psychology and psychiatry. Used most notably as aids to psychotherapy for the treatment of mood disorders and alcohol dependence, drugs such as LSD showed initial therapeutic promise before prohibitive legislature in the mid-1960s effectively ended all major psychedelic research programs. Since the early 1990s, there has been a steady revival of human psychedelic research: last year saw reports on the first modern brain imaging study with LSD and three separate clinical trials of psilocybin for depressive symptoms. In this circumspective piece, RLC-H and GMG share their opinions on the promises and pitfalls of renewed psychedelic research, with a focus on the development of psilocybin as a treatment for depression.", "title": "" }, { "docid": "3420aa0f36f8114a7c3962bf443bf884", "text": "In this paper, for the first time, 600 ∼ 6500 V IGBTs utilizing a new vertical structure of “Light Punch-Through (LPT) (II)” with Thin Wafer Process Technology demonstrate high total performance with low overall loss and high safety operating area (SOA) capability. This collector structure enables a wide position in the trade-off characteristics between on-state voltage (VCE(sat)) and turn-off loss (EOFF) without utilizing any conventional carrier lifetime technique. In addition, this device concept achieves a wide operating junction temperature (@218 ∼ 423 K) of IGBT without the snap-back phenomena (≤298 K) and thermal destruction (≥398 K). 
From the viewpoint of the high performance of IGBT, the breaking limitation of any Si wafer size, the proposed LPT(II) concept that utilizes an FZ silicon wafer and Thin Wafer Technology is the most promising candidate as a vertical structure of IGBT for the any voltage class.", "title": "" }, { "docid": "a049749849761dc4cd65d4442fd135f8", "text": "Local classifiers are sometimes called lazy learners because they do not train a classifier until presented with a test sample. However, such methods are generally not completely lazy because the neighborhood size k (or other locality parameter) is usually chosen by cross validation on the training set, which can require significant preprocessing and risks overfitting. We propose a simple alternative to cross validation of the neighborhood size that requires no preprocessing: instead of committing to one neighborhood size, average the discriminants for multiple neighborhoods. We show that this forms an expected estimated posterior that minimizes the expected Bregman loss with respect to the uncertainty about the neighborhood choice. We analyze this approach for six standard and state-of-the-art local classifiers, including discriminative adaptive metric kNN (DANN), a local support vector machine (SVM-KNN), hyperplane distance nearest neighbor (HKNN), and a new local Bayesian quadratic discriminant analysis (local BDA). The empirical effectiveness of this technique versus cross validation is confirmed with experiments on seven benchmark data sets, showing that similar classification performance can be attained without any training.", "title": "" }, { "docid": "567f48fef5536e9f44a6c66deea5375b", "text": "The principle of control signal amplification is found in all actuation systems, from engineered devices through to the operation of biological muscles. However, current engineering approaches require the use of hard and bulky external switches or valves, incompatible with both the properties of emerging soft artificial muscle technology and those of the bioinspired robotic systems they enable. To address this deficiency a biomimetic molecular-level approach is developed that employs light, with its excellent spatial and temporal control properties, to actuate soft, pH-responsive hydrogel artificial muscles. Although this actuation is triggered by light, it is largely powered by the resulting excitation and runaway chemical reaction of a light-sensitive acid autocatalytic solution in which the actuator is immersed. This process produces actuation strains of up to 45% and a three-fold chemical amplification of the controlling light-trigger, realising a new strategy for the creation of highly functional soft actuating systems.", "title": "" }, { "docid": "ad78f226f21bd020e625659ad3ddbf74", "text": "We study the approach to jamming in hard-sphere packings and, in particular, the pair correlation function g(2) (r) around contact, both theoretically and computationally. Our computational data unambiguously separate the narrowing delta -function contribution to g(2) due to emerging interparticle contacts from the background contribution due to near contacts. The data also show with unprecedented accuracy that disordered hard-sphere packings are strictly isostatic: i.e., the number of exact contacts in the jamming limit is exactly equal to the number of degrees of freedom, once rattlers are removed. 
For such isostatic packings, we derive a theoretical connection between the probability distribution of interparticle forces P(f) (f) , which we measure computationally, and the contact contribution to g(2) . We verify this relation for computationally generated isostatic packings that are representative of the maximally random jammed state. We clearly observe a maximum in P(f) and a nonzero probability of zero force, shedding light on long-standing questions in the granular-media literature. We computationally observe an unusual power-law divergence in the near-contact contribution to g(2) , persistent even in the jamming limit, with exponent -0.4 clearly distinguishable from previously proposed inverse-square-root divergence. Additionally, we present high-quality numerical data on the two discontinuities in the split-second peak of g(2) and use a shared-neighbor analysis of the graph representing the contact network to study the local particle clusters responsible for the peculiar features. Finally, we present the computational data on the contact contribution to g(2) for vacancy-diluted fcc crystal packings and also investigate partially crystallized packings along the transition from maximally disordered to fully ordered packings. We find that the contact network remains isostatic even when ordering is present. Unlike previous studies, we find that ordering has a significant impact on the shape of P(f) for small forces.", "title": "" }, { "docid": "f582f73b7a7a252d6c17766a9c5f8dee", "text": "The modern image search system requires semantic understanding of image, and a key yet under-addressed problem is to learn a good metric for measuring the similarity between images. While deep metric learning has yielded impressive performance gains by extracting high level abstractions from image data, a proper objective loss function becomes the central issue to boost the performance. In this paper, we propose a novel angular loss, which takes angle relationship into account, for learning better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method aims at constraining the angle at the negative point of triplet triangles. Several favorable properties are observed when compared with conventional methods. First, scale invariance is introduced, improving the robustness of objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing additional local structure of triplet triangles than contrastive loss or triplet loss. Third, better convergence has been demonstrated by experiments on three publicly available datasets.", "title": "" } ]
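One passage in the list above fits separate quantiles of the stock-return distribution to economic state variables. The standard device for this is the pinball (check) loss, sketched below; the linear model and the gradient-descent fit are illustrative assumptions, not code from that paper.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Check (pinball) loss for the tau-th conditional quantile."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def fit_linear_quantile(x, y, tau, lr=0.01, steps=5000):
    """Fit y ~ a*x + b for one quantile by (sub)gradient descent on the pinball loss."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        pred = a * x + b
        grad = np.where(y - pred > 0, -tau, 1.0 - tau)  # d(loss)/d(pred)
        a -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)
    return a, b

rng = np.random.default_rng(0)
x = rng.normal(size=1000)            # stand-in predictor (e.g. an economic state variable)
y = 0.5 * x + rng.normal(size=1000)  # stand-in returns
a, b = fit_linear_quantile(x, y, tau=0.9)   # upper-tail (90%) quantile
print(pinball_loss(y, a * x + b, tau=0.9))
```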
scidocsrr
57a18a8a899b95092f68ebc9351a9765
Bandwidth Enhancement of Small-Size Planar Tablet Computer Antenna Using a Parallel-Resonant Spiral Slit
[ { "docid": "75d486862b8d9eca63502ac6cbb936dc", "text": "A coupled-fed shorted monopole with its feed structure as an effective radiator for eight-band LTE/WWAN (LTE700/GSM850/900/1800/ 1900/UMTS/LTE2300/2500) operation in the laptop computer is presented. The radiating feed structure capacitively excites the shorted monopole. The feed structure mainly comprises a long feeding strip and a loop feed therein. The loop feed is formed at the front section of the feeding strip and connected to a 50-Ω mini-cable to feed the antenna. Both the feeding strip and loop feed contribute wideband resonant modes to combine with those generated by the shorted monopole for the desired eight-band operation. The antenna size above the top shielding metal wall of the laptop display is 4 × 10 × 80 mm3 and is suitable to be embedded inside the casing of the laptop computer. The proposed antenna is fabricated and tested, and good radiation performances of the fabricated antenna are obtained.", "title": "" }, { "docid": "bc69fe2a1791b8d7e0e262f8110df9d4", "text": "A small-size coupled-fed loop antenna suitable to be printed on the system circuit board of the mobile phone for penta-band WWAN operation (824-960/1710-2170 MHz) is presented. The loop antenna requires only a small footprint of 15 x 25 mm2 on the circuit board, and it can also be in close proximity to the surrounding ground plane printed on the circuit board. That is, very small or no isolation distance is required between the antenna's radiating portion and the nearby ground plane. This can lead to compact integration of the internal on-board printed antenna on the circuit board of the mobile phone, especially the slim mobile phone. The loop antenna also shows a simple structure; it is formed by a loop strip of about 87 mm with its end terminal short-circuited to the ground plane and its front section capacitively coupled to a feeding strip which is also an efficient radiator to contribute a resonant mode for the antenna's upper band to cover the GSM1800/1900/UMTS bands (1710-2170 MHz). Through the coupling excitation, the antenna can also generate a 0.25-wavelength loop resonant mode to form the antenna's lower band to cover the GSM850/900 bands (824-960 MHz). Details of the proposed antenna are presented. The SAR results for the antenna with the presence of the head and hand phantoms are also studied.", "title": "" }, { "docid": "7cc3d7722f978545a6735ae4982ffc62", "text": "A multiband printed monopole slot antenna promising for operating as an internal antenna in the thin-profile laptop computer for wireless wide area network (WWAN) operation is presented. The proposed antenna is formed by three monopole slots operated at their quarter-wavelength modes and arranged in a compact planar configuration. A step-shaped microstrip feedline is applied to excite the three monopole slots at their respective optimal feeding position, and two wide operating bands at about 900 and 1900 MHz are obtained for the antenna to cover all the five operating bands of GSM850/900/1800/1900/UMTS for WWAN operation. The antenna is easily printed on a small-size FR4 substrate and shows a length of 60 mm only and a height of 12 mm when mounted at the top edge of the system ground plane or supporting metal frame of the laptop display. Details of the proposed antenna are presented and studied.", "title": "" } ]
[ { "docid": "ef44e3456962ed4a857614b0782ed4d2", "text": "A sketching system for spline-based free-form surfaces on the Responsive Workbench is presented. We propose 3D tools for curve drawing and deformation techniques for curves and surfaces, adapted to the needs of designers. The user directly draws curves in the virtual environment, using a tracked stylus as an input device. A curve network can be formed, describing the skeleton of a virtual model. The non-dominant hand positions and orients the model while the dominant hand uses the editing tools. The curves and the resulting skinning surfaces can interactively be deformed.", "title": "" }, { "docid": "fc62b094df3093528c6846e405f55e39", "text": "Correctly classifying a skin lesion is one of the first steps towards treatment. We propose a novel convolutional neural network (CNN) architecture for skin lesion classification designed to learn based on information from multiple image resolutions while leveraging pretrained CNNs. While traditional CNNs are generally trained on a single resolution image, our CNN is composed of multiple tracts, where each tract analyzes the image at a different resolution simultaneously and learns interactions across multiple image resolutions using the same field-of-view. We convert a CNN, pretrained on a single resolution, to work for multi-resolution input. The entire network is fine-tuned in a fully learned end-to-end optimization with auxiliary loss functions. We show how our proposed novel multi-tract network yields higher classification accuracy, outperforming state-of-the-art multi-scale approaches when compared over a public skin lesion dataset.", "title": "" }, { "docid": "6080612b8858d633c3f63a3d019aef58", "text": "Color images provide large information for human visual perception compared to grayscale images. Color image enhancement methods enhance the visual data to increase the clarity of the color image. It increases human perception of information. Different color image contrast enhancement methods are used to increase the contrast of the color images. The Retinex algorithms enhance the color images similar to the scene perceived by the human eye. Multiscale retinex with color restoration (MSRCR) is a type of retinex algorithm. The MSRCR algorithm results in graying out and halo artifacts at the edges of the images. So here the focus is on improving the MSRCR algorithm by combining it with contrast limited adaptive histogram equalization (CLAHE) using image.", "title": "" }, { "docid": "3e9aa3bcc728f8d735f6b02e0d7f0502", "text": "Linda Marion is a doctoral student at Drexel University. E-mail: [email protected]. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. 
There was no identifiable “digital librarian” category.", "title": "" }, { "docid": "eb1045f1e85d7197a2952c6580604f75", "text": "There's a large push toward offering solutions and services in the cloud due to its numerous advantages. However, there are no clear guidelines for designing and deploying cloud solutions that can seamlessly operate to handle Web-scale traffic. The authors review industry best practices and identify principles for operating Web-scale cloud solutions by deriving design patterns that enable each principle in cloud solutions. In addition, using a seemingly straightforward cloud service as an example, they explain the application of the identified patterns.", "title": "" }, { "docid": "10b4d77741d40a410b30b0ba01fae67f", "text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: [email protected] (Oke). © 2006 AAEP.", "title": "" }, { "docid": "bd5b8680feac7b5ff806a6a40b9f73ae", "text": "Human variation in content selection in summarization has given rise to some fundamental research questions: How can one incorporate the observed variation in suitable evaluation measures? How can such measures reflect the fact that summaries conveying different content can be equally good and informative? In this article, we address these very questions by proposing a method for analysis of multiple human abstracts into semantic content units. Such analysis allows us not only to quantify human variation in content selection, but also to assign empirical importance weight to different content units. It serves as the basis for an evaluation method, the Pyramid Method, that incorporates the observed variation and is predictive of different equally informative summaries. We discuss the reliability of content unit annotation, the properties of Pyramid scores, and their correlation with other evaluation methods.", "title": "" }, { "docid": "def6cd29f4679acdc7d944d9a7e734e4", "text": "Question Answering (QA) is one of the most challenging and crucial tasks in Natural Language Processing (NLP) that has a wide range of applications in various domains, such as information retrieval and entity extraction. Traditional methods involve linguistically based NLP techniques, and recent researchers apply Deep Learning on this task and have achieved promising result. In this paper, we combined Dynamic Coattention Network (DCN) [1] and bilateral multiperspective matching (BiMPM) model [2], achieved an F1 score of 63.8% and exact match (EM) of 52.3% on test set.", "title": "" }, { "docid": "e4f4fe27fff75bd7ed079f3094deaedb", "text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. 
We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous system in terms of learning accuracies. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.", "title": "" }, { "docid": "98ce0c1bc955b7aa64e1820b56a1be6c", "text": "Lipid nanoparticles (LNPs) have attracted special interest during last few decades. Solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) are two major types of Lipid-based nanoparticles. SLNs were developed to overcome the limitations of other colloidal carriers, such as emulsions, liposomes and polymeric nanoparticles because they have advantages like good release profile and targeted drug delivery with excellent physical stability. In the next generation of the lipid nanoparticle, NLCs are modified SLNs which improve the stability and capacity loading. Three structural models of NLCs have been proposed. These LNPs have potential applications in drug delivery field, research, cosmetics, clinical medicine, etc. This article focuses on features, structure and innovation of LNPs and presents a wide discussion about preparation methods, advantages, disadvantages and applications of LNPs by focusing on SLNs and NLCs.", "title": "" }, { "docid": "1d1e89d6f1db290f01d296394d03a71b", "text": "Ontology mapping is seen as a solution provider in today’s landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.", "title": "" }, { "docid": "722a2b6f773473d032d202ce7aded43c", "text": "Detection of skin cancer in the earlier stage is very Important and critical. In recent days, skin cancer is seen as one of the most Hazardous form of the Cancers found in Humans. Skin cancer is found in various types such as Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most unpredictable. The detection of Melanoma cancer in early stage can be helpful to cure it. 
Computer vision can play important role in Medical Image Diagnosis and it has been proved by many existing systems. In this paper, we present a computer aided method for the detection of Melanoma Skin Cancer using Image processing tools. The input to the system is the skin lesion image and then by applying novel image processing techniques, it analyses it to conclude about the presence of skin cancer. The Lesion Image analysis tools checks for the various Melanoma parameters Like Asymmetry, Border, Colour, Diameter, (ABCD) etc. by texture, size and shape analysis for image segmentation and feature stages. The extracted feature parameters are used to classify the image as Normal skin and Melanoma cancer lesion.", "title": "" }, { "docid": "57d40d18977bc332ba16fce1c3cf5a66", "text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.", "title": "" }, { "docid": "4519e039416fe4548e08a15b30b8a14f", "text": "The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants. Guttman's linear and quadratic R-tree and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view the R*-tree is very attractive because of the following two reasons 1 it efficiently supports point and spatial data at the same time and 2 its implementation cost is only slightly higher than that of other R-trees.", "title": "" }, { "docid": "a7760563ce223473a3723e048b85427a", "text": "The concept of “task” is at the core of artificial intelligence (AI): Tasks are used for training and evaluating AI systems, which are built in order to perform and automatize tasks we deem useful. 
In other fields of engineering theoretical foundations allow thorough evaluation of designs by methodical manipulation of well understood parameters with a known role and importance; this allows an aeronautics engineer, for instance, to systematically assess the effects of wind speed on an airplane’s performance and stability. No framework exists in AI that allows this kind of methodical manipulation: Performance results on the few tasks in current use (cf. board games, question-answering) cannot be easily compared, however similar or different. The issue is even more acute with respect to artificial general intelligence systems, which must handle unanticipated tasks whose specifics cannot be known beforehand. A task theory would enable addressing tasks at the class level, bypassing their specifics, providing the appropriate formalization and classification of tasks, environments, and their parameters, resulting in more rigorous ways of measuring, comparing, and evaluating intelligent behavior. Even modest improvements in this direction would surpass the current ad-hoc nature of machine learning and AI evaluation. Here we discuss the main elements of the argument for a task theory and present an outline of what it might look like for physical tasks.", "title": "" }, { "docid": "4b33d61fce948b8c7942ca6180765a59", "text": "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.", "title": "" }, { "docid": "7417b84c36671fde36a88ccf661c99e1", "text": "The power MOSFET on 4H-SiC is an attractive high-speed and low-dissipation power switching device. The problem to be solved before realizing the 4H-SiC power MOSFET with low on-resistance is low channel mobility at the SiO2/SiC interface. This work has succeeded in increasing the channel mobility in the buried channel IEMOSFET on carbon-face substrate, and has achieved an extremely low on-resistance of 1.8 mΩcm2 with a blocking voltage of 660 V", "title": "" }, { "docid": "235899b940c658316693d0a481e2d954", "text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. 
These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.", "title": "" }, { "docid": "4b4cea4f58f33b9ace117fddd936d006", "text": "The paper presents a complete solution for recognition of textual and graphic structures in various types of documents acquired from the Internet. In the proposed approach, the document structure recognition problem is divided into sub-problems. The first one is localizing logical structure elements within the document. The second one is recognizing segmented logical structure elements. The input to the method is an image of document page, the output is the XML file containing all graphic and textual elements included in the document, preserving the reading order of document blocks. This file contains information about the identity and position of all logical elements in the document image. The paper describes all details of the proposed method and shows the results of the experiments validating its effectiveness. The results of the proposed method for paragraph structure recognition are comparable to the referenced methods which offer segmentation only.", "title": "" }, { "docid": "2f8430ae99d274bb1a08b031dfd1c11b", "text": "BACKGROUND\nCleft-lip nasal deformity (CLND) affects the overall facial appearance and attractiveness. The CLND nose shares some features in part with the aging nose.\n\n\nOBJECTIVES\nThis questionnaire survey examined: 1) the panel perceptions of the role of secondary cleft rhinoplasty in nasal rejuvenation; and 2) the influence of a medical background in cleft care, age and gender of the panel members on the estimated age of the CLND nose.\n\n\nSTUDY DESIGN\nUsing a cross-sectional study design, we enrolled a random sample of adult laypersons and health care providers. The predictor variables were secondary cleft rhinoplasty (before/after) and a medical background in cleft care (yes/no). The outcome variable was the estimated age of nose in photographs derived from 8 German nonsyndromic CLND patients. 
Other study variables included age, gender, and career of the assessors. Appropriate descriptive and univariate statistics were computed, and a P value of <.05 was considered to be statistically significant.\n\n\nRESULTS\nThe sample consisted of 507 lay volunteers and 51 medical experts (407 [72.9%] were female; mean age ± SD = 24.9 ± 8.2 y). The estimated age of the CLND noses was higher than their real age. The rhinoplasty decreased the estimated age to a statistically significant degree (P < .0001). A medical background, age, and gender of the participants were not individually associated with their votes (P > .05).\n\n\nCONCLUSIONS\nThe results of this study suggest that CLND noses lack youthful appearance. Secondary cleft rhinoplasty rejuvenates the nose and makes it come close to the actual age of the patients.", "title": "" } ]
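The breast-cancer passage above describes a fixed decision rule over five immunohistochemical markers: hormone receptors split luminal from nonluminal tumours, HER2 splits each of those, and the two HER2-negative groups are split again by the basal markers (CK5/6 or EGFR), giving six subtypes. The sketch below transcribes that rule literally; the subtype labels are conventional names supplied here for readability, not labels quoted from the passage.

```python
def breast_cancer_subtype(hr_positive, her2_positive, basal_marker_positive):
    """Six-way subtype from the marker rule described in the passage above."""
    if her2_positive:
        return "luminal HER2-positive" if hr_positive else "nonluminal HER2-positive"
    if hr_positive:  # luminal, HER2-negative: split on CK5/6 or EGFR
        return ("luminal HER2-negative, basal-marker-positive"
                if basal_marker_positive
                else "luminal HER2-negative, basal-marker-negative")
    # nonluminal, HER2-negative: split on CK5/6 or EGFR
    return ("nonluminal HER2-negative, basal-marker-positive"
            if basal_marker_positive
            else "nonluminal HER2-negative, basal-marker-negative")

print(breast_cancer_subtype(hr_positive=False, her2_positive=False, basal_marker_positive=True))
```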
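Another passage in the list above trains a shared model across data owners by exchanging weight parameters after local SGD rather than exchanging gradients. The toy round below illustrates only that distinction; the plain averaging of the shared weights and the linear model are assumptions made for the sketch, not the combination rule or protections of the actual system.

```python
import numpy as np

def local_sgd_step(w, x, y, lr=0.1):
    """One SGD step of linear least squares on a single party's private data."""
    grad = 2.0 * x.T @ (x @ w - y) / len(y)  # the gradient never leaves the party
    return w - lr * grad

def training_round(parties, w):
    local_weights = [local_sgd_step(w, x, y) for x, y in parties]  # local updates
    return np.mean(local_weights, axis=0)  # only weights are shared (assumed: averaged)

rng = np.random.default_rng(1)
parties = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(100):
    w = training_round(parties, w)
print(w)
```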
scidocsrr
522bb46a58652c1f314665fd7088ede0
Track k: medical information systems.
[ { "docid": "cdc3e4b096be6775547a8902af52e798", "text": "OBJECTIVE\nThe aim of the study was to present a systematic review of studies that investigate the effects of robot-assisted therapy on motor and functional recovery in patients with stroke.\n\n\nMETHODS\nA database of articles published up to October 2006 was compiled using the following Medline key words: cerebral vascular accident, cerebral vascular disorders, stroke, paresis, hemiplegia, upper extremity, arm, and robot. References listed in relevant publications were also screened. Studies that satisfied the following selection criteria were included: (1) patients were diagnosed with cerebral vascular accident; (2) effects of robot-assisted therapy for the upper limb were investigated; (3) the outcome was measured in terms of motor and/or functional recovery of the upper paretic limb; and (4) the study was a randomized clinical trial (RCT). For each outcome measure, the estimated effect size (ES) and the summary effect size (SES) expressed in standard deviation units (SDU) were calculated for motor recovery and functional ability (activities of daily living [ADLs]) using fixed and random effect models. Ten studies, involving 218 patients, were included in the synthesis. Their methodological quality ranged from 4 to 8 on a (maximum) 10-point scale.\n\n\nRESULTS\nMeta-analysis showed a nonsignificant heterogeneous SES in terms of upper limb motor recovery. Sensitivity analysis of studies involving only shoulder-elbow robotics subsequently demonstrated a significant homogeneous SES for motor recovery of the upper paretic limb. No significant SES was observed for functional ability (ADL).\n\n\nCONCLUSION\nAs a result of marked heterogeneity in studies between distal and proximal arm robotics, no overall significant effect in favor of robot-assisted therapy was found in the present meta-analysis. However, subsequent sensitivity analysis showed a significant improvement in upper limb motor function after stroke for upper arm robotics. No significant improvement was found in ADL function. However, the administered ADL scales in the reviewed studies fail to adequately reflect recovery of the paretic upper limb, whereas valid instruments that measure outcome of dexterity of the paretic arm and hand are mostly absent in selected studies. Future research into the effects of robot-assisted therapy should therefore distinguish between upper and lower robotics arm training and concentrate on kinematical analysis to differentiate between genuine upper limb motor recovery and functional recovery due to compensation strategies by proximal control of the trunk and upper limb.", "title": "" } ]
[ { "docid": "b0b024072e7cde0b404a9be5862ecdd1", "text": "Recent studies have led to the recognition of the epidermal growth factor receptor HER3 as a key player in cancer, and consequently this receptor has gained increased interest as a target for cancer therapy. We have previously generated several Affibody molecules with subnanomolar affinity for the HER3 receptor. Here, we investigate the effects of two of these HER3-specific Affibody molecules, Z05416 and Z05417, on different HER3-overexpressing cancer cell lines. Using flow cytometry and confocal microscopy, the Affibody molecules were shown to bind to HER3 on three different cell lines. Furthermore, the receptor binding of the natural ligand heregulin (HRG) was blocked by addition of Affibody molecules. In addition, both molecules suppressed HRG-induced HER3 and HER2 phosphorylation in MCF-7 cells, as well as HER3 phosphorylation in constantly HER2-activated SKBR-3 cells. Importantly, Western blot analysis also revealed that HRG-induced downstream signalling through the Ras-MAPK pathway as well as the PI3K-Akt pathway was blocked by the Affibody molecules. Finally, in an in vitro proliferation assay, the two Affibody molecules demonstrated complete inhibition of HRG-induced cancer cell growth. Taken together, our findings demonstrate that Z05416 and Z05417 exert an anti-proliferative effect on two breast cancer cell lines by inhibiting HRG-induced phosphorylation of HER3, suggesting that the Affibody molecules are promising candidates for future HER3-targeted cancer therapy.", "title": "" }, { "docid": "efb305d95cf7197877de0b2fb510f33a", "text": "Drug-induced cardiotoxicity is emerging as an important issue among cancer survivors. For several decades, this topic was almost exclusively associated with anthracyclines, for which cumulative dose-related cardiac damage was the limiting step in their use. Although a number of efforts have been directed towards prediction of risk, so far no consensus exists on the strategies to prevent and monitor chemotherapy-related cardiotoxicity. Recently, a new dimension of the problem has emerged when drugs targeting the activity of certain tyrosine kinases or tumor receptors were recognized to carry an unwanted effect on the cardiovascular system. Moreover, the higher than expected incidence of cardiac dysfunction occurring in patients treated with a combination of old and new chemotherapeutics (e.g. anthracyclines and trastuzumab) prompted clinicians and researchers to find an effective approach to the problem. From the pharmacological standpoint, putative molecular mechanisms involved in chemotherapy-induced cardiotoxicity will be reviewed. From the clinical standpoint, current strategies to reduce cardiotoxicity will be critically addressed. In this perspective, the precise identification of the antitarget (i.e. the unwanted target causing heart damage) and the development of guidelines to monitor patients undergoing treatment with cardiotoxic agents appear to constitute the basis for the management of drug-induced cardiotoxicity.", "title": "" }, { "docid": "cf1c04b4d0c61632d7a3969668d5e751", "text": "A 3 dB power divider/combiner in substrate integrated waveguide (SIW) technology is presented. The divider consists of an E-plane SIW bifurcation with an embedded thick film resistor. The transition divides a full-height SIW into two SIWs of half the height. The resistor provides isolation between these two. The divider is fabricated in a multilayer process using high frequency substrates. 
For the resistor carbon paste is printed on the middle layer of the stack-up. Simulation and measurement results are presented. The measured divider exhibits an isolation of better than 22 dB within a bandwidth of more than 3GHz at 20 GHz.", "title": "" }, { "docid": "7c27bfa849ba0bd49f9ddaec9beb19b5", "text": "Very High Spatial Resolution (VHSR) large-scale SAR image databases are still an unresolved issue in the Remote Sensing field. In this work, we propose such a dataset and use it to explore patch-based classification in urban and periurban areas, considering 7 distinct semantic classes. In this context, we investigate the accuracy of large CNN classification models and pre-trained networks for SAR imaging systems. Furthermore, we propose a Generative Adversarial Network (GAN) for SAR image generation and test, whether the synthetic data can actually improve classification accuracy.", "title": "" }, { "docid": "eb101664f08f0c5c7cf6bcf8e058b180", "text": "Rapidly progressive renal failure (RPRF) is an initial clinical diagnosis in patients who present with progressive renal impairment of short duration. The underlying etiology may be a primary renal disease or a systemic disorder. Important differential diagnoses include vasculitis (systemic or renal-limited), systemic lupus erythematosus, multiple myeloma, thrombotic microangiopathy and acute interstitial nephritis. Good history taking, clinical examination and relevant investigations including serology and ultimately kidney biopsy are helpful in clinching the diagnosis. Early definitive diagnosis of RPRF is essential to reverse the otherwise relentless progression to end-stage kidney disease.", "title": "" }, { "docid": "9441113599194d172b6f618058b2ba88", "text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.", "title": "" }, { "docid": "d4fff9c75f3e8e699bbf5815b81e77b0", "text": "We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. 
For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitutes a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset consisting of 83K carefully measured human psychophysical trials provide a useful reference for lifelong robustness against image degradations set by the human visual system.", "title": "" }, { "docid": "69624d1ab7b438d5ff4b5192f492a11a", "text": "1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.", "title": "" }, { "docid": "d035f857c5f9a57957314a574bb2b6ff", "text": "uted through the environments’ material and cultural artifacts and through other people in collaborative efforts to complete complex tasks (Latour, 1987; Pea, 1993). For example, Hutchins (1995a) documents how the task of landing a plane can be best understood through investigating a unit of analysis that includes the pilot, the manufactured tools, and the social context. In this case, the tools and social context are not merely “aides” to the pilot’s cognition but rather essential features of a composite. Similarly, tools such as calculators enable students to complete computational tasks in ways that would be distinctly different if the calculators were absent (Pea, 1993). In these cases, cognitive activity is “stretched over” actors and artifacts. Hence, human activity is best understood by considering both artifacts and actors together through cycles of task completion because the artifacts and actors are essentially intertwined in action contexts (Lave, 1988). In addition to material tools, action is distributed across language, theories of action, and interpretive schema, providing the “mediational means” that enable and transform intelligent social activity (Brown & Duguid, 1991; Leont’ev, 1975, 1981; Vygotsky, 1978; Wertsch, 1991). 
These material and cultural artifacts form identifiable aspects of the “sociocultural” context as products of particular social and cultural situations (Vygotsky, 1978; Wertsch, 1991). Actors develop common understandings and draw on cultural, social, and historical norms in order to think and act. Thus, even when a particular cognitive task is undertaken by an individual apparently in solo, the individual relies on a variety of sociocultural artifacts such as computational methods and language that are social in origin (Wertsch, 1991). HowWhile there is an expansive literature about what school structures, programs, and processes are necessary for instructional change, we know less about how these changes are undertaken or enacted by school leaders in their daily work. To study school leadership we must attend to leadership practice rather than chiefly or exclusively to school structures, programs, and designs. An in-depth analysis of the practice of school leaders is necessary to render an account of how school leadership works. Knowing what leaders do is one thing, but without a rich understanding of how and why they do it, our understanding of leadership is incomplete. To do that, it is insufficient to simply observe school leadership in action and generate thick descriptions of the observed practice. We need to observe from within a conceptual framework. In our opinion, the prevailing framework of individual agency, focused on positional leaders such as principals, is inadequate because leadership is not just a function of what these leaders know and do. Hence, our intent in this paper is to frame an exploration of how leaders think and act by developing a distributed perspective on leadership practice. The Distributed Leadership Study, a study we are currently conducting in Chicago, uses the distributed framework outlined in this paper to frame a program of research that examines the practice of leadership in urban elementary schools working to change mathematics, science, and literacy instruction (see http://www.letus.org/ dls/index.htm). This 4-year longitudinal study, funded by the National Science Foundation and the Spencer Foundation, is designed to make the “black box” of leadership practice more transparent through an in-depth analysis of leadership practice. This research identifies the tasks, actors, actions, and interactions of school leadership as they unfold together in the daily life of schools. The research program involves in-depth observations and interviews with formal and informal leaders and classroom teachers as well as a social network analysis in schools in the Chicago metropolitan area. We outline the distributed framework below, beginning with a brief review of the theoretical underpinnings for this work—distributed cognition and activity theory—which we then use to re-approach the subject of leadership practice. Next we develop our distributed theory of leadership around four ideas: leadership tasks and functions, task enactment, social distribution of task enactment, and situational distribution of task enactment. Our central argument is that school leadership is best understood as a distributed practice, stretched over the school’s social and situational contexts.", "title": "" }, { "docid": "e808fa6ebe5f38b7672fad04c5f43a3a", "text": "A series of GeoVoCamps, run at least twice a year in locations in the U.S., have focused on ontology design patterns as an approach to inform metadata and data models, and on applications in the GeoSciences. 
In this note, we will redraw the brief history of the series as well as rationales for the particular approach which was chosen, and report on the ongoing uptake of the approach.", "title": "" }, { "docid": "95746fa1170e0498e92a443e6fc92336", "text": "A paradigm shift is taking place in medicine from using synthetic implants and tissue grafts to a tissue engineering approach that uses degradable porous material scaffolds integrated with biological cells or molecules to regenerate tissues. This new paradigm requires scaffolds that balance temporary mechanical function with mass transport to aid biological delivery and tissue regeneration. Little is known quantitatively about this balance as early scaffolds were not fabricated with precise porous architecture. Recent advances in both computational topology design (CTD) and solid free-form fabrication (SFF) have made it possible to create scaffolds with controlled architecture. This paper reviews the integration of CTD with SFF to build designer tissue-engineering scaffolds. It also details the mechanical properties and tissue regeneration achieved using designer scaffolds. Finally, future directions are suggested for using designer scaffolds with in vivo experimentation to optimize tissue-engineering treatments, and coupling designer scaffolds with cell printing to create designer material/biofactor hybrids.", "title": "" }, { "docid": "f3348f2323a5a97980551f00367703d1", "text": "Bacterial samples had been isolated from clinically detected diseased juvenile Pangasius, collected from Mymensingh, Bangladesh. Primarily, the isolates were found as Gram-negative, motile, oxidase-positive, fermentative, and O/129 resistant Aeromonas bacteria. The species was exposed as Aeromonas hydrophila from esculin hydrolysis test. Ten isolates of A. hydrophila were identified from eye lesions, kidney, and liver of the infected fishes. Further characterization of A. hydrophila was accomplished using API-20E and antibiotic sensitivity test. Isolates were highly resistant to amoxyclav among ten different antibiotics. All isolates were found as immensely pathogenic to healthy fishes while intraperitoneal injection. Histopathologically, necrotic hematopoietic tissues with pyknotic nuclei, mild hemorrhage, and wide vacuolation in kidney, liver, and muscle were principally noticed due to Aeromonad infection. So far, this is the first full note on characterizing A. hydrophila from diseased farmed Pangasius in Bangladesh. The present findings will provide further direction to develop theranostic strategies of A. hydrophila infection.", "title": "" }, { "docid": "bb28519ca1161bafb9b3812b1fd66ed1", "text": "Considering the variations of inertia in real applications, an adaptive control scheme for the permanent-magnet synchronous motor speed-regulation system is proposed in this paper. First, a composite control method, i.e., the extended-state-observer (ESO)-based control method, is employed to ensure the performance of the closed-loop system. The ESO can estimate both the states and the disturbances simultaneously so that the composite speed controller can have a corresponding part to compensate for the disturbances. Then, considering the case of variations of load inertia, an adaptive control scheme is developed by analyzing the control performance relationship between the feedforward compensation gain and the system inertia. 
By using inertia identification techniques, a fuzzy-inferencer-based supervisor is designed to automatically tune the feedforward compensation gain according to the identified inertia. Simulation and experimental results both show that the proposed method achieves a better speed response in the presence of inertia variations.", "title": "" }, { "docid": "9b8317646ce6cad433e47e42198be488", "text": "OBJECTIVE\nDigital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what are the enabling factors and barriers to effective use. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention.\n\n\nMETHODS\nA qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change.\n\n\nRESULTS\nThe core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness.\n\n\nCONCLUSIONS\nMobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle. The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.", "title": "" }, { "docid": "865ca372a2b073e672c535a94c04c2ad", "text": "The work presented here involves the design of a Multi Layer Perceptron (MLP) based pattern classifier for recognition of handwritten Bangla digits using a 76 element feature vector. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten Bangla numerals here includes 24 shadow features, 16 centroid features and 36 longest-run features. On experimentation with a database of 6000 samples, the technique yields an average recognition rate of 96.67% evaluated after three-fold cross validation of results. 
It is useful for applications related to OCR of handwritten Bangla digits and can also be extended to include OCR of handwritten characters of the Bangla alphabet.", "title": "" }, { "docid": "8c47d9a93e3b9d9f31b77b724bf45578", "text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated in 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.", "title": "" }, { "docid": "17f171d0d91c1d914600a238f6446650", "text": "One of the cornerstones of the field of signal processing on graphs is graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design philosophy, which allows us to design the ARMA coefficients independently from the underlying graph, renders the ARMA graph filters suitable in static and, particularly, time-varying settings. The latter occur when the graph signal and/or graph are changing over time. We show that in case of a time-varying graph signal our approach extends naturally to a two-dimensional filter, operating concurrently in the graph and regular time domains. We also derive sufficient conditions for filter stability when the graph and signal are time-varying. The analytical and numerical results presented in this paper illustrate that ARMA graph filters are practically appealing for static and time-varying settings, accompanied by strong theoretical guarantees. Keywords— distributed graph filtering, signal processing on graphs, time-varying graph signals, time-varying graphs", "title": "" }, { "docid": "257b4e500cb0342835cd139e4eb11570", "text": "The capability of avoiding obstacles is one of the key issues in the autonomous search-and-rescue robots research area. In this study, the obstacle avoidance capability has been provided to the virtual robots in the USARSim environment. The aim is to find the minimum movement when the robot faces an obstacle in its path. For obstacle avoidance we used a real-time path planning method called the Vector Field Histogram (VFH). After experiments we observed that the VFH method is a successful method for obstacle avoidance. Moreover, the usage of the VFH method greatly increases the amount of visited places per unit time.", "title": "" }, { "docid": "ce9238236040aed852b1c8f255088b61", "text": "This paper proposes a high efficiency LLC resonant inverter for induction heating applications by using asymmetrical voltage cancellation control. The proposed control method is implemented in a full-bridge topology for induction heating application. The operating frequency is automatically adjusted to maintain a small constant lagging phase angle under load parameter variation. The output power is controlled using the asymmetrical voltage cancellation technique. 
The LLC resonant tank is designed without the use of output transformer. This results in an increase of the net efficiency of the induction heating system. The validity of the proposed method is verified through computer simulation and hardware experiment at the operating frequency of 93 to 96 kHz.", "title": "" }, { "docid": "6806ff9626d68336dce539a8f2c440af", "text": "Obesity and hypertension, major risk factors for the metabolic syndrome, render individuals susceptible to an increased risk of cardiovascular complications, such as adverse cardiac remodeling and heart failure. There has been much investigation into the role that an increase in the renin-angiotensin-aldosterone system (RAAS) plays in the pathogenesis of metabolic syndrome and in particular, how aldosterone mediates left ventricular hypertrophy and increased cardiac fibrosis via its interaction with the mineralocorticoid receptor (MR). Here, we review the pertinent findings that link obesity with elevated aldosterone and the development of cardiac hypertrophy and fibrosis associated with the metabolic syndrome. These studies illustrate a complex cross-talk between adipose tissue, the heart, and the adrenal cortex. Furthermore, we discuss findings from our laboratory that suggest that cardiac hypertrophy and fibrosis in the metabolic syndrome may involve cross-talk between aldosterone and adipokines (such as adiponectin).", "title": "" } ]
scidocsrr
2c574cc023094e7773ecd17a6bb84cda
Parallelizing MCMC via Weierstrass Sampler
[ { "docid": "20deb56f6d004a8e33d1e1a4f579c1ba", "text": "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.", "title": "" } ]
[ { "docid": "72b93e02049b837a7990225494883708", "text": "Cloud computing is emerging as a major trend in the ICT industry. While most of the attention of the research community is focused on considering the perspective of the Cloud providers, offering mechanisms to support scaling of resources and interoperability and federation between Clouds, the perspective of developers and operators willing to choose the Cloud without being strictly bound to a specific solution is mostly neglected.\n We argue that Model-Driven Development can be helpful in this context as it would allow developers to design software systems in a cloud-agnostic way and to be supported by model transformation techniques into the process of instantiating the system into specific, possibly, multiple Clouds. The MODAClouds (MOdel-Driven Approach for the design and execution of applications on multiple Clouds) approach we present here is based on these principles and aims at supporting system developers and operators in exploiting multiple Clouds for the same system and in migrating (part of) their systems from Cloud to Cloud as needed. MODAClouds offers a quality-driven design, development and operation method and features a Decision Support System to enable risk analysis for the selection of Cloud providers and for the evaluation of the Cloud adoption impact on internal business processes. Furthermore, MODAClouds offers a run-time environment for observing the system under execution and for enabling a feedback loop with the design environment. This allows system developers to react to performance fluctuations and to re-deploy applications on different Clouds on the long term.", "title": "" }, { "docid": "e118177a0fc9fad704b2be958b01a873", "text": "Safety stories specify safety requirements, using the EARS (Easy Requirements Specification) format. Software practitioners can use them in agile projects at lower levels of safety criticality to deal effectively with safety concerns.", "title": "" }, { "docid": "c08518b806c93dde1dd04fdf3c9c45bb", "text": "Purpose – The objectives of this article are to develop a multiple-item scale for measuring e-service quality and to study the influence of perceived quality on consumer satisfaction levels and the level of web site loyalty. Design/methodology/approach – First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings – The validation process of scales suggested that perceived quality is a multidimensional construct: web design, customer service, assurance and order management; that perceived quality influences on satisfaction; and that satisfaction influences on consumer loyalty. Moreover, no differences in these conclusions were observed if the total sample is divided between buyers and information searchers. Practical implications – First, the need to develop user-friendly web sites which ease consumer purchasing and searching, thus creating a suitable framework for the generation of higher satisfaction and loyalty levels. Second, the web site manager should enhance service loyalty, customer sensitivity, personalised service and a quick response to complaints. Third, the web site should uphold sufficient security levels in communications and meet data protection requirements regarding the privacy. 
Lastly, the need for correct product delivery and product manipulation or service is recommended. Originality/value – Most relevant studies about perceived quality in the internet have focused on web design aspects. Moreover, the existing literature regarding internet consumer behaviour has not fully analysed profits generated by higher perceived quality in terms of user satisfaction and loyalty.", "title": "" }, { "docid": "a6ce059863bc504242dff00025791b01", "text": "We examined allelic polymorphisms of the serotonin transporter (5-HTT) gene and antidepressant response to 6 weeks' treatment with the selective serotonin reuptake inhibitor (SSRI) drugs fluoxetine or paroxetine. We genotyped 120 patients and 252 normal controls, using polymerase chain reaction of genomic DNA with primers flanking the second intron and promoter regions of the 5-HTT gene. Diagnosis of depression was not associated with 5-HTT polymorphisms. Patients homozygous l/l in intron 2 or homozygous s/s in the promoter region showed better responses than all others (p < 0.0001, p = 0.0074, respectively). Lack of the l/l allele form in intron 2 most powerfully predicted non-response (83.3%). Response to SSRI drugs is related to allelic variation in the 5-HTT gene in depressed Korean patients.", "title": "" }, { "docid": "d3f256c026125f98ccb09fd6403ee5a0", "text": "Endocytic mechanisms control the lipid and protein composition of the plasma membrane, thereby regulating how cells interact with their environments. Here, we review what is known about mammalian endocytic mechanisms, with focus on the cellular proteins that control these events. We discuss the well-studied clathrin-mediated endocytic mechanisms and dissect endocytic pathways that proceed independently of clathrin. These clathrin-independent pathways include the CLIC/GEEC endocytic pathway, arf6-dependent endocytosis, flotillin-dependent endocytosis, macropinocytosis, circular doral ruffles, phagocytosis, and trans-endocytosis. We also critically review the role of caveolae and caveolin1 in endocytosis. We highlight the roles of lipids, membrane curvature-modulating proteins, small G proteins, actin, and dynamin in endocytic pathways. We discuss the functional relevance of distinct endocytic pathways and emphasize the importance of studying these pathways to understand human disease processes.", "title": "" }, { "docid": "20df8d71b963a432f4a0ea5fc129463a", "text": "This study provided a comparative analysis of three social network sites, the open-to-all Facebook, the professionally oriented LinkedIn and the exclusive, members-only ASmallWorld.The analysis focused on the underlying structure or architecture of these sites, on the premise that it may set the tone for particular types of interaction.Through this comparative examination, four themes emerged, highlighting the private/public balance present in each social networking site, styles of self-presentation in spaces privately public and publicly private, cultivation of taste performances as a mode of sociocultural identification and organization and the formation of tight or loose social settings. Facebook emerged as the architectural equivalent of a glasshouse, with a publicly open structure, looser behavioral norms and an abundance of tools that members use to leave cues for each other. 
LinkedIn and ASmallWorld produced tighter spaces, which were consistent with the taste ethos of each network and offered less room for spontaneous interaction and network generation.", "title": "" }, { "docid": "dc6ee3d45fa76aafe45507b0778018d5", "text": "Traditional endpoint protection will not address the looming cybersecurity crisis because it ignores the source of the problem--the vast online black market buried deep within the Internet.", "title": "" }, { "docid": "c42edb326ec95c257b821cc617e174e6", "text": "Recommendation systems support users and developers of various computer and software systems to overcome information overload, perform information discovery tasks and approximate computation, among others. They have recently become popular and have attracted a wide variety of application scenarios from business process modelling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions.", "title": "" }, { "docid": "097cab15476b850df18e625530c25821", "text": "The Internet of Things (IoT) has been growing in recent years with the improvements in several different applications in the military, marine, intelligent transportation, smart health, smart grid, smart home and smart city domains. Although IoT brings significant advantages over traditional information and communication (ICT) technologies for Intelligent Transportation Systems (ITS), these applications are still very rare. Although there is a continuous improvement in road and vehicle safety, as well as improvements in IoT, road traffic accidents have been increasing over the last decades. Therefore, it is necessary to find an effective way to reduce the frequency and severity of traffic accidents. Hence, this paper presents an intelligent traffic accident detection system in which vehicles exchange their microscopic vehicle variables with each other. The proposed system uses simulated data collected from vehicular ad-hoc networks (VANETs) based on the speeds and coordinates of the vehicles and then, it sends traffic alerts to the drivers. Furthermore, it shows how machine learning methods can be exploited to detect accidents on freeways in ITS. It is shown that if position and velocity values of every vehicle are given, vehicles' behavior could be analyzed and accidents can be detected easily. Supervised machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), and Random Forests (RF) are implemented on traffic data to develop a model to distinguish accident cases from normal cases. The performance of the RF algorithm, in terms of its accuracy, was found superior to ANN and SVM algorithms. 
The RF algorithm showed better performance with 91.56% accuracy than SVM with 88.71% and ANN with 90.02% accuracy.", "title": "" }, { "docid": "a19f4e5f36b04fed7937be1c90ce3581", "text": "This paper describes a map-matching algorithm designed to support the navigational functions of a real-time vehicle performance and emissions monitoring system currently under development, and other transport telematics applications. The algorithm is used together with the outputs of an extended Kalman filter formulation for the integration of GPS and dead reckoning data, and a spatial digital database of the road network, to provide continuous, accurate and reliable vehicle location on a given road segment. This is irrespective of the constraints of the operational environment, thus alleviating outage and accuracy problems associated with the use of stand-alone location sensors. The map-matching algorithm has been tested using real field data and has been found to be superior to existing algorithms, particularly in how it performs at road intersections.", "title": "" }, { "docid": "42c0f8504f26d46a4cc92d3c19eb900d", "text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggests they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.", "title": "" }, { "docid": "d8780989fc125b69beb456986819d624", "text": "The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are derived. The exploration–exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given.", "title": "" }, { "docid": "eec0aecb9b41fa1b2db390bdab2c4c44", "text": "Wi-Fi Tracking: Fingerprinting Attacks and CounterMeasures. The recent spread of everyday-carried Wi-Fi-enabled devices (smartphones, tablets and wearable devices) comes with a privacy threat to their owner, and to society as a whole. These devices continuously emit signals which can be captured by a passive attacker using cheap hardware and basic knowledge. These signals contain a unique identifier, called the MAC address. To mitigate the threat, device vendors are currently deploying a countermeasure on new devices: MAC address randomization. 
Unfortunately, we show that this mitigation, in its current state, is insufficient to prevent tracking. To do so, we introduce several attacks, based on the content and the timing of emitted signals. In complement, we study implementations of MAC address randomization in some recent devices, and find a number of shortcomings limiting the efficiency of these implementations at preventing device tracking. At the same time, we perform two real-world studies. The first one considers the development of actors exploiting this issue to install Wi-Fi tracking systems. We list some real-world installations and discuss their various aspects, including regulation, privacy implications, consent and public acceptance. The second one deals with the spread of MAC address randomization in the devices population. Finally, we present two tools: an experimental Wi-Fi tracking system for testing and public awareness raising purposes, and a tool estimating the uniqueness of a device based on the content of its emitted signals even if the identifier is randomized.", "title": "" }, { "docid": "0d509af77c0bb093d534cd95102b8941", "text": "A compelling body of evidence indicates that observing a task-irrelevant action makes the execution of that action more likely. However, it remains unclear whether this 'automatic imitation' effect is indeed automatic or whether the imitative action is voluntary. The present study tested the automaticity of automatic imitation by asking whether it occurs in a strategic context where it reduces payoffs. Participants were required to play rock-paper-scissors, with the aim of achieving as many wins as possible, while either one or both players were blindfolded. While the frequency of draws in the blind-blind condition was precisely that expected at chance, the frequency of draws in the blind-sighted condition was significantly elevated. Specifically, the execution of either a rock or scissors gesture by the blind player was predictive of an imitative response by the sighted player. That automatic imitation emerges in a context where imitation reduces payoffs accords with its 'automatic' description, and implies that these effects are more akin to involuntary than to voluntary actions. These data represent the first evidence of automatic imitation in a strategic context, and challenge the abstraction from physical aspects of social interaction typical in economic and game theory.", "title": "" }, { "docid": "83ae128f71bb154177881012dfb6a680", "text": "Cell imbalance in large battery packs degrades their capacity delivery, especially for cells connected in series where the weakest cell dominates their overall capacity. In this article, we present a case study of exploiting system reconfigurations to mitigate the cell imbalance in battery packs. Specifically, instead of using all the cells in a battery pack to support the load, selectively skipping cells to be discharged may actually enhance the pack’s capacity delivery. Based on this observation, we propose CSR, a Cell Skipping-assisted Reconfiguration algorithm that identifies the system configuration with (near)-optimal capacity delivery. We evaluate CSR using large-scale emulation based on empirically collected discharge traces of 40 lithium-ion cells. 
CSR achieves close-to-optimal capacity delivery when the cell imbalance in the battery pack is low and improves the capacity delivery by about 20% and up to 1x in the case of a high imbalance.", "title": "" }, { "docid": "d0cbdd5230d97d16b9955013699df5aa", "text": "There has been a great deal of recent interest in statistical models of 2D landmark data for generating compact deformable models of a given object. This paper extends this work to a class of parametrised shapes where there are no landmarks available. A rigorous statistical framework for the eigenshape model is introduced, which is an extension to the conventional Linear Point Distribution Model. One of the problems associated with landmark free methods is that a large degree of variability in any shape descriptor may be due to the choice of parametrisation. An automated training method is described which utilises an iterative feedback method to overcome this problem. The result is an automatically generated compact linear shape model. The model has been successfully applied to a problem of tracking the outline of a walking pedestrian in real time.", "title": "" }, { "docid": "e7d36dc01a3e20c3fb6d2b5245e46705", "text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.", "title": "" }, { "docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd", "text": "In this paper, two main contributions are presented to manage the power flow between a wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed atmospheric conditions. The second one is to respond to real-time control system constraints and to improve the generating system performance. 
For this, a hardware implementation of the proposed algorithm is performed using the Xilinx system generator. The experimental results show that the suggested system presents high accuracy and acceptable execution time performances. The proposed model and its control strategy offer a proper tool for optimizing the hybrid power system performance which we can use in smart house applications.", "title": "" }, { "docid": "12b1f774967739ea12a1ddcfe43f2faf", "text": "Herbal drug authentication is an important task in traditional medicine; however, it is challenged by the limitations of traditional authentication methods and the lack of trained experts. DNA barcoding is conspicuous in almost all areas of the biological sciences and has already been added to the British pharmacopeia and Chinese pharmacopeia for routine herbal drug authentication. However, DNA barcoding for the Korean pharmacopeia still requires significant improvements. Here, we present a DNA barcode reference library for herbal drugs in the Korean pharmacopeia and developed a species identification engine named KP-IDE to facilitate the adoption of this DNA reference library for the herbal drug authentication. Using taxonomy records, specimen records, sequence records, and reference records, KP-IDE can identify an unknown specimen. Currently, there are 6,777 taxonomy records, 1,054 specimen records, 30,744 sequence records (ITS2 and psbA-trnH) and 285 reference records. Moreover, 27 herbal drug materials were collected from the Seoul Yangnyeongsi herbal medicine market to give an example for real herbal drugs authentications. Our study demonstrates the prospects of the DNA barcode reference library for the Korean pharmacopeia and provides future directions for the use of DNA barcoding for authenticating herbal drugs listed in other modern pharmacopeias.", "title": "" }, { "docid": "2b4b822d722fac299ae7504078d87fd0", "text": "LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec. 2008. This version, 4.0, was released in July 2009. Very different from previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page collection (~25M pages) and two query sets from Million Query track of TREC 2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short. There are about 1700 queries in MQ2007 with labeled documents and about 800 queries in MQ2008 with labeled documents. If you have any questions or suggestions about the datasets, please kindly email us ([email protected]). Our goal is to make the dataset reliable and useful for the community.", "title": "" } ]
scidocsrr
bf623450847729a11f23edf73f994522
Memory Engram Cells Have Come of Age
[ { "docid": "f006b6e0768e001d9593b14c8800cfde", "text": "Do learning and retrieval of a memory activate the same neurons? Does the number of reactivated neurons correlate with memory strength? We developed a transgenic mouse that enables the long-lasting genetic tagging of c-fos-active neurons. We found neurons in the basolateral amygdala that are activated during Pavlovian fear conditioning and are reactivated during memory retrieval. The number of reactivated neurons correlated positively with the behavioral expression of the fear memory, indicating a stable neural correlate of associative memory. The ability to manipulate these neurons genetically should allow a more precise dissection of the molecular mechanisms of memory encoding within a distributed neuronal network.", "title": "" }, { "docid": "da3201add57485d574c71c6fa95fc28c", "text": "Two experiments (modeled after J. Deese's 1959 study) revealed remarkable levels of false recall and false recognition in a list learning paradigm. In Experiment 1, subjects studied lists of 12 words (e.g., bed, rest, awake); each list was composed of associates of 1 nonpresented word (e.g., sleep). On immediate free recall tests, the nonpresented associates were recalled 40% of the time and were later recognized with high confidence. In Experiment 2, a false recall rate of 55% was obtained with an expanded set of lists, and on a later recognition test, subjects produced false alarms to these items at a rate comparable to the hit rate. The act of recall enhanced later remembering of both studied and nonstudied material. The results reveal a powerful illusion of memory: People remember events that never happened.", "title": "" } ]
[ { "docid": "a33486dfec199cd51e885d6163082a96", "text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.", "title": "" }, { "docid": "13c0ada0fafb6babdd50847a779abfee", "text": "Pyro is a probabilistic programming language built on Python as a platform for developing advanced probabilistic models in AI research. To scale to large datasets and high-dimensional models, Pyro uses stochastic variational inference algorithms and probability distributions built on top of PyTorch, a modern GPU-accelerated deep learning framework. To accommodate complex or model-specific algorithmic behavior, Pyro leverages Poutine, a library of composable building blocks for modifying the behavior of probabilistic programs.", "title": "" }, { "docid": "7fe0c40d6f62d24b4fb565d3341c1422", "text": "Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible. 
This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code.", "title": "" }, { "docid": "4f400f8e774ebd050ba914011da73514", "text": "This paper summarizes the method of polyp detection in colonoscopy images and provides preliminary results to participate in ISBI 2015 Grand Challenge on Automatic Polyp Detection in Colonoscopy videos. The key aspect of the proposed method is to learn hierarchical features using convolutional neural network. The features are learned in different scales to provide scale-invariant features through the convolutional neural network, and then each pixel in the colonoscopy image is classified as polyp pixel or non-polyp pixel through fully connected network. The result is refined via smooth filtering and thresholding step. Experimental result shows that the proposed neural network can classify patches of polyp and non-polyp region with an accuracy of about 90%.", "title": "" }, { "docid": "fabc65effd31f3bb394406abfa215b3e", "text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).", "title": "" }, { "docid": "f264d5b90dfb774e9ec2ad055c4ebe62", "text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. 
To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.", "title": "" }, { "docid": "fa6c797c1aad378198363ada5435f361", "text": "The first workshop on Interactive Data Mining is held in Melbourne, Australia, on February 15, 2019 and is co-located with 12th ACM International Conference on Web Search and Data Mining (WSDM 2019). The goal of this workshop is to share and discuss research and projects that focus on interaction with and interactivity of data mining systems. The program includes invited speaker, presentation of research papers, and a discussion session.", "title": "" }, { "docid": "6172f0048a770cadc0220c3cf1ff5e2b", "text": "The interpretation of the resource-conflict link that has become most publicized—the rebel greed hypothesis—depends on just one of many plausible mechanisms that could underlie a relationship between resource dependence and violence. The author catalogues a large range of rival possible mechanisms, highlights a set of techniques that may be used to identify these mechanisms, and begins to employ these techniques to distinguish between rival accounts of the resource-conflict linkages. The author uses finer natural resource data than has been used in the past, gathering and presenting new data on oil and diamonds production and on oil stocks. The author finds evidence that (1) conflict onset is more responsive to the impacts of past natural resource production than to the potential for future production, supporting a weak states mechanism rather than a rebel greed mechanism; (2) the impact of natural resources on conflict cannot easily be attributed entirely to the weak states mechanism, and in particular, the impact of natural resources is independent of state strength; (3) the link between primary commodities and conflict is driven in part by agricultural dependence rather than by natural resources more narrowly defined, a finding consistent with a “sparse networks” mechanism; (4) natural resources are associated with shorter wars, and natural resource wars are more likely to end with military victory for one side than other wars. This is consistent with evidence that external actors have incentives to work to bring wars to a close when natural resource supplies are threatened. The author finds no evidence that resources are associated with particular difficulties in negotiating ends to conflicts, contrary to arguments that loot-seeking rebels aim to prolong wars.", "title": "" }, { "docid": "9a85994a8668a6cbb5646570fc20177c", "text": "This paper investigates the application of linear learning techniques to the place recognition problem. We present two learning methods, a supervised change prediction technique based on linear regression and an unsupervised change removal technique based on principal component analysis, and investigate how the performance of each is affected by the choice of training data. 
We show that the change prediction technique presented here succeeds only if it is provided with appropriate and adequate training data, which can be challenging for a mobile robotic system operating in an uncontrolled environment. In contrast, change removal can improve place recognition performance even when trained with as few as 100 samples. This paper shows that change removal can be combined with a number of different image descriptors and can improve performance across a range of different appearance conditions.", "title": "" }, { "docid": "b740f07b95041e764bfe8cb5a59b14a8", "text": "We present in this paper a statistical model for languageindependent bi-directional conversion between spelling and pronunciation, based on joint grapheme/phoneme units extracted from automatically aligned data. The model is evaluated on spelling-to-pronunciation and pronunciation-tospelling conversion on the NetTalk database and the CMU dictionary. We also study the effect of including lexical stress in the pronunciation. Although a direct comparison is difficult to make, our model’s performance appears to be as good or better than that of other data-driven approaches that have been applied to the same tasks.", "title": "" }, { "docid": "20832ede6851f36d6a249e044c28892a", "text": "Mobile learning highly prioritizes the successful acquisition of context-aware contents from a learning server. A variant of 2D barcodes, the quick response (QR) code, which can be rapidly read using a PDA equipped with a camera and QR code reading software, is considered promising for context-aware applications. This work presents a novel QR code and handheld augmented reality (AR) supported mobile learning (m-learning) system: the handheld English language learning organization (HELLO). In the proposed English learning system, the linked information between context-aware materials and learning zones is defined in the QR codes. Each student follows the guide map displayed on the phone screen to visit learning zones and decrypt QR codes. The detected information is then sent to the learning server to request and receive context-aware learning material wirelessly. Additionally, a 3D animated virtual learning partner is embedded in the learning device based on AR technology, enabling students to complete their context-aware immersive learning. A case study and a survey conducted in a university demonstrate the effectiveness of the proposed m-learning system.", "title": "" }, { "docid": "e26d6c67b36aad6d1c93c315c222fccb", "text": "Populated IP addresses (PIP) -- IP addresses that are associated with a large number of user requests are important for online service providers to efficiently allocate resources and to detect attacks. While some PIPs serve legitimate users, many others are heavily abused by attackers to conduct malicious activities such as scams, phishing, and malware distribution. Unfortunately, commercial proxy lists like Quova have a low coverage of PIP addresses and offer little support for distinguishing good PIPs from abused ones. In this study, we propose PIPMiner, a fully automated method to extract and classify PIPs through analyzing service logs. Our methods combine machine learning and time series analysis to distinguish good PIPs from abused ones with over 99.6% accuracy. 
When applying the derived PIP list to several applications, we can identify millions of malicious Windows Live accounts right on the day of their sign-ups, and detect millions of malicious Hotmail accounts well before the current detection system captures them.", "title": "" }, { "docid": "3c98c5bd1d9a6916ce5f6257b16c8701", "text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.", "title": "" }, { "docid": "af0dfe672a8828587e3b27ef473ea98e", "text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.", "title": "" }, { "docid": "1a8e346b6f2cd1c368f449f9a9474e5c", "text": "Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions performed on an initial program input, the fuzzing agent learns a policy that can next generate new higher-reward inputs. 
We have implemented this new approach, and preliminary empirical evidence shows that reinforcement fuzzing can outperform baseline random fuzzing.", "title": "" }, { "docid": "36c11c29f6605f7c234e68ecba2a717a", "text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.", "title": "" }, { "docid": "a85c6e8a666d079c60b9bc31d6d9ae62", "text": "When pedestrians encounter vehicles, they typically stop and wait for a signal from the driver to either cross or wait. What happens when the car is autonomous and there isn’t a human driver to signal them? This paper seeks to address this issue with an intent communication system (ICS) that acts in place of a human driver. This intent system has been developed to take into account the psychology behind what pedestrians are familiar with and what they expect from machines. The system integrates those expectations into the design of physical systems and mathematical algorithms. The goal of the system is to ensure that communication is simple, yet effective without leaving pedestrians with a sense of distrust in autonomous vehicles. To validate the ICS, two types of experiments have been run: field tests with an autonomous vehicle to determine how humans actually interact with the ICS and simulations to account for multiple potential behaviors.The results from both experiments show that humans react positively and more predictably when the intent of the vehicle is communicated compared to when the intent of the vehicle is unknown. In particular, the results from the simulation specifically showed a 142 percent difference between the pedestrian’s trust in the vehicle’s actions when the ICS is enabled and the pedestrian has prior knowledge of the vehicle than when the ICS is not enabled and the pedestrian having no prior knowledge of the vehicle.", "title": "" }, { "docid": "936c4fb60d37cce15ed22227d766908f", "text": "English. The SENTIment POLarity Classification Task 2016 (SENTIPOLC), is a rerun of the shared task on sentiment classification at the message level on Italian tweets proposed for the first time in 2014 for the Evalita evaluation campaign. It includes three subtasks: subjectivity classification, polarity classification, and irony detection. In 2016 SENTIPOLC has been again the most participated EVALITA task with a total of 57 submitted runs from 13 different teams. 
We present the datasets – which includes an enriched annotation scheme for dealing with the impact on polarity of a figurative use of language – the evaluation methodology, and discuss results and participating systems. Italiano. Descriviamo modalità e risultati della seconda edizione della campagna di valutazione di sistemi di sentiment analysis (SENTIment POLarity Classification Task), proposta nel contesto di “EVALITA 2016: Evaluation of NLP and Speech Tools for Italian”. In SENTIPOLC è stata valutata la capacità dei sistemi di riconoscere diversi aspetti del sentiment espresso nei messaggi Twitter in lingua italiana, con un’articolazione in tre sottotask: subjectivity classification, polarity classification e irony detection. La campagna ha suscitato nuovamente grande interesse, con un totale di 57 run inviati da 13 gruppi di partecipanti.", "title": "" }, { "docid": "3b4622a4ad745fc0ffb3b6268eb969fa", "text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. 
Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos", "title": "" }, { "docid": "a4073ab337c0d4ef73dceb1a32e1f878", "text": "Conditional belief networks introduce stochastic binary variables in neural networks. Contrary to a classical neural network, a belief network can predict more than the expected value of the output Y given the input X . It can predict a distribution of outputs Y which is useful when an input can admit multiple outputs whose average is not necessarily a valid answer. Such networks are particularly relevant to inverse problems such as image prediction for denoising, or text to speech. However, traditional sigmoid belief networks are hard to train and are not suited to continuous problems. This work introduces a new family of networks called linearizing belief nets or LBNs. A LBN decomposes into a deep linear network where each linear unit can be turned on or off by non-deterministic binary latent units. It is a universal approximator of real-valued conditional distributions and can be trained using gradient descent. Moreover, the linear pathways efficiently propagate continuous information and they act as multiplicative skip-connections that help optimization by removing gradient diffusion. This yields a model which trains efficiently and improves the state-of-the-art on image denoising and facial expression generation with the Toronto faces dataset.", "title": "" } ]
scidocsrr
b720c9f662b395d0237232a6b0c85d5c
Hidden Roles of CSR: Perceived Corporate Social Responsibility as a Preventive against Counterproductive Work Behaviors
[ { "docid": "92d1abda02a6c6e1c601930bfbb7ed3d", "text": "In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees’ perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees’ perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees’ perceptions of CSR. Limitations of this study and future research directions are discussed.", "title": "" }, { "docid": "fb34b610cd933da8c7f863249f32f9a2", "text": "The purpose of this research was to develop broad, theoretically derived measure(s) of deviant behavior in the workplace. Two scales were developed: a 12-item scale of organizational deviance (deviant behaviors directly harmful to the organization) and a 7-item scale of interpersonal deviance (deviant behaviors directly harmful to other individuals within the organization). These scales were found to have internal reliabilities of .81 and .78, respectively. Confirmatory factor analysis verified that a 2-factor structure had acceptable fit. Preliminary evidence of construct validity is also provided. The implications of this instrument for future empirical research on workplace deviance are discussed.", "title": "" } ]
[ { "docid": "73270e8140d763510d97f7bd2fdd969e", "text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "title": "" }, { "docid": "266b705308b6f7c236f54bb327f315ec", "text": "In this paper, we examine the generalization error of regularized distance metric learning. We show that with appropriate constraints, the generalization error of regularized distance metric learning could be independent from the dimensionality, making it suitable for handling high dimensional data. In addition, we present an efficient online learning algorithm for regularized distance metric learning. Our empirical studies with data classification and face recognition show that the proposed algorithm is (i) effective for distance metric learning when compared to the state-of-the-art methods, and (ii) efficient and robust for high dimensional data.", "title": "" }, { "docid": "4b1c46a58d132e3b168186848122e1d0", "text": "Recently, there has been considerable interest in providing \"trusted computing platforms\" using hardware~---~TCPA and Palladium being the most publicly visible examples. In this paper we discuss our experience with building such a platform using a traditional time-sharing operating system executing on XOM~---~a processor architecture that provides copy protection and tamper-resistance functions. In XOM, only the processor is trusted; main memory and the operating system are not trusted.Our operating system (XOMOS) manages hardware resources for applications that don't trust it. This requires a division of responsibilities between the operating system and hardware that is unlike previous systems. We describe techniques for providing traditional operating systems services in this context.Since an implementation of a XOM processor does not exist, we use SimOS to simulate the hardware. We modify IRIX 6.5, a commercially available operating system to create xomos. 
We are then able to analyze the performance and implementation overheads of running an untrusted operating system on trusted hardware.", "title": "" }, { "docid": "f7d56588da8f5c5ac0f1481e5f2286b4", "text": "Machine learning is an established method of selecting algorithms to solve hard search problems. Despite this, to date no systematic comparison and evaluation of the different techniques has been performed and the performance of existing systems has not been critically compared to other approaches. We compare machine learning techniques for algorithm selection on real-world data sets of hard search problems. In addition to well-established approaches, for the first time we also apply statistical relational learning to this problem. We demonstrate that most machine learning techniques and existing systems perform less well than one might expect. To guide practitioners, we close by giving clear recommendations as to which machine learning techniques are likely to perform well based on our experiments.", "title": "" }, { "docid": "980565c38859db2df10db238d8a4dc61", "text": "Performing High Voltage (HV) tasks with a multi craft work force create a special set of safety circumstances. This paper aims to present vital information relating to when it is acceptable to use a single or a two-layer soil structure. Also it discusses the implication of the high voltage infrastructure on the earth grid and the safety of this implication under a single or a two-layer soil structure. A multiple case study is investigated to show the importance of using the right soil resistivity structure during the earthing system design. Keywords—Earth Grid, EPR, High Voltage, Soil Resistivity Structure, Step Voltage, Touch Voltage.", "title": "" }, { "docid": "3a75bf4c982d076fce3b4cdcd560881a", "text": "This project is one of the research topics in Professor William Dally’s group. In this project, we developed a pruning based method to learn both weights and connections for Long Short Term Memory (LSTM). In this method, we discard the unimportant connections in a pretrained LSTM, and make the weight matrix sparse. Then, we retrain the remaining model. After we remaining model is converge, we prune this model again and retrain the remaining model iteratively, until we achieve the desired size of model and performance. This method will save the size of the LSTM as well as prevent overfitting. Our results retrained on NeuralTalk shows that we can discard nearly 90% of the weights without hurting the performance too much. Part of the results in this project will be posted in NIPS 2015.", "title": "" }, { "docid": "64d4776be8e2dbb0fa3b30d6efe5876c", "text": "This paper presents a novel method for hierarchically organizing large face databases, with application to efficient identity-based face retrieval. The method relies on metric learning with local binary pattern (LBP) features. On one hand, LBP features have proved to be highly resilient to various appearance changes due to illumination and contrast variations while being extremely efficient to calculate. On the other hand, metric learning (ML) approaches have been proved very successful for face verification ‘in the wild’, i.e. in uncontrolled face images with large amounts of variations in pose, expression, appearances, lighting, etc. 
While such ML based approaches compress high dimensional features into low dimensional spaces using discriminatively learned projections, the complexity of retrieval is still significant for large scale databases (with millions of faces). The present paper shows that learning such discriminative projections locally while organizing the database hierarchically leads to a more accurate and efficient system. The proposed method is validated on the standard Labeled Faces in the Wild (LFW) benchmark dataset with millions of additional distracting face images collected from photos on the internet.", "title": "" }, { "docid": "17f0fbd3ab3b773b5ef9d636700b5af6", "text": "Motor sequence learning is a process whereby a series of elementary movements is re-coded into an efficient representation for the entire sequence. Here we show that human subjects learn a visuomotor sequence by spontaneously chunking the elementary movements, while each chunk acts as a single memory unit. The subjects learned to press a sequence of 10 sets of two buttons through trial and error. By examining the temporal patterns with which subjects performed a visuomotor sequence, we found that the subjects performed the 10 sets as several clusters of sets, which were separated by long time gaps. While the overall performance time decreased by repeating the same sequence, the clusters became clearer and more consistent. The cluster pattern was uncorrelated with the distance of hand movements and was different across subjects who learned the same sequence. We then split a learned sequence into three segments, while preserving or destroying the clusters in the learned sequence, and shuffled the segments. The performance on the shuffled sequence was more accurate and quicker when the clusters in the original sequence were preserved than when they were destroyed. The results suggest that each cluster is processed as a single memory unit, a chunk, and is necessary for efficient sequence processing. A learned visuomotor sequence is hierarchically represented as chunks that contain several elementary movements. We also found that the temporal patterns of sequence performance transferred from the nondominant to dominant hand, but not vice versa. This may suggest a role of the dominant hemisphere in storage of learned chunks. Together with our previous unit-recording and imaging studies that used the same learning paradigm, we predict specific roles of the dominant parietal area, basal ganglia, and presupplementary motor area in the chunking.", "title": "" }, { "docid": "9b1643284b783f2947be11f16ae8d942", "text": "We investigate the task of modeling opendomain, multi-turn, unstructured, multiparticipant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant’s history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. 
We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.", "title": "" }, { "docid": "35a298d5ec169832c3faf2e30d95e1a4", "text": "© 2001 Massachusetts Institute of Technology, Cambridge, MA 02139 USA — www.ai.mit.edu", "title": "" }, { "docid": "fe08f3e1dc4fe2d71059b483c8532e88", "text": "Digital asset management (DAM) has increasing benefits in booming global Internet economy, but it is still a great challenge for providing an effective way to manage, store, ingest, organize and retrieve digital asset. To do it, we present a new digital asset management platform, called DAM-Chain, with Transaction-based Access Control (TBAC) which integrates the distribution ABAC model and the blockchain technology. In this platform, the ABAC provides flexible and diverse authorization mechanisms for digital asset escrowed into blockchain while the blockchain's transactions serve as verifiable and traceable medium of access request procedure. We also present four types of transactions to describe the TBAC access control procedure, and provide the algorithms of these transactions corresponding to subject registration, object escrowing and publication, access request and grant. By maximizing the strengths of both ABAC and blockchain, this platform can support flexible and diverse permission management, as well as verifiable and transparent access authorization process in an open decentralized environment.", "title": "" }, { "docid": "8eb161e363d55631148ed3478496bbd5", "text": "This paper proposes a new power-factor-correction (PFC) topology, and explains its operation principle, its control mechanism, related application problems followed by experimental results. In this proposed topology, critical-conduction-mode (CRM) interleaved technique is applied to a bridgeless PFC in order to achieve high efficiency by combining benefits of each topology. This application is targeted toward low to middle power applications that normally employ a continuous-conduction-mode boost converter. key words: PFC, Interleaved, critical-conduction-mode, totem-pole", "title": "" }, { "docid": "dfbf5c12d8e5a8e5e81de5d51f382185", "text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated.
Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.", "title": "" }, { "docid": "a4d177e695f83ddbaad38b5aa5c34baa", "text": "Introduction Digital technologies play an increasingly important role in shaping the profile of human thought and action. In the few short decades since its invention, for example, the World Wide Web has transformed the way we shop, date, socialize and undertake scientific endeavours. We are also witnessing an unprecedented rate of technological innovation and change, driven, at least in part, by exponential rates of growth in computing power and performance. The technological landscape is thus a highly dynamic one – new technologies are being introduced all the time, and the rate of change looks set to continue unabated. In view of all this, it is natural to wonder about the effects of new technology on both ourselves and the societies in which we live.", "title": "" }, { "docid": "1bcb0d930848fab3e5b8aee3c983e45b", "text": "BACKGROUND\nLycopodium clavatum (Lyc) is a widely used homeopathic medicine for the liver, urinary and digestive disorders. Recently, acetyl cholinesterase (AchE) inhibitory activity has been found in Lyc alkaloid extract, which could be beneficial in dementia disorder. However, the effect of Lyc has not yet been explored in animal model of memory impairment and on cerebral blood flow.\n\n\nAIM\nThe present study was planned to explore the effect of Lyc on learning and memory function and cerebral blood flow (CBF) in intracerebroventricularly (ICV) administered streptozotocin (STZ) induced memory impairment in rats.\n\n\nMATERIALS AND METHODS\nMemory deficit was induced by ICV administration of STZ (3 mg/kg) in rats on 1st and 3rd day. Male SD rats were treated with Lyc Mother Tincture (MT) 30, 200 and 1000 for 17 days. Learning and memory was evaluated by Morris water maze test on 14th, 15th and 16th day. CBF was measured by Laser Doppler flow meter on 17th day.\n\n\nRESULTS\nSTZ (ICV) treated rats showed impairment in learning and memory along with reduced CBF. Lyc MT and 200 showed improvement in learning and memory. There was increased CBF in STZ (ICV) treated rats at all the potencies of Lyc studied.\n\n\nCONCLUSION\nThe above study suggests that Lyc may be used as a drug of choice in condition of memory impairment due to its beneficial effect on CBF.", "title": "" }, { "docid": "a023b7a853733b92287efcddc67976ae", "text": "Intensive use of e-business can provide number of opportunities and actual benefits to companies of all activities and sizes. In general, through the use of web sites companies can create global presence and widen business boundaries. Many organizations now have websites to complement their other activities, but it is likely that a smaller proportion really know how successful their sites are and in what extent they comply with business objectives. A key enabler of web sites measurement is web site analytics and metrics. Web sites analytics especially refers to the use of data collected from a web site to determine which aspects of the web site work towards the business objectives. Advanced web analytics must play an important role in overall company strategy and should converge to web intelligence – a specific part of business intelligence which collect and analyze information collected from web sites and apply them in relevant ‘business’ context. 
This paper examines the importance of measuring the web site quality of the Croatian hotels. Wide range of web site metrics are discussed and finally a set of 8 dimensions and 44 attributes chosen for the evaluation of Croatian hotel’s web site quality. The objective of the survey conducted on the 30 hotels was to identify different groups of hotel web sites in relation to their quality measured with specific web metrics. Key research question was: can hotel web sites be placed into meaningful groups by consideration of variation in web metrics and number of hotel stars? To confirm the reliability of chosen metrics a Cronbach's alpha test was conducted. Apart from descriptive statistics tools, to answer the posed research question, clustering analysis was conducted and the characteristics of the three clusters were considered. Experiences and best practices of the hotel web sites clusters are taken as the prime source of recommendation for improving web sites quality level. Key-Words: web metrics, hotel web sites, web analytics, web site audit, web site quality, cluster analysis", "title": "" }, { "docid": "30ba7b3cf3ba8a7760703a90261d70eb", "text": "Starch is a major storage product of many economically important crops such as wheat, rice, maize, tapioca, and potato. A large-scale starch processing industry has emerged in the last century. In the past decades, we have seen a shift from the acid hydrolysis of starch to the use of starch-converting enzymes in the production of maltodextrin, modified starches, or glucose and fructose syrups. Currently, these enzymes comprise about 30% of the world’s enzyme production. Besides the use in starch hydrolysis, starch-converting enzymes are also used in a number of other industrial applications, such as laundry and porcelain detergents or as anti-staling agents in baking. A number of these starch-converting enzymes belong to a single family: the α-amylase family or family 13 glycosyl hydrolases. This group of enzymes share a number of common characteristics such as a (β/α)8 barrel structure, the hydrolysis or formation of glycosidic bonds in the α conformation, and a number of conserved amino acid residues in the active site. As many as 21 different reaction and product specificities are found in this family. Currently, 25 three-dimensional (3D) structures of a few members of the α-amylase family have been determined using protein crystallization and X-ray crystallography. These data in combination with site-directed mutagenesis studies have helped to better understand the interactions between the substrate or product molecule and the different amino acids found in and around the active site. This review illustrates the reaction and product diversity found within the α-amylase family, the mechanistic principles deduced from structure–function relationship structures, and the use of the enzymes of this family in industrial applications. © 2002 Elsevier Science B.V.
All rights reserved.", "title": "" }, { "docid": "5184b25a4d056b861f5dbae34300344a", "text": "AFFILIATIONS: asHouri, Hsu, soroosHian, and braitHwaite— Center for Hydrometeorology and Remote Sensing, Henry Samueli School of Engineering, Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California; Knapp and neLson—NOAA/National Climatic Data Center, Asheville, North Carolina; CeCiL—Global Science & Technology, Inc., Asheville, North Carolina; prat—Cooperative Institute for Climate and Satellites, North Carolina State University, and NOAA/National Climatic Data Center, Asheville, North Carolina CORRESPONDING AUTHOR: Hamed Ashouri, Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 E-mail: [email protected]", "title": "" }, { "docid": "0e74994211d0e3c1e85ba0c85aba3df5", "text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.", "title": "" }, { "docid": "0f3b2081ecd311b7b2555091aaca2571", "text": "Maximum Power Point Tracking (MPPT) is widely used control technique to extract maximum power available from the solar cell of photovoltaic (PV) module. Since the solar cells have non-linear i–v characteristics. The efficiency of PV module is very low and power output depends on solar insolation level and ambient temperature, so maximization of power output with greater efficiency is of special interest. Moreover there is great loss of power due to mismatch of source and load. So, to extract maximum power from solar panel a MPPT needs to be designed. The objective of the paper is to present a novel cost effective and efficient microcontroller based MPPT system for solar photovoltaic system to ensure fast maximum power point operation at all fast changing environmental conditions. The proposed controller scheme utilizes PWM techniques to regulate the output power of boost DC/DC converter at its maximum possible value and simultaneously controls the charging process of battery. Incremental Conductance algorithm is implemented to track maximum power point. For the feasibility study, parameter extraction, model evaluation and analysis of converter system design a MATLAB/Simulink model is demonstrated and simulated for a typical 40W solar panel from Kyocera KC-40 for hardware implementation and verification. Finally, a hardware model is designed and tested in lab at different operating conditions. Further, MPPT system has been tested with Solar Panel at different solar insolation level and temperature. 
The resulting system has high efficiency, lower cost, and very fast tracking speed, and can be easily modified for additional control functions in the future.", "title": "" } ]
scidocsrr
51f3961336efb81b85462a9fd239944b
A model for improved association of radar and camera objects in an indoor environment
[ { "docid": "8e18fa3850177d016a85249555621723", "text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.", "title": "" } ]
[ { "docid": "00eeceba7118e7a8a2f68deadc612f14", "text": "I n the growing fields of wearable robotics, rehabilitation robotics, prosthetics, and walking robots, variable stiffness actuators (VSAs) or adjustable compliant actuators are being designed and implemented because of their ability to minimize large forces due to shocks, to safely interact with the user, and their ability to store and release energy in passive elastic elements. This review article describes the state of the art in the design of actuators with adaptable passive compliance. This new type of actuator is not preferred for classical position-controlled applications such as pick and place operations but is preferred in novel robots where safe human– robot interaction is required or in applications where energy efficiency must be increased by adapting the actuator’s resonance frequency. The working principles of the different existing designs are explained and compared. The designs are divided into four groups: equilibrium-controlled stiffness, antagonistic-controlled stiffness, structure-controlled stiffness (SCS), and mechanically controlled stiffness. In classical robotic applications, actuators are preferred to be as stiff as possible to make precise position movements or trajectory tracking control easier (faster systems with high bandwidth). The biological counterpart is the muscle that has superior functional performance and a neuromechanical control system that is much more advanced at adapting and tuning its parameters. The superior power-to-weight ratio, force-toweight ratio, compliance, and control of muscle, when compared with traditional robotic actuators, are the main barriers for the development of machines that can match the motion, safety, and energy efficiency of human or other animals. One of the key differences of these systems is the compliance or springlike behavior found in biological systems [1]. Although such compliant", "title": "" }, { "docid": "b910de28ecbfa82713b30f5918eaae80", "text": "Raman microscopy is a non-destructive technique requiring minimal sample preparation that can be used to measure the chemical properties of the mineral and collagen parts of bone simultaneously. Modern Raman instruments contain the necessary components and software to acquire the standard information required in most bone studies. The spatial resolution of the technique is about a micron. As it is non-destructive and small samples can be used, it forms a useful part of a bone characterisation toolbox.", "title": "" }, { "docid": "a84ee8a0f06e07abd53605bf5b542519", "text": "Abeta peptide accumulation is thought to be the primary event in the pathogenesis of Alzheimer's disease (AD), with downstream neurotoxic effects including the hyperphosphorylation of tau protein. Glycogen synthase kinase-3 (GSK-3) is increasingly implicated as playing a pivotal role in this amyloid cascade. We have developed an adult-onset Drosophila model of AD, using an inducible gene expression system to express Arctic mutant Abeta42 specifically in adult neurons, to avoid developmental effects. Abeta42 accumulated with age in these flies and they displayed increased mortality together with progressive neuronal dysfunction, but in the apparent absence of neuronal loss. This fly model can thus be used to examine the role of events during adulthood and early AD aetiology. 
Expression of Abeta42 in adult neurons increased GSK-3 activity, and inhibition of GSK-3 (either genetically or pharmacologically by lithium treatment) rescued Abeta42 toxicity. Abeta42 pathogenesis was also reduced by removal of endogenous fly tau; but, within the limits of detection of available methods, tau phosphorylation did not appear to be altered in flies expressing Abeta42. The GSK-3-mediated effects on Abeta42 toxicity appear to be at least in part mediated by tau-independent mechanisms, because the protective effect of lithium alone was greater than that of the removal of tau alone. Finally, Abeta42 levels were reduced upon GSK-3 inhibition, pointing to a direct role of GSK-3 in the regulation of Abeta42 peptide level, in the absence of APP processing. Our study points to the need both to identify the mechanisms by which GSK-3 modulates Abeta42 levels in the fly and to determine if similar mechanisms are present in mammals, and it supports the potential therapeutic use of GSK-3 inhibitors in AD.", "title": "" }, { "docid": "ceb270c07d26caec5bc20e7117690f9f", "text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].", "title": "" }, { "docid": "16f75bcd060ae7a7b6f7c9c8412ca479", "text": "Deep neural networks (DNNs) are powerful machine learning models and have succeeded in various artificial intelligence tasks. Although various architectures and modules for the DNNs have been proposed, selecting and designing the appropriate network structure for a target problem is a challenging task. In this paper, we propose a method to simultaneously optimize the network structure and weight parameters during neural network training. We consider a probability distribution that generates network structures, and optimize the parameters of the distribution instead of directly optimizing the network structure. The proposed method can apply to the various network structure optimization problems under the same framework. We apply the proposed method to several structure optimization problems such as selection of layers, selection of unit types, and selection of connections using the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the proposed method can find the appropriate and competitive network structures.", "title": "" }, { "docid": "ac9f71a97f6af0718587ffd0ea92d31d", "text": "Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. 
Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Finegrained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack. ACM Reference Format: Jed Liu, Joe Corbett-Davies, Andrew Ferraiuolo, Alexander Ivanov, Mulong Luo, G. Edward Suh, Andrew C. Myers, and Mark Campbell. 2018. Secure Autonomous Cyber-Physical Systems Through Verifiable Information Flow Control. InWorkshop on Cyber-Physical Systems Security & Privacy (CPS-SPC ’18), October 19, 2018, Toronto, ON, Canada. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3264888.3264889", "title": "" }, { "docid": "0afd0f70859772054e589a2256efeba4", "text": "Hair is typically modeled and rendered using either explicitly defined hair strand geometry or a volume texture of hair densities. Taken each on their own, these two hair representations have difficulties in the case of animal fur as it consists of very dense and thin undercoat hairs in combination with coarse guard hairs. Explicit hair strand geometry is not well-suited for the undercoat hairs, while volume textures are not well-suited for the guard hairs. To efficiently model and render both guard hairs and undercoat hairs, we present a hybrid technique that combines rasterization of explicitly defined guard hairs with ray marching of a prismatic shell volume with dynamic resolution. The latter is the key to practical combination of the two techniques, and it also enables a high degree of detail in the undercoat. We demonstrate that our hybrid technique creates a more detailed and soft fur appearance as compared with renderings that only use explicitly defined hair strands. Finally, our rasterization approach is based on order-independent transparency and renders high-quality fur images in seconds.", "title": "" }, { "docid": "ab70c8814c0e15695c8142ce8aad69bc", "text": "Domain-oriented dialogue systems are often faced with users that try to cross the limits of their knowledge, by unawareness of its domain limitations or simply to test their capacity. These interactions are considered to be Out-Of-Domain and several strategies can be found in the literature to deal with some specific situations. Since if a certain input appears once, it has a non-zero probability of being entered later, the idea of taking advantage of real human interactions to feed these dialogue systems emerges, thus, naturally. 
In this paper, we introduce the SubTle Corpus, a corpus of Interaction-Response pairs extracted from subtitles files, created to help dialogue systems to deal with Out-of-Domain interactions.", "title": "" }, { "docid": "d75ebc4041927b525d8f4937c760518e", "text": "Most current term frequency normalization approaches for information retrieval involve the use of parameters. The tuning of these parameters has an important impact on the overall performance of the information retrieval system. Indeed, a small variation in the involved parameter(s) could lead to an important variation in the precision/recall values. Most current tuning approaches are dependent on the document collections. As a consequence, the effective parameter value cannot be obtained for a given new collection without extensive training data. In this paper, we propose a novel and robust method for the tuning of term frequency normalization parameter(s), by measuring the normalization effect on the within document frequency of the query terms. As an illustration, we apply our method on Amati \\& Van Rijsbergen's so-called normalization 2. The experiments for the ad-hoc TREC-6,7,8 tasks and TREC-8,9,10 Web tracks show that the new method is independent of the collections and able to provide reliable and good performance.", "title": "" }, { "docid": "ee82b52d5a0bc28a0a8e78e09da09340", "text": "AIMS\nExcessive internet use is becoming a concern, and some have proposed that it may involve addiction. We evaluated the dimensions assessed by, and psychometric properties of, a range of questionnaires purporting to assess internet addiction.\n\n\nMETHODS\nFourteen questionnaires were identified purporting to assess internet addiction among adolescents and adults published between January 1993 and October 2011. Their reported dimensional structure, construct, discriminant and convergent validity and reliability were assessed, as well as the methods used to derive these.\n\n\nRESULTS\nMethods used to evaluate internet addiction questionnaires varied considerably. Three dimensions of addiction predominated: compulsive use (79%), negative outcomes (86%) and salience (71%). Less common were escapism (21%), withdrawal symptoms (36%) and other dimensions. Measures of validity and reliability were found to be within normally acceptable limits.\n\n\nCONCLUSIONS\nThere is a broad convergence of questionnaires purporting to assess internet addiction suggesting that compulsive use, negative outcome and salience should be covered and the questionnaires show adequate psychometric properties. However, the methods used to evaluate the questionnaires vary widely and possible factors contributing to excessive use such as social motivation do not appear to be covered.", "title": "" }, { "docid": "ad8a727d0e3bd11cd972373451b90fe7", "text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. 
We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.", "title": "" }, { "docid": "b160d69d87ad113286ee432239b090d7", "text": "Isogeometric analysis has been proposed as a methodology for bridging the gap between computer aided design (CAD) and finite element analysis (FEA). Although both the traditional and isogeometric pipelines rely upon the same conceptualization to solid model steps, they drastically differ in how they bring the solid model both to and through the analysis process. The isogeometric analysis process circumvents many of the meshing pitfalls experienced by the traditional pipeline by working directly within the approximation spaces used by the model representation. In this paper, we demonstrate that in a similar way as how mesh quality is used in traditional FEA to help characterize the impact of the mesh on analysis, an analogous concept of model quality exists within isogeometric analysis. The consequence of these observations is the need for a new area within modeling – analysis-aware modeling – in which model properties and parameters are selected to facilitate isogeometric analysis. ! 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dfbf5c12d8e5a8e5e81de5d51f382185", "text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.", "title": "" }, { "docid": "750c67fe63611248e8d8798a42ac282c", "text": "Chaos and its drive-response synchronization for a fractional-order cellular neural networks (CNN) are studied. It is found that chaos exists in the fractional-order system with six-cell. The phase synchronisation of drive and response chaotic trajectories is investigated after that. 
These works based on Lyapunov exponents (LE), Lyapunov stability theory and numerical solving fractional-order system in Matlab environment.", "title": "" }, { "docid": "cfaf2c04cd06103489ac60d00a70cd2c", "text": "BACKGROUND\nΔ(9)-Tetrahydrocannabinol (THC), 11-nor-9-carboxy-THC (THCCOOH), and cannabinol (CBN) were measured in breath following controlled cannabis smoking to characterize the time course and windows of detection of breath cannabinoids.\n\n\nMETHODS\nExhaled breath was collected from chronic (≥4 times per week) and occasional (<twice per week) smokers before and after smoking a 6.8% THC cigarette. Sample analysis included methanol extraction from breath pads, solid-phase extraction, and liquid chromatography-tandem mass spectrometry quantification.\n\n\nRESULTS\nTHC was the major cannabinoid in breath; no sample contained THCCOOH and only 1 contained CBN. Among chronic smokers (n = 13), all breath samples were positive for THC at 0.89 h, 76.9% at 1.38 h, and 53.8% at 2.38 h, and only 1 sample was positive at 4.2 h after smoking. Among occasional smokers (n = 11), 90.9% of breath samples were THC-positive at 0.95 h and 63.6% at 1.49 h. One occasional smoker had no detectable THC. Analyte recovery from breath pads by methanolic extraction was 84.2%-97.4%. Limits of quantification were 50 pg/pad for THC and CBN and 100 pg/pad for THCCOOH. Solid-phase extraction efficiency was 46.6%-52.1% (THC) and 76.3%-83.8% (THCCOOH, CBN). Matrix effects were -34.6% to 12.3%. Cannabinoids fortified onto breath pads were stable (≤18.2% concentration change) for 8 h at room temperature and -20°C storage for 6 months.\n\n\nCONCLUSIONS\nBreath may offer an alternative matrix for identifying recent driving under the influence of cannabis, but currently sensitivity is limited to a short detection window (0.5-2 h).", "title": "" }, { "docid": "599c2f4205f3a0978d0567658daf8be6", "text": "With increasing audio/video service consumption through unmanaged IP networks, HTTP adaptive streaming techniques have emerged to handle bandwidth limitations and variations. But while it is becoming common to serve multiple clients in one home network, these solutions do not adequately address fine tuned quality arbitration between the multiple streams. While clients compete for bandwidth, the video suffers unstable conditions and/or inappropriate bit-rate levels.\n We hereby experiment a mechanism based on traffic chapping that allow bandwidth arbitration to be implemented in the home gateway, first determining desirable target bit-rates to be reached by each stream and then constraining the clients to stay within their limits. This enables the delivery of optimal quality of experience to the maximum number of users. This approach is validated through experimentation, and results are shown through a set of objective measurement criteria.", "title": "" }, { "docid": "7f73952f3dfb445fd700d951a013595e", "text": "Although parallel and convergent evolution are discussed extensively in technical articles and textbooks, their meaning can be overlapping, imprecise, and contradictory. The meaning of parallel evolution in much of the evolutionary literature grapples with two separate hypotheses in relation to phenotype and genotype, but often these two hypotheses have been inferred from only one hypothesis, and a number of subsidiary but problematic criteria, in relation to the phenotype. 
However, examples of parallel evolution of genetic traits that underpin or are at least associated with convergent phenotypes are now emerging. Four criteria for distinguishing parallelism from convergence are reviewed. All are found to be incompatible with any single proposition of homoplasy. Therefore, all homoplasy is equivalent to a broad view of convergence. Based on this concept, all phenotypic homoplasy can be described as convergence and all genotypic homoplasy as parallelism, which can be viewed as the equivalent concept of convergence for molecular data. Parallel changes of molecular traits may or may not be associated with convergent phenotypes but if so describe homoplasy at two biological levels-genotype and phenotype. Parallelism is not an alternative to convergence, but rather it entails homoplastic genetics that can be associated with and potentially explain, at the molecular level, how convergent phenotypes evolve.", "title": "" }, { "docid": "d59d1ac7b3833ee1e60f7179a4a9af99", "text": "Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area.
We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.", "title": "" }, { "docid": "b3d1780cb8187e5993c5adbb7959b7a6", "text": "We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, thus leaves the user unencumbered and able to walk around freely in a virtual environment. The device is of generic shape, allowing it to also be worn on legs, so as to enhance the experience of kicking, or merged into props, such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. Participants of our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.", "title": "" }, { "docid": "c7b7ca49ea887c25b05485e346b5b537", "text": "I n our last article 1 we described the external features which characterize the cranial and facial structures of the cranial strains known as hyperflexion and hyperextension. To understand how these strains develop we have to examine the anatomical relations underlying all cranial patterns. Each strain represent a variation on a theme. By studying the features in common, it is possible to account for the facial and dental consequences of these variations. The key is the spheno-basilar symphysis and the displacements which can take place between the occiput and the sphenoid at that suture. In hyperflexion there is shortening of the cranium in an antero-posterior direction with a subsequent upward buckling of the spheno-basilar symphysis (Figure 1). In children, where the cartilage of the joint has not ossified, a v-shaped wedge can be seen occasionally on the lateral skull radiograph (Figure 2). Figure (3a) is of the cranial base seen from a vertex viewpoint. By leaving out the temporal bones the connection between the centrally placed spheno-basilar symphysis and the peripheral structures of the cranium can be seen more easily. Sutherland realized that the cranium could be divided into quadrants (Figure 3b) centered on the spheno-basilar symphysis and that what happens in each quadrant is directly influenced by the spheno-basilar symphysis. He noted that accompanying the vertical changes at the symphysis there are various lateral displacements. As the peripheral structures move laterally, this is known as external rotation. If they move closer to the midline, this is called internal rotation. It is not unusual to have one side of the face externally rotated and the other side internally rotated (Figure 4a). This can have a significant effect in the mouth, giving rise to asymmetries (Figure 4b). This shows a palatal view of the maxilla with the left posterior dentition externally rotated and the right buccal posterior segment internally rotated, reflecting the internal rotation of the whole right side of the face. This can be seen in hyperflexion but also other strains. With this background, it is now appropriate to examine in detail the cranial strain known as hyperflexion. 
As its name implies, it is brought about by an exaggeration of the flexion/extension movement of the cranium into flexion. Rhythmic movement of the cranium continues despite the displacement into flexion, but it does so more readily into flexion than extension. As the skull is shortened in an antero-posterior plane, it is widened laterally. Figures 3a and 3b. 3a: cranial base from a vertex view (temporal bones left out). 3b: Sutherland’s quadrants imposed on cranial base. Figure 2. Lateral Skull Radiograph of Hyperflexion patient. Note V-shaped wedge at superior border of the spheno-basilar symphysis. Figure 1. Movement of Occiput and Sphenoid in Hyperflexion. Reprinted from Orthopedic Gnathology, Hockel, J., Ed. 1983. With permission from Quintessence Publishing Co.", "title": "" } ]
scidocsrr
e43f03d688e52d00c7d017e0e029e7a4
Design of LTCC Wideband Patch Antenna for LMDS Band Applications
[ { "docid": "bf77cd91ec7a5133998e60dfd4ec520f", "text": "A simple procedure for the design of compact stacked-patch antennas is presented based on LTCC multilayer packaging technology. The advantage of this topology is that only one parameter, i.e., the substrate thickness (or equivalently the number of LTCC layers), needs to be adjusted in order to achieve an optimized bandwidth performance. The validity of the new design strategy is verified through applying it to practical compact antenna design for several wireless communication bands, including ISM 2.4-GHz band, IEEE 802.11a 5.8-GHz, and LMDS 28-GHz band. It is shown that a 10-dB return-loss bandwidth of 7% can be achieved for the LTCC (/spl epsiv//sub r/=5.6) multilayer structure with a thickness of less than 0.03 wavelengths, which can be realized using a different number of laminated layers for different frequencies (e.g., three layers for the 28-GHz band).", "title": "" } ]
[ { "docid": "25e6f4b6c86fac766c09aae302ec9516", "text": "ABSTRACT. The purpose of this study is to construct doctors’ acceptance model of Electronic Medical Records (EMR) in private hospitals. The model extends the Technology Acceptance Model (TAM) with two factors of Individual Capabilities; Self-Efficacy (SE) and Perceived Behavioral Control (PBC). The initial findings proposes additional factors over the original factors in TAM making Perceived Usefulness (PU), Perceived Ease Of Use (PEOU), Behavioral Intention to use (BI), SE, and PBC working in incorporation. A cross-sectional survey was used in which data were gathered by a personal administered questionnaire as the instrument for data collection. Doctors of public hospitals were involved in this study which proves that all factors are reliable.", "title": "" }, { "docid": "dfdd857de86c75e769492b56a092b242", "text": "Understanding the anatomy of the ankle ligaments is important for correct diagnosis and treatment. Ankle ligament injury is the most frequent cause of acute ankle pain. Chronic ankle pain often finds its cause in laxity of one of the ankle ligaments. In this pictorial essay, the ligaments around the ankle are grouped, depending on their anatomic orientation, and each of the ankle ligaments is discussed in detail.", "title": "" }, { "docid": "48513729ea0b9ad7cf74626ca5eed686", "text": "We consider a generalization of the lcm-sum function, and we give two kinds of asymptotic formulas for the sum of that function. Our results include a generalization ofBordelì es's results and a refinement of the error estimate of Alladi's result. We prove these results by the method similar to those ofBordelì es.", "title": "" }, { "docid": "4408d485de63034cb2225ee7aa9e3afe", "text": "We present the characterization of dry spiked biopotential electrodes and test their suitability to be used in anesthesia monitoring systems based on the measurement of electroencephalographic signals. The spiked electrode consists of an array of microneedles penetrating the outer skin layers. We found a significant dependency of the electrode-skin-electrode impedance (ESEI) on the electrode size (i.e., the number of spikes) and the coating material of the spikes. Electrodes larger than 3/spl times/3 mm/sup 2/ coated with Ag-AgCl have sufficiently low ESEI to be well suited for electroencephalograph (EEG) recordings. The maximum measured ESEI was 4.24 k/spl Omega/ and 87 k/spl Omega/, at 1 kHz and 0.6 Hz, respectively. The minimum ESEI was 0.65 k/spl Omega/ an 16 k/spl Omega/, at the same frequencies. The ESEI of spiked electrodes is stable over an extended period of time. The arithmetic mean of the generated DC offset voltage is 11.8 mV immediately after application on the skin and 9.8 mV after 20-30 min. A spectral study of the generated potential difference revealed that the AC part was unstable at frequencies below approximately 0.8 Hz. Thus, the signal does not interfere with a number of clinical applications using real-time EEG. Comparing raw EEG recordings of the spiked electrode with commercial Zipprep electrodes showed that both signals were similar. Due to the mechanical strength of the silicon microneedles and the fact that neither skin preparation nor electrolytic gel is required, use of the spiked electrode is convenient. 
The spiked electrode is very comfortable for the patient.", "title": "" }, { "docid": "0cd1f01d1b2a5afd8c6eba13ef5082fa", "text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.", "title": "" }, { "docid": "f97ed9ef35355feffb1ebf4242d7f443", "text": "Moore’s law has allowed the microprocessor market to innovate at an astonishing rate. We believe microchip implants are the next frontier for the integrated circuit industry. Current health monitoring technologies are large, expensive, and consume significant power. By miniaturizing and reducing power, monitoring equipment can be implanted into the body and allow 24/7 health monitoring. We plan to implement a new transmitter topology, compressed sensing, which can be used for wireless communications with microchip implants. This paper focuses on the ADC used in the compressed sensing signal chain. Using the Cadence suite of tools and a 32/28nm process, we produced simulations of our compressed sensing Analog to Digital Converter to feed into a Digital Compression circuit. Our results indicate that a 12-bit, 20Ksample, 9.8nW Successive Approximation ADC is possible for diagnostic resolution (10 bits). By incorporating a hybrid-C2C DAC with differential floating voltage shields, it is possible to obtain 9.7 ENOB. Thus, we recommend this ADC for use in compressed sensing for biomedical purposes. Not only will it be useful in digital compressed sensing, but this can also be repurposed for use in analog compressed sensing.", "title": "" }, { "docid": "0c863db545e890a2f0d58f188692999b", "text": "Digital investigation in the cloud is challenging, but there's also opportunities for innovations in digital forensic solutions (such as remote forensic collection of evidential data from cloud servers client devices and the underlying supporting infrastructure such as distributed file systems). This column describes the challenges and opportunities in cloud forensics.", "title": "" }, { "docid": "ca550339bd91ba8e431f1e82fbaf5a99", "text": "In several previous papers and particularly in [3] we presented the use of logic equations and their solution using ternary vectors and set-theoretic considerations as well as binary codings and bit-parallel vector operations. In this paper we introduce a new and elegant model for the game of Sudoku that uses the same approach and solves this problem without any search always finding all solutions (including no solutions or several solutions). It can also be extended to larger Sudokus and to a whole class of similar discrete problems, such as Queens’ problems on the chessboard, graph-coloring problems etc. 
Disadvantages of known SAT approaches for such problems were overcome by our new method.", "title": "" }, { "docid": "68fe4f62d48270395ca3f257bbf8a18a", "text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrase-based method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot ↔ scalding suggests that hot < scalding. We show that combining this paraphrase evidence with existing, complementary pattern- and lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.", "title": "" }, { "docid": "48aea9478d2a9f1edb108202bd65e8dd", "text": "The popularity of mobile devices and location-based services (LBSs) has raised significant concerns regarding the location privacy of their users. A popular approach to protect location privacy is anonymizing the users of LBS systems. In this paper, we introduce an information-theoretic notion for location privacy, which we call perfect location privacy. We then demonstrate how anonymization should be used by LBS systems to achieve the defined perfect location privacy. We study perfect location privacy under two models for user movements. First, we assume that a user’s current location is independent from her past locations. Using this independent identically distributed (i.i.d.) model, we show that if the pseudonym of the user is changed before $O\\left({n^{\\frac {2}{r-1}}}\\right)$ observations are made by the adversary for that user, then the user has perfect location privacy. Here, $n$ is the number of the users in the network and $r$ is the number of all possible locations. Next, we model users’ movements using Markov chains to better model real-world movement patterns. We show that perfect location privacy is achievable for a user if the user’s pseudonym is changed before $O\\left({n^{\\frac {2}{|E|-r}}}\\right)$ observations are collected by the adversary for that user, where $|E|$ is the number of edges in the user’s Markov chain model.", "title": "" }, { "docid": "f32477f15fb7f550c74bc052c487a14b", "text": "This paper demonstrates the sketch drawing capability of the NAO humanoid robot. Two redundant degrees of freedom, elbow yaw (RElbowYaw) and wrist yaw (RWristYaw) of the right hand, have been sacrificed because of their lesser contribution to drawing. The Denavit-Hartenberg (DH) parameters of the system have been defined in order to measure the working envelope of the right hand as well as to achieve the inverse kinematic solution.
A linear transformation has been used to transform the image points with respect to real world coordinate system and novel 4 point calibration technique has been proposed to calibrate the real world coordinate system with respect to NAO end effector.", "title": "" }, { "docid": "848dd074e4615ea5ecb164c96fac6c63", "text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.", "title": "" }, { "docid": "0cef7d9df5606df8becd2226233e3c99", "text": "Telecare medical information systems (TMISs) are increasingly popular technologies for healthcare applications. Using TMISs, physicians and caregivers can monitor the vital signs of patients remotely. Since the database of TMISs stores patients’ electronic medical records (EMRs), only authorized users should be granted the access to this information for the privacy concern. To keep the user anonymity, recently, Chen et al. proposed a dynamic ID-based authentication scheme for telecare medical information system. They claimed that their scheme is more secure and robust for use in a TMIS. However, we will demonstrate that their scheme fails to satisfy the user anonymity due to the dictionary attacks. It is also possible to derive a user password in case of smart card loss attacks. Additionally, an improved scheme eliminating these weaknesses is also presented.", "title": "" }, { "docid": "794e78423eaa3484ba28127d76e4bd74", "text": "Classification of environmental sounds is a fundamental procedure for a wide range of real-world applications. In this paper, we propose a novel acoustic feature extraction method for classifying the environmental sounds. The proposed method is motivated from the image processing technique, local binary pattern (LBP), and works on a spectrogram which forms two-dimensional (time-frequency) data like an image. Since the spectrogram contains noisy pixel values, for improving classification performance, it is crucial to extract the features which are robust to the fluctuations in pixel values. We effectively incorporate the local statistics, mean and standard deviation on local pixels, to establish robust LBP. In addition, we provide the technique of L2-Hellinger normalization which is efficiently applied to the proposed features so as to further enhance the discriminative power while increasing the robustness. 
In the experiments on environmental sound classification using RWCP dataset that contains 105 sound categories, the proposed method produces the superior performance (98.62%) compared to the other methods, exhibiting significant improvements over the standard LBP method as well as robustness to noise and low computation time.", "title": "" }, { "docid": "8bdd02547be77f4c825c9aed8016ddf8", "text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.", "title": "" }, { "docid": "1bc95cb394896d57c601358574ea4f89", "text": "The transition from an informative to a service oriented interactive governmental portals has become a necessity due to the time and cost saving benefits for both governments and users. User experience is a key factor in maintaining these benefits. In this study we propose an E-government Portal Assessment Method (EGPAM), which is a direct method for measuring user experience in e-government portals. We present a case study assessing the portal of the Ministry of Public Works (MOW) in Kuwait. Results showed that having a direct measurement to user experience enabled easier identification of the current level of user satisfaction and provided a guidance on ways to improve user experience and addressing identified issues.", "title": "" }, { "docid": "eae0f8a921b301e52c822121de6c6b58", "text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. 
In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system are publicly available1.", "title": "" }, { "docid": "f9d2305bc8dd4921970529f4c816b98b", "text": "Chaos scales graph processing from secondary storage to multiple machines in a cluster. Earlier systems that process graphs from secondary storage are restricted to a single machine, and therefore limited by the bandwidth and capacity of the storage system on a single machine. Chaos is limited only by the aggregate bandwidth and capacity of all storage devices in the entire cluster.\n Chaos builds on the streaming partitions introduced by X-Stream in order to achieve sequential access to storage, but parallelizes the execution of streaming partitions. Chaos is novel in three ways. First, Chaos partitions for sequential storage access, rather than for locality and load balance, resulting in much lower pre-processing times. Second, Chaos distributes graph data uniformly randomly across the cluster and does not attempt to achieve locality, based on the observation that in a small cluster network bandwidth far outstrips storage bandwidth. Third, Chaos uses work stealing to allow multiple machines to work on a single partition, thereby achieving load balance at runtime.\n In terms of performance scaling, on 32 machines Chaos takes on average only 1.61 times longer to process a graph 32 times larger than on a single machine. In terms of capacity scaling, Chaos is capable of handling a graph with 1 trillion edges representing 16 TB of input data, a new milestone for graph processing capacity on a small commodity cluster.", "title": "" }, { "docid": "cfc2c98e3422d32ca4c30fea1f18b74a", "text": "While it is known that academic searchers differ from typical web searchers, little is known about the search behavior of academic searchers over longer periods of time. In this study we take a look at academic searchers through a large-scale log analysis on a major academic search engine. We focus on two aspects: query reformulation patterns and topic shifts in queries. We first analyze how each of these aspects evolve over time. We identify important query reformulation patterns: revisiting and issuing new queries tend to happen more often over time. We also find that there are two distinct types of users: one type of users becomes increasingly focused on the topics they search for as time goes by, and the other becomes increasingly diversifying. After analyzing these two aspects separately, we investigate whether, and to which degree, there is a correlation between topic shifts and query reformulations. 
Surprisingly, users’ preferences of query reformulations correlate little with their topic shift tendency. However, certain reformulations may help predict the magnitude of the topic shift that happens in the immediate next timespan. Our results shed light on academic searchers’ information seeking behavior and may benefit search personalization.", "title": "" } ]
scidocsrr
a613c67f9f24fa382437b912d38cd586
Automated Diagnosis of Glaucoma Using Texture and Higher Order Spectra Features
[ { "docid": "e494f926c9b2866d2c74032d200e4d0a", "text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.", "title": "" }, { "docid": "0a3a349e6b66d822cd826f633ba9f066", "text": "Diabetic retinopathy (DR) is a condition where the retina is damaged due to fluid leaking from the blood vessels into the retina. In extreme cases, the patient will become blind. Therefore, early detection of diabetic retinopathy is crucial to prevent blindness. Various image processing techniques have been used to identify the different stages of diabetes retinopathy. The application of non-linear features of the higher-order spectra (HOS) was found to be efficient as it is more suitable for the detection of shapes. The aim of this work is to automatically identify the normal, mild DR, moderate DR, severe DR and prolific DR. The parameters are extracted from the raw images using the HOS techniques and fed to the support vector machine (SVM) classifier. This paper presents classification of five kinds of eye classes using SVM classifier. Our protocol uses, 300 subjects consisting of five different kinds of eye disease conditions. We demonstrate a sensitivity of 82% for the classifier with the specificity of 88%.", "title": "" } ]
[ { "docid": "a13ca3d83e6ec1693bd9ad53323d2f63", "text": "BACKGROUND\nThis study examined longitudinal patterns of heroin use, other substance use, health, mental health, employment, criminal involvement, and mortality among heroin addicts.\n\n\nMETHODS\nThe sample was composed of 581 male heroin addicts admitted to the California Civil Addict Program (CAP) during the years 1962 through 1964; CAP was a compulsory drug treatment program for heroin-dependent criminal offenders. This 33-year follow-up study updates information previously obtained from admission records and 2 face-to-face interviews conducted in 1974-1975 and 1985-1986; in 1996-1997, at the latest follow-up, 284 were dead and 242 were interviewed.\n\n\nRESULTS\nIn 1996-1997, the mean age of the 242 interviewed subjects was 57.4 years. Age, disability, years since first heroin use, and heavy alcohol use were significant correlates of mortality. Of the 242 interviewed subjects, 20.7% tested positive for heroin (with additional 9.5% urine refusal and 14.0% incarceration, for whom urinalyses were unavailable), 66.9% reported tobacco use, 22.1% were daily alcohol drinkers, and many reported illicit drug use (eg, past-year heroin use was 40.5%; marijuana, 35.5%; cocaine, 19.4%; crack, 10.3%; amphetamine, 11.6%). The group also reported high rates of health problems, mental health problems, and criminal justice system involvement. Long-term heroin abstinence was associated with less criminality, morbidity, psychological distress, and higher employment.\n\n\nCONCLUSIONS\nWhile the number of deaths increased steadily over time, heroin use patterns were remarkably stable for the group as a whole. For some, heroin addiction has been a lifelong condition associated with severe health and social consequences.", "title": "" }, { "docid": "2f5ccd63b8f23300c090cb00b6bbe045", "text": "Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. 
A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a \"variable,\" the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.", "title": "" }, { "docid": "d59e21319b9915c2f6d7a8931af5503c", "text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.", "title": "" }, { "docid": "c55057c6231d472477bf93339e6b5292", "text": "BACKGROUND\nAcute hospital discharge delays are a pressing concern for many health care administrators. In Canada, a delayed discharge is defined by the alternate level of care (ALC) construct and has been the target of many provincial health care strategies. Little is known on the patient characteristics that influence acute ALC length of stay. This study examines which characteristics drive acute ALC length of stay for those awaiting nursing home admission.\n\n\nMETHODS\nPopulation-level administrative and assessment data were used to examine 17,111 acute hospital admissions designated as alternate level of care (ALC) from a large Canadian health region. Case level hospital records were linked to home care administrative and assessment records to identify and characterize those ALC patients that account for the greatest proportion of acute hospital ALC days.\n\n\nRESULTS\nALC patients waiting for nursing home admission accounted for 41.5% of acute hospital ALC bed days while only accounting for 8.8% of acute hospital ALC patients. Characteristics that were significantly associated with greater ALC lengths of stay were morbid obesity (27 day mean deviation, 99% CI = ±14.6), psychiatric diagnosis (13 day mean deviation, 99% CI = ±6.2), abusive behaviours (12 day mean deviation, 99% CI = ±10.7), and stroke (7 day mean deviation, 99% CI = ±5.0). Overall, persons with morbid obesity, a psychiatric diagnosis, abusive behaviours, or stroke accounted for 4.3% of all ALC patients and 23% of all acute hospital ALC days between April 1st 2009 and April 1st, 2011. ALC patients with the identified characteristics had unique clinical profiles.\n\n\nCONCLUSIONS\nA small number of patients with non-medical days waiting for nursing home admission contribute to a substantial proportion of total non-medical days in acute hospitals. 
Increases in nursing home capacity or changes to existing funding arrangements should target the sub-populations identified in this investigation to maximize effectiveness. Specifically, incentives should be introduced to encourage nursing homes to accept acute patients with the least prospect for community-based living, while acute patients with the greatest prospect for community-based living are discharged to transitional care or directly to community-based care.", "title": "" }, { "docid": "40e74f062a6d4c969d87e57e7566bc9e", "text": "Bullying is a serious public health concern that is associated with significant negative mental, social, and physical outcomes. Technological advances have increased adolescents' use of social media, and online communication platforms have exposed adolescents to another mode of bullying- cyberbullying. Prevention and intervention materials, from websites and tip sheets to classroom curriculum, have been developed to help youth, parents, and teachers address cyberbullying. While youth and parents are willing to disclose their experiences with bullying to their health care providers, these disclosures need to be taken seriously and handled in a caring manner. Health care providers need to include questions about bullying on intake forms to encourage these disclosures. The aim of this article is to examine the current status of cyberbullying prevention and intervention. Research support for several school-based intervention programs is summarised. Recommendations for future research are provided.", "title": "" }, { "docid": "5f66a3faa36f273831b13b4345c2bf15", "text": "The structure of blood vessels in the sclerathe white part of the human eye, is unique for every individual, hence it is best suited for human identification. However, this is a challenging research because it has a high insult rate (the number of occasions the valid user is rejected). In this survey firstly a brief introduction is presented about the sclera based biometric authentication. In addition, a literature survey is presented. We have proposed simplified method for sclera segmentation, a new method for sclera pattern enhancement based on histogram equalization and line descriptor based feature extraction and pattern matching with the help of matching score between the two segment descriptors. We attempt to increase the awareness about this topic, as much of the research is not done in this area.", "title": "" }, { "docid": "e685a22b6f7b20fb1289923e86e467c5", "text": "Nowadays, with the growth in the use of search engines, the extension of spying programs and anti -terrorism prevention, several researches focused on text analysis. In this sense, lemmatization and stemming are two common requirements of these researches. They include reducing different grammatical forms of a word and bring them to a common base form. In what follows, we will discuss these treatment methods on arabic text, especially the Khoja Stemmer, show their limits and provide new tools to improve it.", "title": "" }, { "docid": "31fb6df8d386f28b63140ee2ad8d11ea", "text": "The problem and the solution.The majority of the literature on creativity has focused on the individual, yet the social environment can influence both the level and frequency of creative behavior. This article reviews the literature for factors related to organizational culture and climate that act as supports and impediments to organizational creativity and innovation. 
The work of Amabile, Kanter, Van de Ven, Angle, and others is reviewed and synthesized to provide an integrative understanding of the existing literature. Implications for human resource development research and practice are discussed.", "title": "" }, { "docid": "d2ca6d41e582c798bc7c53e932fd8dec", "text": "How to measure usability is an important question in HCI research and user interface evaluation. We review current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguish several problems with the measures, including whether they actually measure usability, if they cover usability broadly, how they are reasoned about, and if they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, we discuss challenges for studies of usability and for research into how to measure usability. The challenges are to distinguish and empirically compare subjective and objective measures of usability; to focus on developing and employing measures of learning and retention; to study long-term use and usability; to extend measures of satisfaction beyond post-use questionnaires; to validate and standardize the host of subjective satisfaction questionnaires used; to study correlations between usability measures as a means for validation; and to use both micro and macro tasks and corresponding measures of usability. In conclusion, we argue that increased attention to the problems identified and challenges discussed may strengthen studies of usability and usability research. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5bece01bed7c5a9a2433d95379882a37", "text": "n The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the e ± m ± parameter. A phenomenological analysis of the permeability m and permittivity e in dispersive media serves to introduce the corresponding magneticand electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faradayrotation isolator circulator at 35 GHz (l ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (l = 1.55 mm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, e rather than m provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. 
In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.", "title": "" }, { "docid": "4d79d71c019c0f573885ffa2bc67f48b", "text": "In this article, we provide a basic introduction to CMOS image-sensor technology, design and performance limits and present recent developments and future directions in this area. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main non-idealities that limit CMOS image sensor performance, and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.", "title": "" }, { "docid": "a492dcdbb9ec095cdfdab797c4b4e659", "text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.", "title": "" }, { "docid": "5bc183ebfcc9280dae0c15454085d95d", "text": "In this paper a criminal detection framework that could help policemen to recognize the face of a criminal or a suspect is proposed. The framework is a client-server video based face recognition surveillance in the real-time. The framework applies face detection and tracking using Android mobile devices at the client side and video based face recognition at the server side. This paper focuses on the development of the client side of the proposed framework, face detection and tracking using Android mobile devices. For the face detection stage, robust Viola-Jones algorithm that is not affected by illuminations is used. The face tracking stage is based on Optical Flow algorithm. Optical Flow is implemented in the proposed framework with two feature extraction methods, Fast Corner Features, and Regular Features. The proposed face detection and tracking is implemented using Android studio and OpenCV library, and tested using Sony Xperia Z2 Android 5.1 Lollipop Smartphone. Experiments show that face tracking using Optical Flow with Regular Features achieves a higher level of accuracy and efficiency than Optical Flow with Fast Corner Features.", "title": "" }, { "docid": "711c950873c784a0c80217c83f81070c", "text": "Accelerators are special purpose processors designed to speed up compute-intensive sections of applications. Two extreme endpoints in the spectrum of possible accelerators are FPGAs and GPUs, which can often achieve better performance than CPUs on certain workloads. 
FPGAs are highly customizable, while GPUs provide massive parallel execution resources and high memory bandwidth. Applications typically exhibit vastly different performance characteristics depending on the accelerator. This is an inherent problem attributable to architectural design, middleware support and programming style of the target platform. For the best application-to-accelerator mapping, factors such as programmability, performance, programming cost and sources of overhead in the design flows must be all taken into consideration. In general, FPGAs provide the best expectation of performance, flexibility and low overhead, while GPUs tend to be easier to program and require less hardware resources. We present a performance study of three diverse applications - Gaussian elimination, data encryption standard (DES), and Needleman-Wunsch - on an FPGA, a GPU and a multicore CPU system. We perform a comparative study of application behavior on accelerators considering performance and code complexity. Based on our results, we present an application characteristic to accelerator platform mapping, which can aid developers in selecting an appropriate target architecture for their chosen application.", "title": "" }, { "docid": "18498166845b27890110c3ca0cd43d86", "text": "Raine Mäntysalo The purpose of this article is to make an overview of postWWII urban planning theories from the point of view of participation. How have the ideas of public accountability, deliberative democracy and involvement of special interests developed from one theory to another? The urban planning theories examined are rational-comprehensive planning theory, advocacy planning theory, incrementalist planning theory and the two branches of communicative planning theory: planning as consensus-seeking and planning as management of conflicts.", "title": "" }, { "docid": "ce1384d061248cbb96e77ea482b2ba62", "text": "Preventable behaviors contribute to many life threatening health problems. Behavior-change technologies have been deployed to modify these, but such systems typically draw on traditional behavioral theories that overlook affect. We examine the importance of emotion tracking for behavior change. First, we conducted interviews to explore how emotions influence unwanted behaviors. Next, we deployed a system intervention, in which 35 participants logged information for a self-selected, unwanted behavior (e.g., smoking or overeating) over 21 days. 16 participants engaged in standard behavior tracking using a Fact-Focused system to record objective information about goals. 19 participants used an Emotion-Focused system to record emotional consequences of behaviors. Emotion-Focused logging promoted more successful behavior change and analysis of logfiles revealed mechanisms for success: greater engagement of negative affect for unsuccessful days and increased insight were key to motivating change. We present design implications to improve behavior-change technologies with emotion tracking.", "title": "" }, { "docid": "79934e1cb9a6c07fb965da9674daeb69", "text": "BACKGROUND\nAtrophic scars can complicate moderate and severe acne. There are, at present, several modalities of treatment with different results. 
Percutaneous collagen induction (PCI) has recently been proposed as a simple and effective therapeutic option for the management of atrophic scars.\n\n\nOBJECTIVE\nThe aim of our study was to analyze the efficacy and safety of percutaneous collagen induction for the treatment of acne scarring in different skin phototypes.\n\n\nMETHODS & MATERIALS\nA total of 60 patients of skin types phototype I to VI were included in the study. They were divided into three groups before beginning treatment: Group A (phototypes I to II), Group B (phototypes III to V), and Group C (phototypes VI). Each patient had three treatments at monthly intervals. The aesthetic improvement was evaluated by using a Global Aesthetic Improvement Scale (GAIS), and analyzed statistically by computerized image analysis of the patients' photographs. The differences in the GAIS scores in the different time-points of each group were found using the Wilcoxon's test for nonparametric-dependent continuous variables. Computerized image analysis of silicone replicas was used to quantify the irregularity of the surface micro-relief with Fast Fourier Transformation (FFT); average values of gray were obtained along the x- and y-axes. The calculated indexes were the integrals of areas arising from the distribution of pixels along the axes.\n\n\nRESULTS\nAll patients completed the study. The Wilcoxon's test for nonparametric-dependent continuous variables showed a statistically significant (p < 0.05) reduction in severity grade of acne scars at T5 compared to baseline (T1). The analysis of the surface micro-relief performed on skin replicas showed a decrease in the degree of irregularity of skin texture in all three groups of patients, with an average reduction of 31% in both axes after three sessions. No short- or long-term dyschromia was observed.\n\n\nCONCLUSION\nPCI offers a simple and safe modality to improve the appearance of acne scars without risk of dyspigmentation in patient of all skin types.", "title": "" }, { "docid": "1dc32737d1c6aea101258e5687fc8545", "text": "Individuals with Binge Eating Disorder (BED) often evidence comorbid Substance Use Disorders (SUD), resulting in poor outcome. This study is the first to examine treatment outcome for this concurrent disordered population. In this pilot study, 38 individuals diagnosed with BED and SUD participated in a 16-week group Mindfulness-Action Based Cognitive Behavioral Therapy (MACBT). Participants significantly improved on measures of objective binge eating episodes; disordered eating attitudes; alcohol and drug addiction severity; and depression. Taken together, MACBT appears to hold promise in treating individuals with co-existing BED-SUD.", "title": "" }, { "docid": "0858f3c76ea9570eeae23c33307f2eaf", "text": "Geometrical validation around the Calpha is described, with a new Cbeta measure and updated Ramachandran plot. Deviation of the observed Cbeta atom from ideal position provides a single measure encapsulating the major structure-validation information contained in bond angle distortions. Cbeta deviation is sensitive to incompatibilities between sidechain and backbone caused by misfit conformations or inappropriate refinement restraints. A new phi,psi plot using density-dependent smoothing for 81,234 non-Gly, non-Pro, and non-prePro residues with B < 30 from 500 high-resolution proteins shows sharp boundaries at critical edges and clear delineation between large empty areas and regions that are allowed but disfavored. 
One such region is the gamma-turn conformation near +75 degrees,-60 degrees, counted as forbidden by common structure-validation programs; however, it occurs in well-ordered parts of good structures, it is overrepresented near functional sites, and strain is partly compensated by the gamma-turn H-bond. Favored and allowed phi,psi regions are also defined for Pro, pre-Pro, and Gly (important because Gly phi,psi angles are more permissive but less accurately determined). Details of these accurate empirical distributions are poorly predicted by previous theoretical calculations, including a region left of alpha-helix, which rates as favorable in energy yet rarely occurs. A proposed factor explaining this discrepancy is that crowding of the two-peptide NHs permits donating only a single H-bond. New calculations by Hu et al. [Proteins 2002 (this issue)] for Ala and Gly dipeptides, using mixed quantum mechanics and molecular mechanics, fit our nonrepetitive data in excellent detail. To run our geometrical evaluations on a user-uploaded file, see MOLPROBITY (http://kinemage.biochem.duke.edu) or RAMPAGE (http://www-cryst.bioc.cam.ac.uk/rampage).", "title": "" }, { "docid": "57b35e32b92b54fc1ea7724e73b26f39", "text": "The authors examined relations between the Big Five personality traits and academic outcomes, specifically SAT scores and grade-point average (GPA). Openness was the strongest predictor of SAT verbal scores, and Conscientiousness was the strongest predictor of both high school and college GPA. These relations replicated across 4 independent samples and across 4 different personality inventories. Further analyses showed that Conscientiousness predicted college GPA, even after controlling for high school GPA and SAT scores, and that the relation between Conscientiousness and college GPA was mediated, both concurrently and longitudinally, by increased academic effort and higher levels of perceived academic ability. The relation between Openness and SAT verbal scores was independent of academic achievement and was mediated, both concurrently and longitudinally, by perceived verbal intelligence. Together, these findings show that personality traits have independent and incremental effects on academic outcomes, even after controlling for traditional predictors of those outcomes. ((c) 2007 APA, all rights reserved).", "title": "" } ]
scidocsrr
1aabdfb4da04b692b0fac41a6b6bd243
End-to-End People Detection in Crowded Scenes
[ { "docid": "1d8cd516cec4ef74d72fa283059bf269", "text": "Current high-quality object detection approaches use the same scheme: salience-based object proposal methods followed by post-classification using deep convolutional features. This spurred recent research in improving object proposal methods [18, 32, 15, 11, 2]. However, domain agnostic proposal generation has the principal drawback that the proposals come unranked or with very weak ranking, making it hard to trade-off quality for running time. Also, it raises the more fundamental question of whether high-quality proposal generation requires careful engineering or can be derived just from data alone. We demonstrate that learning-based proposal methods can effectively match the performance of hand-engineered methods while allowing for very efficient runtime-quality trade-offs. Using our new multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with 0.5 mAP for a single model and 0.52 mAP for an ensemble of two models. MSC-Multibox significantly improves the proposal quality over its predecessor Multibox [4] method: AP increases from 0.42 to 0.53 for the ILSVRC detection challenge. Finally, we demonstrate improved bounding-box recall compared to Multiscale Combinatorial Grouping [18] with less proposals on the Microsoft-COCO [14] data set.", "title": "" }, { "docid": "66e91cdcb987e6f9ee48360414c993d6", "text": "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "title": "" }, { "docid": "a77eddf9436652d68093946fbe1d2ed0", "text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. 
The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.", "title": "" } ]
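The VOC retrospective above leans heavily on ranked-detection metrics such as (normalised) average precision. As a generic, simplified illustration — not the official VOC development-kit code, and using invented toy scores and labels — the following sketch computes non-interpolated average precision from detections sorted by confidence.

```python
import numpy as np

def average_precision(scores, labels):
    """Mean of the precision values measured at each correctly ranked positive."""
    order = np.argsort(-np.asarray(scores, dtype=float))   # descending confidence
    labels = np.asarray(labels)[order]
    if labels.sum() == 0:
        return 0.0
    hits = np.cumsum(labels)                               # true positives so far
    ranks = np.arange(1, len(labels) + 1)
    return float(np.mean(hits[labels == 1] / ranks[labels == 1]))

# Toy example: six detections, three of which match a ground-truth object.
scores = [0.95, 0.80, 0.70, 0.60, 0.40, 0.10]
labels = [1,    0,    1,    0,    1,    0]
print("AP =", round(average_precision(scores, labels), 3))   # ~0.756
```

A class-normalised variant, like the one discussed in the record, additionally reweights so that classes with very different proportions of positive instances remain comparable.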
[ { "docid": "f0c415dfb22032064e8cdb0ec76403b7", "text": "In this paper, an impedance control scheme for aerial robotic manipulators is proposed, with the aim of reducing the end-effector interaction forces with the environment. The proposed control has a multi-level architecture: in detail, the outer loop is composed of a trajectory generator and an impedance filter that modifies the trajectory to achieve a compliant behaviour in the end-effector space; a middle loop is used to generate the joint space variables through an inverse kinematic algorithm; finally, the inner loop is aimed at ensuring the motion tracking. The proposed control architecture has been experimentally tested.", "title": "" }, { "docid": "49680e94843e070a5ed0179798f66f33", "text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically, neighboring nodes with the strongest connectivity are more often selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key feature of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41% as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97% to 99.47%. Finally, a comparative study was performed using the proposed framework with various existing routing protocols.", "title": "" }, { "docid": "866f7fa780b24fe420623573482df984", "text": "We present the prenatal ultrasound findings of massive macroglossia in a fetus with prenatally diagnosed Beckwith-Wiedemann syndrome. Three-dimensional surface mode ultrasound was utilized for enhanced visualization of the macroglossia.", "title": "" }, { "docid": "be20cb4f75ff0d4d1637095d5928b005", "text": "Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.", "title": "" }, { "docid": "413b21bece889166a385651ba5cd8512", "text": "Monaural speech separation is a fundamental problem in robust speech processing. 
Recently, deep neural network (DNN)-based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose a deep ensemble method, named multicontext networks, to address monaural speech separation. The first multicontext network averages the outputs of multiple DNNs whose inputs employ different window lengths. The second multicontext network is a stack of multiple DNNs. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ratio mask of the target speaker; the DNNs in the same module employ different contexts. We have conducted extensive experiments with three speech corpora. The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.", "title": "" }, { "docid": "7d308c302065253ee1adbffad04ff3f1", "text": "Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of certain file, which potentially puts the quality of the so-called `auditing-as-a-service' at risk; Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-sized blocks as basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. 
Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.", "title": "" }, { "docid": "c3fcdcbada22feb1851c96fd60104e02", "text": "Criteria for the diagnosis of vascular dementia (VaD) that are reliable, valid, and readily applicable in a variety of settings are urgently needed for both clinical and research purposes. To address this need, the Neuroepidemiology Branch of the National Institute of Neurological Disorders and Stroke (NINDS) convened an International Workshop with support from the Association Internationale pour la Recherche et l'Enseignement en Neurosciences (AIREN), resulting in research criteria for the diagnosis of VaD. Compared with other current criteria, these guidelines emphasize (1) the heterogeneity of vascular dementia syndromes and pathologic subtypes including ischemic and hemorrhagic strokes, cerebral hypoxic-ischemic events, and senile leukoencephalopathic lesions; (2) the variability in clinical course, which may be static, remitting, or progressive; (3) specific clinical findings early in the course (eg, gait disorder, incontinence, or mood and personality changes) that support a vascular rather than a degenerative cause; (4) the need to establish a temporal relationship between stroke and dementia onset for a secure diagnosis; (5) the importance of brain imaging to support clinical findings; (6) the value of neuropsychological testing to document impairments in multiple cognitive domains; and (7) a protocol for neuropathologic evaluations and correlative studies of clinical, radiologic, and neuropsychological features. These criteria are intended as a guide for case definition in neuroepidemiologic studies, stratified by levels of certainty (definite, probable, and possible). They await testing and validation and will be revised as more information becomes available.", "title": "" }, { "docid": "b425716ec96c3fadb73d6475d1278c06", "text": "This paper presents a transformerless single-phase inverter topology based on a modified H-bridge-based multilevel converter. The topology comprises two legs, namely, a usual two-level leg and a T-type leg. The latter is based on a usual two-level leg, which has been modified to gain access to the midpoint of the split dc-link by means of a bidirectional switch. The topology is referred to as an asymmetrical T-type five-level (5L-T-AHB) inverter. An ad hoc modulation strategy based on sinusoidal pulsewidth modulation is also presented to control the 5L-T-AHB inverter, where the two-level leg is commuted at fundamental frequency. Numerical and experimental results show that the proposed 5L-T-AHB inverter achieves high efficiency, exhibits reduced leakage currents, and complies with the transformerless norms and regulations, which makes it suitable for the transformerless PV inverters market. (This updated version includes experimental evidence, considerations for practical implementation, efficiency studies, visualization of semiconductor losses distribution, a deeper and corrected common-mode analysis, and an improved notation, among other modifications.)", "title": "" }, { "docid": "7340823ae6afd072ab186ec8aaad0d44", "text": "Blood flow measurement using Doppler ultrasound has become a useful tool for diagnosing cardiovascular diseases and as a physiological monitor. 
Recently, pocket-sized ultrasound scanners have been introduced for portable diagnosis. The present paper reports the implementation of a portable ultrasound pulsed-wave (PW) Doppler flowmeter using a smartphone. A 10-MHz ultrasonic surface transducer was designed for the dynamic monitoring of blood flow velocity. The directional baseband Doppler shift signals were obtained using a portable analog circuit system. After hardware processing, the Doppler signals were fed directly to a smartphone for Doppler spectrogram analysis and display in real time. To the best of our knowledge, this is the first report of the use of this system for medical ultrasound Doppler signal processing. A Couette flow phantom, consisting of two parallel disks with a 2-mm gap, was used to evaluate and calibrate the device. Doppler spectrograms of porcine blood flow were measured using this stand-alone portable device under the pulsatile condition. Subsequently, in vivo portable system verification was performed by measuring the arterial blood flow of a rat and comparing the results with the measurement from a commercial ultrasound duplex scanner. All of the results demonstrated the potential for using a smartphone as a novel embedded system for portable medical ultrasound applications.", "title": "" }, { "docid": "301e061163b115126b8f0b9851ed265c", "text": "Pressure ulcers are a common problem among older adults in all health care settings. Prevalence and incidence estimates vary by setting, ulcer stage, and length of follow-up. Risk factors associated with increased pressure ulcer incidence have been identified. Activity or mobility limitation, incontinence, abnormalities in nutritional status, and altered consciousness are the most consistently reported risk factors for pressure ulcers. Pain, infectious complications, prolonged and expensive hospitalizations, persistent open ulcers, and increased risk of death are all associated with the development of pressure ulcers. The tremendous variability in pressure ulcer prevalence and incidence in health care settings suggests that opportunities exist to improve outcomes for persons at risk for and with pressure ulcers.", "title": "" }, { "docid": "792907ad8871e63f6b39d344452ca66a", "text": "This paper presents the design of a hardware-efficient, low-power image processing system for next-generation wireless endoscopy. The presented system is composed of a custom CMOS image sensor, a dedicated image compressor, a forward error correction (FEC) encoder protecting radio transmitted data against random and burst errors, a radio data transmitter, and a controller supervising all operations of the system. The most significant part of the system is the image compressor. It is based on an integer version of a discrete cosine transform and a novel, low complexity yet efficient, entropy encoder making use of an adaptive Golomb-Rice algorithm instead of Huffman tables. The novel hardware-efficient architecture designed for the presented system enables on-the-fly compression of the acquired image. Instant compression, together with elimination of the necessity of retransmitting erroneously received data by their prior FEC encoding, significantly reduces the size of the required memory in comparison to previous systems. The presented system was prototyped in a single, low-power, 65-nm field programmable gate arrays (FPGA) chip. 
Its power consumption is low and comparable to other application-specific-integrated-circuits-based systems, despite FPGA-based implementation.", "title": "" }, { "docid": "67db336c7de0cff2df34e265a219e838", "text": "Machine reading aims to automatically extract knowledge from text. It is a long-standing goal of AI and holds the promise of revolutionizing Web search and other fields. In this paper, we analyze the core challenges of machine reading and show that statistical relational AI is particularly well suited to address these challenges. We then propose a unifying approach to machine reading in which statistical relational AI plays a central role. Finally, we demonstrate the promise of this approach by presenting OntoUSP, an end-toend machine reading system that builds on recent advances in statistical relational AI and greatly outperforms state-of-theart systems in a task of extracting knowledge from biomedical abstracts and answering questions.", "title": "" }, { "docid": "397f6c39825a5d8d256e0cc2fbba5d15", "text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.", "title": "" }, { "docid": "84e94437c5fb10fbb07b52f4d24378bf", "text": "We present the concept of logarithmic computation for neural networks. We explore how logarithmic encoding of non-uniformly distributed weights and activations is preferred over linear encoding at resolutions of 4 bits and less. Logarithmic encoding enables networks to 1) achieve higher classification accuracies than fixed-point at low resolutions and 2) eliminate bulky digital multipliers. We demonstrate our ideas in the hardware realization, LogNet, an inference engine using only bitshift-add convolutions and weights distributed across the computing fabric. The opportunities from hardware work in synergy with those from the algorithm domain.", "title": "" }, { "docid": "5b03f69a2e7a21e5e1144080b604af2e", "text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely connected graphs, and can handle different constructions of Laplacian operators. 
Extensive experimental results show the superior performance of our approach, in comparison to other spectral domain convolutional architectures, on spectral image classification, community detection, vertex classification, and matrix completion tasks.", "title": "" }, { "docid": "78ce9ddb8fbfeb801455a76a3a6b0af2", "text": "Deeply embedded domain-specific languages (EDSLs) intrinsically compromise programmer experience for improved program performance. Shallow EDSLs complement them by trading program performance for good programmer experience. We present Yin-Yang, a framework for DSL embedding that uses Scala macros to reliably translate shallow EDSL programs to the corresponding deep EDSL programs. The translation allows program prototyping and development in the user-friendly shallow embedding, while the corresponding deep embedding is used where performance is important. The reliability of the translation completely conceals the deep embedding from the user. For the DSL author, Yin-Yang automatically generates the deep DSL embeddings from their shallow counterparts by reusing the core translation. This obviates the need for code duplication and leads to reliability by construction.", "title": "" }, { "docid": "f451ca49b2bca088632ad055d78fbf2a", "text": "Intrabody communications (IBC) is a novel communication technique which uses the human body itself as the signal propagation medium. This communication method is categorized as a physical layer of IEEE 802.15.6 or Wireless Body Area Network (WBAN) standard. It is significant to investigate the IBC systems to improve the transceiver design characteristics such as data rate and power consumption. In this paper, we propose a new IBC transmitter implementing a pulse position modulation (PPM) scheme based on impulse radio. An FPGA is employed to implement the architecture of a carrier-free PPM transmission. Results demonstrate a data rate of 1.56 Mb/s, which is suitable for the galvanic coupling IBC method. The PPM transmitter power consumption is 2.0 mW with a 3.3 V supply voltage. Having energy efficiency as low as 1.28 nJ/bit provides an enhanced solution for portable biomedical applications based on body area networks.", "title": "" }, { "docid": "966205d925e2c0840fcc9064fa450462", "text": "Three different algorithms for obstacle detection are presented in this paper, each based on different assumptions. The first two algorithms are qualitative in that they return only yes/no answers regarding the presence of obstacles in the field of view; no 3D reconstruction is performed. They have the advantage of fast determination of the existence of obstacles in a scene based on the solvability of a linear system. The first algorithm uses information about the ground plane, while the second only assumes that the ground is planar. The third algorithm is quantitative in that it continuously estimates the ground plane and reconstructs partial 3D structures by determining the height above the ground plane of each point in the scene. Experimental results are presented for real and simulated data, and the performance of the three algorithms under different noise levels is compared in simulation. We conclude that in terms of the robustness of performance, the third algorithm is superior to the other two.", "title": "" }, { "docid": "f3d7c9ec8238de96c325de69786c9091", "text": "This paper presents an algorithm for automatically detecting bone contours from hand radiographs using active contours. 
Prior knowledge is first used to locate initial contours for the snakes inside each bone of interest. Next, an adaptive snake algorithm is applied so that parameters are properly adjusted for each bone specifically. We introduce a novel truncation technique to prevent the external forces of the snake from pulling the contour outside the bone boundaries, yielding excellent results.", "title": "" }, { "docid": "f13ffbb31eedcf46df1aaecfbdf61be9", "text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.", "title": "" } ]
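One record in the list above (the wireless-endoscopy image compressor) replaces Huffman tables with an adaptive Golomb–Rice entropy coder. The fragment below is only a minimal, non-adaptive Rice coder for non-negative integers, meant to illustrate the idea; the divisor parameter k and the sample residuals are arbitrary choices, not values from that hardware design.

```python
def rice_encode(value, k):
    """Encode a non-negative integer as a unary quotient plus a k-bit remainder."""
    q, r = divmod(value, 1 << k)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_encode_block(values, k):
    return "".join(rice_encode(v, k) for v in values)

# Toy prediction residuals such as an image coder might emit (small values dominate).
residuals = [0, 1, 3, 2, 0, 7, 1]
bits = rice_encode_block(residuals, k=2)
print(bits, f"-> {len(bits)} bits versus {8 * len(residuals)} bits for raw bytes")
```

An adaptive variant, as mentioned in that record, would additionally re-estimate k on the fly from the magnitude of recent residuals.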
scidocsrr
dbaf6f105044a7944eb6467095edbc1f
Why do narcissists take more risks ? Testing the roles of perceived risks and benefits of risky behaviors
[ { "docid": "0f9b073461047d698b6bba8d9ee7bff2", "text": "Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.", "title": "" } ]
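The passage above reports a suppression relationship: each childhood predictor of narcissism appears stronger once the other predictor is included in the model. Purely as an illustration of that statistical pattern — simulated data, not the study's — the sketch below builds two negatively correlated predictors that both drive an outcome and shows that the joint regression coefficients are larger than the zero-order correlations suggest.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two predictors that partly cancel each other out (negatively correlated).
shared = rng.normal(size=n)
x1 = rng.normal(size=n) - shared
x2 = rng.normal(size=n) + shared
y = x1 + x2 + rng.normal(size=n)        # both predictors truly matter

print("cor(x1, x2) =", round(np.corrcoef(x1, x2)[0, 1], 2))
print("cor(x1, y)  =", round(np.corrcoef(x1, y)[0, 1], 2))
print("cor(x2, y)  =", round(np.corrcoef(x2, y)[0, 1], 2))

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("coefficients for x1 and x2 when modeled together:", np.round(beta[1:], 2))
```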
[ { "docid": "1420ca15b9abeb003cee176d8825bad9", "text": "Academic study of cloud computing is an emerging research field in Saudi Arabia. Saudi Arabia represents the largest economy in the Arab Gulf region, which makes it a potential market of cloud computing technologies. This cross-sectional exploratory empirical research is based on technology–organization–environment (TOE) framework, targeting higher education institutions. In this study, the factors that affect the cloud adoption by higher education institutions were identified and tested using SmartPLS software, a powerful statistical analysis tool for structural equation modeling. Three factors were found significant in this context. Relative advantage, complexity and data concern were the most significant factors. The model explained 47.9 % of the total adoption variance. The findings offer education institutions and cloud computing service providers with better understanding of factors affecting the adoption of cloud computing.", "title": "" }, { "docid": "3a090b6fdf404e5262c7c36e3ae5879e", "text": "Background: While several benefits are attributed to the Internet and video games, an important proportion of the population presents symptoms related to possible new technological addictions and there has been little discussion of treatment of problematic technology use. Although demand for knowledge is growing, only a small number of treatments have been described. Objective: To conduct a systematic review of the literature, to establish Cognitive Behavioral Therapy (CBT) as a possible strategy for treating Internet and video game addictions. Method: The review was conducted in the following databases: Science Direct on Line, PubMed, PsycINFO, Cochrane Clinical Trials Library, BVS and SciELO. The keywords used were: Cognitive Behavioral Therapy; therapy; treatment; with association to the terms Internet addiction and video game addiction. Given the scarcity of studies in the field, no restrictions to the minimum period of publication were made, so that articles found until October 2013 were accounted. Results: Out of 72 articles found, 23 described CBT as a psychotherapy for Internet and video game addiction. The manuscripts showed the existence of case studies and protocols with satisfactory efficacy. Discussion: Despite the novelty of technological dependencies, CBT seems to be applicable and allows an effective treatment for this population. Lemos IL, et al. / Rev Psiq Clín. 2014;41(3):82-8", "title": "" }, { "docid": "5fd10b2277918255133f2e37a55e1103", "text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. 
For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.", "title": "" }, { "docid": "7aad80319743ac72d2c4e117e5f831fa", "text": "In this letter, we propose a novel method for classifying ambulatory activities using eight plantar pressure sensors within smart shoes. Using these sensors, pressure data of participants can be collected regarding level walking, stair descent, and stair ascent. Analyzing patterns of the ambulatory activities, we present new features with which to describe the ambulatory activities. After selecting critical features, a multi-class support vector machine algorithm is applied to classify these activities. Applying the proposed method to the experimental database, we obtain recognition rates up to 95.2% after six steps.", "title": "" }, { "docid": "4bbe3b4512ff5bf18aa17d54b6645049", "text": "The aim of this study is to find a minimal size of text samples for authorship attribution that would provide stable results independent of random noise. A few controlled tests for different sample lengths, languages and genres are discussed and compared. Although I focus on Delta methodology, the results are valid for many other multidimensional methods relying on word frequencies and \"nearest neighbor\" classifications.", "title": "" }, { "docid": "4cfe999fa7b2594327b6109084f0164f", "text": "A large number of post-transcriptional modifications of transfer RNAs (tRNAs) have been described in prokaryotes and eukaryotes. They are known to influence their stability, turnover, and chemical/physical properties. A specific subset of tRNAs contains a thiolated uridine residue at the wobble position to improve the codon-anticodon interaction and translational accuracy. The proteins involved in tRNA thiolation are reminiscent of prokaryotic sulfur transfer reactions and of the ubiquitylation process in eukaryotes. In plants, some of the proteins involved in this process have been identified and show a high degree of homology to their non-plant equivalents. For other proteins, the identification of the plant homologs is much less clear, due to the low conservation in protein sequence. This manuscript describes the identification of CTU2, the second CYTOPLASMIC THIOURIDYLASE protein of Arabidopsis thaliana. CTU2 is essential for tRNA thiolation and interacts with ROL5, the previously identified CTU1 homolog of Arabidopsis. CTU2 is ubiquitously expressed, yet its activity seems to be particularly important in root tissue. A ctu2 knock-out mutant shows an alteration in root development. The analysis of CTU2 adds a new component to the so far characterized protein network involved in tRNA thiolation in Arabidopsis. 
CTU2 is essential for tRNA thiolation as a ctu2 mutant fails to perform this tRNA modification. The identified Arabidopsis CTU2 is the first CTU2-type protein from plants to be experimentally verified, which is important considering the limited conservation of these proteins between plant and non-plant species. Based on the Arabidopsis protein sequence, CTU2-type proteins of other plant species can now be readily identified.", "title": "" }, { "docid": "5b96fcbe3ac61265ef5407f4e248193e", "text": "Modelling the similarity of sentence pairs is an important problem in natural language processing and information retrieval, with applications in tasks such as paraphrase identification and answer selection in question answering. The Multi-Perspective Convolutional Neural Network (MP-CNN) is a model that improved previous state-of-the-art models in 2015 and has remained a popular model for sentence similarity tasks. However, until now, there has not been a rigorous study of how the model actually achieves competitive accuracy. In this thesis, we report on a series of detailed experiments that break down the contribution of each component of MP-CNN towards its statistical accuracy and how they affect model robustness. We find that two key components of MP-CNN are non-essential to achieve competitive accuracy and they make the model less robust to changes in hyperparameters. Furthermore, we suggest simple changes to the architecture and experimentally show that we improve the accuracy of MP-CNN when we remove these two major components of MP-CNN and incorporate these small changes, pushing its scores closer to more recent works on competitive semantic textual similarity and answer selection datasets, while using eight times fewer parameters.", "title": "" }, { "docid": "d11fc4a2a799356380354af144aafe37", "text": "[Context and motivation] For the past several years, Cyber Physical Systems (CPS) have emerged as a new system type like embedded systems or information systems. CPS are highly context-dependent, observe the world through sensors, act upon it through actuators, and communicate with one another through powerful networks. It has been widely argued that these properties pose new challenges for the development process. [Question/problem] Yet, how these CPS properties impact the development process has thus far been subject to conjecture. An investigation of a development process from a cyber physical perspective has thus far not been undertaken. [Principal ideas/results] In this paper, we conduct initial steps into such an investigation. We present a case study involving the example of a software simulator of an airborne traffic collision avoidance system. [Contribution] The goal of the case study is to investigate which of the challenges from the literature impact the development process of CPS the most.", "title": "" }, { "docid": "275cdc97004df1886c8da247c7206a71", "text": "This paper considers optimal synthesis of a special type of four-bar linkages. Combination of this optimal four-bar linkage with one of its cognates and elimination of two redundant cognates will result in a Watt's six-bar mechanism, which generates straight and parallel motion. This mechanism can be utilized for legged machines. The advantage of this mechanism is that the leg remains straight during its contact period and because of its parallel motion, the legs can be as wide as desired to increase contact area and decrease the number of legs required to keep the body's stability statically and dynamically. 
“Genetic algorithm” optimization method is used to find optimal lengths. It is especially useful for problems like the coupler curve equation which are completely nonlinear or extremely difficult to solve.", "title": "" }, { "docid": "f8062f3ece1ff887047303d53cf37323", "text": "The task of automatically tracking the visual attention in dynamic visual scenes is highly challenging. To approach it, we propose a Bayesian online learning algorithm. As the visual scene changes and new objects appear, based on a mixture model, the algorithm can identify and tell visual saccades (transitions) from visual fixation clusters (regions of interest). The approach is evaluated on real-world data, collected from eye-tracking experiments in driving sessions.", "title": "" }, { "docid": "199527da97881d37606ddf2416b46fe4", "text": "Driven by the demands on healthcare resulting from the shift toward more sedentary lifestyles, considerable effort has been devoted to the monitoring and classification of human activity. In previous studies, various classification schemes and feature extraction methods have been used to identify different activities from a range of different datasets. In this paper, we present a comparison of 14 methods to extract classification features from accelerometer signals. These are based on the wavelet transform and other well-known time- and frequency-domain signal characteristics. To allow an objective comparison between the different features, we used two datasets of activities collected from 20 subjects. The first set comprised three commonly used activities, namely, level walking, stair ascent, and stair descent, and the second a total of eight activities. Furthermore, we compared the classification accuracy for each feature set across different combinations of three different accelerometer placements. The classification analysis has been performed with robust subject-based cross-validation methods using a nearest-neighbor classifier. The findings show that, although the wavelet transform approach can be used to characterize nonstationary signals, it does not perform as accurately as frequency-based features when classifying dynamic activities performed by healthy subjects. Overall, the best feature sets achieved over 95% intersubject classification accuracy.", "title": "" }, { "docid": "2d4357831f83de026759776e019934da", "text": "Mapping the physical location of nodes within a wireless sensor network (WSN) is critical in many applications such as tracking and environmental sampling. Passive RFID tags pose an interesting solution to localizing nodes because an outside reader, rather than the tag, supplies the power to the tag. Thus, utilizing passive RFID technology allows a localization scheme to not be limited to objects that have wireless communication capability because the technique only requires that the object carries a RFID tag. This paper illustrates a method in which objects can be localized without the need to communicate received signal strength information between the reader and the tagged item. The method matches tag count percentage patterns under different signal attenuation levels to a database of tag count percentages, attenuations and distances from the base station reader.", "title": "" }, { "docid": "be5e1336187b80bc418b2eb83601fbd4", "text": "Pedestrian detection has been an important problem for decades, given its relevance to a number of applications in robotics, including driver assistance systems, road scene understanding and surveillance systems. 
The two main practical requirements for fielding such systems are very high accuracy and real-time speed: we need pedestrian detectors that are accurate enough to be relied on and are fast enough to run on systems with limited compute power. This paper addresses both of these requirements by combining very accurate deep-learning-based classifiers within very efficient cascade classifier frameworks. Deep neural networks (DNN) have been shown to excel at classification tasks [5], and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, which is both very fast and accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second (FPS). The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is the first work we are aware of that achieves high accuracy while running in real-time. To achieve this, we combine a fast cascade [2] with a cascade of classifiers, which we propose to be DNNs. Our approach is unique, as it is the only one to produce a pedestrian detector at real-time speeds (15 FPS) that is also very accurate. Figure 1 visualizes existing methods as plotted on the accuracy versus computational time axis, measured on the challenging Caltech pedestrian detection benchmark [4]. As can be seen in this figure, our approach is the only one to reside in the high accuracy, high speed region of space, which makes it particularly appealing for practical applications. Fast Deep Network Cascade. Our main architecture is a cascade structure in which we take advantage of the fast features for elimination, VeryFast [2] as an initial stage and combine it with small and large deep networks [1, 5] for high accuracy. The VeryFast algorithm is a cascade itself, but of boosting classifiers. It reduces recall with each stage, producing a high average miss rate in the end. Since the goal is to eliminate many non-pedestrian patches and at the same time keep the recall high, we used only 10% of the stages in that cascade. Namely, we use a cascade of only 200 stages, instead of the 2000 in the original work. The first stage of our deep cascade processes all image patches that have high confidence values and pass through the VeryFast classifier. We here utilize the idea of a tiny convolutional network proposed by our prior work [1]. The tiny deep network has three layers only and features a 5x5 convolution, a 1x1 convolution and a very shallow fully-connected layer of 512 units. It reduces the massive computational time that is needed to evaluate a full DNN at all candidate locations filtered by the previous stage. The speedup produced by the tiny network is a crucial component in achieving real-time performance in our fast cascade method. The baseline deep neural network is based on the original deep network of Krizhevsky et al. [5]. As mentioned, this network in general is extremely slow to be applied alone. To achieve real-time speeds, we first apply it to only the remaining filtered patches from the previous two stages. Another key difference is that we reduced the depths of some of the convolutional layers and the sizes of the receptive fields, which is specifically done to gain speed advantage. Runtime. 
Our deep cascade works at 67ms on a standard NVIDIA K20 Tesla GPU per 640x480 image, which is a runtime of 15 FPS. The time breakdown is as follows. The soft-cascade takes about 7 milliseconds (ms). About 1400 patches are passed through per image from the fast cascade. The tiny DNN runs at 0.67 ms per batch of 128, so it can process the patches in 7.3 ms. The final stage of the cascade (which is the baseline classifier) takes about 53ms. This is an overall runtime of 67ms. Experimental evaluation. We evaluate the performance of the Fast Deep Network Cascade using the training and test protocols established in the Caltech pedestrian benchmark [4]. We tested several scenarios by training on the Caltech data only, denoted as DeepCascade, on an inde… Figure 1: Performance of pedestrian detection methods on the accuracy vs speed axis. Our DeepCascade method achieves both smaller miss rates and real-time speeds. Methods for which the runtime is more than 5 seconds per image, or is unknown, are plotted on the left-hand side. The SpatialPooling+/Katamari methods use additional motion information.", "title": "" }, { "docid": "9a4bdfe80a949ec1371a917585518ae4", "text": "This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change.", "title": "" }, { "docid": "0bf292fdbc04805b4bd671d6f5099cf7", "text": "We consider the stochastic optimization of finite sums over a Riemannian manifold where the functions are smooth and convex. We present MASAGA, an extension of the stochastic average gradient variant SAGA on Riemannian manifolds. SAGA is a variance-reduction technique that typically outperforms methods that rely on expensive full-gradient calculations, such as the stochastic variance-reduced gradient method. We show that MASAGA achieves a linear convergence rate with uniform sampling, and we further show that MASAGA achieves a faster convergence rate with non-uniform sampling. Our experiments show that MASAGA is faster than the recent Riemannian stochastic gradient descent algorithm for the classic problem of finding the leading eigenvector corresponding to the maximum eigenvalue.", "title": "" }, { "docid": "8c35fd3040e4db2d09e3d6dc0e9ae130", "text": "Internet of Things refers to a combination of physical devices having sensors and connection capabilities enabling them to interact with each other (machine to machine) and to be controlled remotely via a cloud engine. Success of an IoT device depends on the ability of systems and devices to securely sample, collect, and analyze data, and then transmit over link, protocol, or media selections based on stated requirements, all without human intervention. Among the requirements of the IoT, connectivity is paramount. It's hard to imagine that a single communication technology can address all the use cases possible in home, industry and smart cities. Along with the existing low power technologies like Zigbee, Bluetooth and 6LoWPAN, 802.11 WiFi standards are also making their way into the market with their own advantages in high range and better speed. Along with IEEE, the WiFi Alliance has a new standard for proximity applications. 
Neighbor Awareness Network (NAN), popularly known as WiFi Aware, is the standard that enables low-power discovery over WiFi and can light up many proximity-based use cases. In this paper we discuss how NAN can influence the emerging IoT market as a connectivity solution for proximity assessment and contextual notifications, with its benefits in some of the scenarios. When we consider WiFi, the infrastructure already exists in terms of access points all around in public, and smartphones or tablets come with WiFi as a default feature; hence enabling NAN can be easy, and if we can pair them with IoT, many innovative use cases can evolve.", "title": "" }, { "docid": "8bae8e7937f4c9a492a7030c62d7d9f4", "text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.", "title": "" }, { "docid": "b1bced32626640b0078f4782d6ab1d40", "text": "This report summarizes my overview talk on software clone detection research. It first discusses the notion of software redundancy, cloning, duplication, and similarity. Then, it describes various categorizations of clone types, empirical studies on the root causes for cloning, current opinions and wisdom of consequences of cloning, empirical studies on the evolution of clones, ways to remove, to avoid, and to detect them, empirical evaluations of existing automatic clone detector performance (such as recall, precision, time and space consumption) and their fitness for a particular purpose, benchmarks for clone detector evaluations, presentation issues, and last but not least application of clone detection in other related fields. After each summary of a subarea, I am listing open research questions.", "title": "" }, { "docid": "db2ebec1eeec213a867b10fe9550bfc7", "text": "Photovoltaic method is very popular for generating electrical power. Its energy production depends on solar radiation on that location and orientation. Shadow rapidly decreases performance of the Photovoltaic system. In this research, it is being investigated how exactly real-time shadow can be detected. In principle, 3D city models containing roof structure, vegetation, thematically differentiated surface and texture, are suitable to simulate exact real-time shadow. An automated procedure to measure exact shadow effect from the 3D city models and a long-term simulation model to determine the produced energy from the photovoltaic system is being developed here. In this paper, a method for detecting shadow for direct radiation has been discussed with its result using a 3D city model to perform a solar energy potentiality analysis. Figure 1. Partial Shadow on PV array (Reisa 2011). The former military area Scharnhauser Park, shown in Figure 2, has been chosen as the case study area for this research. It is an urban conversion and development area of 150 hectares in the community of Ostfildern on the southern border near Stuttgart with 7000 inhabitants. 
About 80% of the heating energy demand of the whole area is supplied by renewable energies, and a small portion of electricity is delivered by the existing rooftop photovoltaic system (Tereci et al., 2009). This has been selected as the study area for this research because of the availability of CityGML and LIDAR data, building footprints and existing photovoltaic cells on roofs and façades. The Land Survey Office Baden-Württemberg provides the laser scanning data with a density of 4 points per square meter at a high resolution of 0.2 meter. The paper is organized as follows: a brief introduction at the beginning explains the background of photovoltaic energy and the motivation for this research; then the effect of shadow on photovoltaic cells and a methodology for detecting shadow from direct radiation are discussed; finally, results of applying the methodology are shown and a brief outline of the future work of this research is presented.", "title": "" }, { "docid": "f7edc938429e5f085e355004325b7698", "text": "We present a large scale unified natural language inference (NLI) dataset for providing insight into how well sentence representations capture distinct types of reasoning. We generate a large-scale NLI dataset by recasting 11 existing datasets from 7 different semantic tasks. We use our dataset of approximately half a million context-hypothesis pairs to test how well sentence encoders capture distinct semantic phenomena that are necessary for general language understanding. Some phenomena that we consider are event factuality, named entity recognition, figurative language, gendered anaphora resolution, and sentiment analysis, extending prior work that included semantic roles and frame semantic parsing. Our dataset will be available at https://www.decomp.net, to grow over time as additional resources are recast.", "title": "" } ]
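Several records in the list above rely on stochastic search; the four-bar-linkage paper, for example, uses a genetic algorithm to find optimal link lengths. The sketch below is a bare-bones real-valued genetic algorithm (tournament selection, blend crossover, Gaussian mutation) minimising a stand-in objective; the toy fitness function, bounds, and hyper-parameters are assumptions for illustration only, not values from that paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Stand-in objective; a real application would score, e.g., the coupler-curve
    # error of a candidate linkage whose link lengths are given by x.
    return float(np.sum((x - np.array([2.0, 1.0, 3.0, 1.5])) ** 2))

def genetic_search(dim=4, pop_size=40, generations=200, bounds=(0.5, 5.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        children = []
        for _ in range(pop_size):
            a = rng.integers(pop_size, size=2)          # tournament selection
            b = rng.integers(pop_size, size=2)
            p1 = pop[a[np.argmin(scores[a])]]
            p2 = pop[b[np.argmin(scores[b])]]
            w = rng.random(dim)                          # blend crossover
            child = w * p1 + (1.0 - w) * p2
            child += rng.normal(0.0, 0.05, dim)          # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    return pop[np.argmin([fitness(ind) for ind in pop])]

print("best candidate found:", np.round(genetic_search(), 2))
```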
scidocsrr
47052b6522116f9277c62e67fdf9cc95
The Reversible Residual Network: Backpropagation Without Storing Activations
[ { "docid": "7ec6540b44b23a0380dcb848239ccac4", "text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.", "title": "" }, { "docid": "4d2be7aac363b77c6abd083947bc28c7", "text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "title": "" }, { "docid": "b2fc60b400b2b8ed3425658e3a1e9217", "text": "We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(√n) memory to train an n-layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory giving a more memory-efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30% additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.", "title": "" }, { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. 
The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" } ]
[ { "docid": "79564b938dde94306a2a142240bf30ea", "text": "Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual efforts. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environment. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.", "title": "" }, { "docid": "bbb6b192974542b165d3f7a0d139a8e1", "text": "While gamification is gaining ground in business, marketing, corporate management, and wellness initiatives, its application in education is still an emerging trend. This article presents a study of the published empirical research on the application of gamification to education. The study is limited to papers that discuss explicitly the effects of using game elements in specific educational contexts. It employs a systematic mapping design. Accordingly, a categorical structure for classifying the research results is proposed based on the extracted topics discussed in the reviewed papers. The categories include gamification design principles, game mechanics, context of applying gamification (type of application, educational level, and academic subject), implementation, and evaluation. 
By mapping the published works to the classification criteria and analyzing them, the study highlights the directions of the currently conducted empirical research on applying gamification to education. It also indicates some major obstacles and needs, such as the need for proper technological support, for controlled studies demonstrating reliable positive or negative results of using specific game elements in particular educational contexts, etc. Although most of the reviewed papers report promising results, more substantial empirical research is needed to determine whether both extrinsic and intrinsic motivation of the learners can be influenced by gamification.", "title": "" }, { "docid": "072a6a274820e7dea5d811906f81d244", "text": "Analysis of vascular geometry is important in many medical imaging applications, such as retinal, pulmonary, and cardiac investigations. In order to make reliable judgments for clinical usage, accurate and robust segmentation methods are needed. Due to the high complexity of biological vasculature trees, manual identification is often too time-consuming and tedious to be used in practice. To design an automated and computerized method, a major challenge is that the appearance of vasculatures in medical images has great variance across modalities and subjects. Therefore, most existing approaches are specially designed for a particular task, lacking the flexibility to be adapted to other circumstances. In this paper, we present a generic approach for vascular structure identification from medical images, which can be used for multiple purposes robustly. The proposed method uses the state-of-the-art deep convolutional neural network (CNN) to learn the appearance features of the target. A Principal Component Analysis (PCA)-based nearest neighbor search is then utilized to estimate the local structure distribution, which is further incorporated within the generalized probabilistic tracking framework to extract the entire connected tree. Qualitative and quantitative results over retinal fundus data demonstrate that the proposed framework achieves comparable accuracy as compared with state-of-the-art methods, while efficiently producing more information regarding the candidate tree structure.", "title": "" }, { "docid": "824480b0f5886a37ca1930ce4484800d", "text": "Conduction loss reduction technique using a small resonant capacitor for a phase shift full bridge converter with clamp diodes is proposed in this paper. The proposed technique can be implemented simply by adding a small resonant capacitor beside the leakage inductor of transformer. Since the voltage across the small resonant capacitor is applied to the small leakage inductor of transformer during freewheeling period, the primary current can be decreased rapidly. This results in the reduced conduction loss on the secondary side of transformer while the proposed technique can still guarantee the wide ZVS ranges. The operational principles and analysis are presented. Experimental results show that the proposed reduction technique of conduction loss can be operated properly.", "title": "" }, { "docid": "ecea52064dd97ee4acdd11cb2c84f8cf", "text": "Occupational therapists have used activity analysis to ensure the therapeutic use of activities. Recently, they have begun to explore the affective components of activities. This study explores the feelings (affective responses) that chronic psychiatric patients have toward selected activities commonly used in occupational therapy. 
Twenty-two participating chronic psychiatric patients were randomly assigned to one of three different activity groups: cooking, craft, or sensory awareness. Immediately following participation, each subject was asked to rate the activity by using Osgood's semantic differential, which measures the evaluation, power, and action factors of affective meaning. Data analysis revealed significant differences between the cooking activity and the other two activities on the evaluation factor. The fact that the three activities were rated differently is evidence that different activities can elicit different responses in one of the target populations of occupational therapy. The implications of these findings for occupational therapists are discussed and areas of future research are indicated.", "title": "" }, { "docid": "23ee528e0efe7c4fec7f8cda7e49a8dd", "text": "The development of reliability-based design criteria for surface ship structures needs to consider the following three components: (1) loads, (2) structural strength, and (3) methods of reliability analysis. A methodology for reliability-based design of ship structures is provided in this document. The methodology consists of the following two approaches: (1) direct reliability-based design, and (2) load and resistance factor design (LRFD) rules. According to this methodology, loads can be linearly or nonlinearly treated. Also in assessing structural strength, linear or nonlinear analysis can be used. The reliability assessment and reliability-based design can be performed at several levels of a structural system, such as at the hull-girder, grillage, panel, plate and detail levels. A rational treatment of uncertainty is suggested by considering all its types. Also, failure definitions can have significant effects on the assessed reliability, or resulting reliability-based designs. A method for defining and classifying failures at the system level is provided. The method considers the continuous nature of redundancy in ship structures. A bibliography is provided at the end of this document to facilitate future implementation of the methodology.", "title": "" }, { "docid": "0356445aef8821582d18234683b62194", "text": "Supervisory control and data acquisition (SCADA) systems are large-scale industrial control systems often spread across geographically dispersed locations that let human operators control entire physical systems, from a single control room. Early multi-site SCADA systems used closed networks and proprietary industrial communication protocols like Modbus, DNP3, etc., to reach remote sites. But with time it has become more convenient and more cost-effective to connect them to the Internet. However, internet connections to SCADA systems build in new vulnerabilities, as SCADA systems were not designed with internet security in mind. This can become a matter of national security if these systems are power plants, water treatment facilities, or other pieces of critical infrastructure. Compared to IT systems, SCADA systems have a higher requirement concerning reliability, latency and uptime, so it is not always feasible to apply IT security measures deployed in IT systems. This paper provides an overview of security issues and threats in SCADA networks. Next, attention is focused on security assessment of the SCADA. This is followed by an overview of relevant SCADA security solutions. 
Finally, our proposed security solution approach, which is embedded in a bump-in-the-wire device, is discussed.", "title": "" }, { "docid": "7f54157faf8041436174fa865d0f54a8", "text": "The goal of robot learning from demonstration is to have a robot learn from watching a demonstration of the task to be performed. In our approach to learning from demonstration, the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task. A policy is computed based on the learned reward function and task model. Lessons learned from an implementation on an anthropomorphic robot arm using a pendulum swing-up task include: simply mimicking demonstrated motions is not adequate to perform this task; a task planner can use a learned model and reward function to compute an appropriate policy; this model-based planning process supports rapid learning; both parametric and nonparametric models can be learned and used; and incorporating a task-level direct learning component, which is non-model-based, in addition to the model-based planner is useful in compensating for structural modeling errors and slow model learning.", "title": "" }, { "docid": "013270914bfee85265f122b239c9fc4c", "text": "The current study aims to identify similarities and distinctions between irony and sarcasm by adopting quantitative sentiment analysis as well as qualitative content analysis. The result of quantitative sentiment analysis shows that sarcastic tweets are used with more positive tweets than ironic tweets. The result of content analysis corresponds to the result of quantitative sentiment analysis in identifying the aggressiveness of sarcasm. On the other hand, the content analysis shows that irony has two senses. The first sense of irony is equal to aggressive sarcasm with speaker awareness. Thus, tweets of the first sense of irony may attack a specific target, and the speaker may tag his/her tweet as irony because the tweet itself is ironic. These tweets, though tagged as irony, are in fact sarcastic tweets. In contrast, tweets of the second sense of irony are tagged to classify an event as ironic. However, from the distribution in sentiment analysis and examples in content analysis, irony seems to be more broadly used in its second sense.", "title": "" }, { "docid": "f17a6c34a7b3c6a7bf266f04e819af94", "text": "BACKGROUND\nPatients with advanced squamous-cell non-small-cell lung cancer (NSCLC) who have disease progression during or after first-line chemotherapy have limited treatment options. This randomized, open-label, international, phase 3 study evaluated the efficacy and safety of nivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint-inhibitor antibody, as compared with docetaxel in this patient population.\n\n\nMETHODS\nWe randomly assigned 272 patients to receive nivolumab, at a dose of 3 mg per kilogram of body weight every 2 weeks, or docetaxel, at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival.\n\n\nRESULTS\nThe median overall survival was 9.2 months (95% confidence interval [CI], 7.3 to 13.3) with nivolumab versus 6.0 months (95% CI, 5.1 to 7.3) with docetaxel. The risk of death was 41% lower with nivolumab than with docetaxel (hazard ratio, 0.59; 95% CI, 0.44 to 0.79; P<0.001). At 1 year, the overall survival rate was 42% (95% CI, 34 to 50) with nivolumab versus 24% (95% CI, 17 to 31) with docetaxel. The response rate was 20% with nivolumab versus 9% with docetaxel (P=0.008). 
The median progression-free survival was 3.5 months with nivolumab versus 2.8 months with docetaxel (hazard ratio for death or disease progression, 0.62; 95% CI, 0.47 to 0.81; P<0.001). The expression of the PD-1 ligand (PD-L1) was neither prognostic nor predictive of benefit. Treatment-related adverse events of grade 3 or 4 were reported in 7% of the patients in the nivolumab group as compared with 55% of those in the docetaxel group.\n\n\nCONCLUSIONS\nAmong patients with advanced, previously treated squamous-cell NSCLC, overall survival, response rate, and progression-free survival were significantly better with nivolumab than with docetaxel, regardless of PD-L1 expression level. (Funded by Bristol-Myers Squibb; CheckMate 017 ClinicalTrials.gov number, NCT01642004.).", "title": "" }, { "docid": "6adf612b6a80494f9c9559170ab66670", "text": "In recent years, steganography and steganalysis have become two important areas of research that involve a number of applications. These two areas of research are important especially when reliable and secure information exchange is required. Steganography is an art of embedding information in a cover image without causing statistically significant variations to the cover image. Steganalysis is the technology that attempts to defeat steganography by detecting and extracting the hidden information. In this paper a comparative analysis is made to demonstrate the effectiveness of the proposed methods. The effectiveness of the proposed methods has been estimated by computing Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), processing time, and security. The analysis shows that the BER and PSNR are improved in the LSB method, but for the sake of security, DCT is the best method.", "title": "" }, { "docid": "491bf7103b8540748b58465ff9238fe7", "text": "We present a new approach for defining groups of populations that are geographically homogeneous and maximally differentiated from each other. As a by-product, it also leads to the identification of genetic barriers between these groups. The method is based on a simulated annealing procedure that aims to maximize the proportion of total genetic variance due to differences between groups of populations (spatial analysis of molecular variance; samova). Monte Carlo simulations were used to study the performance of our approach and, for comparison, the behaviour of the Monmonier algorithm, a procedure commonly used to identify zones of sharp genetic changes in a geographical area. Simulations showed that the samova algorithm indeed finds maximally differentiated groups, which do not always correspond to the simulated group structure in the presence of isolation by distance, especially when data from a single locus are available. In this case, the Monmonier algorithm seems slightly better at finding predefined genetic barriers, but can often lead to the definition of groups of populations not differentiated genetically. The samova algorithm was then applied to a set of European roe deer populations examined for their mitochondrial DNA (mtDNA) HVRI diversity. 
The inferred genetic structure seemed to confirm the hypothesis that some Italian populations were recently reintroduced from a Balkanic stock, as well as the differentiation of groups of populations possibly due to the postglacial recolonization of Europe or the action of a specific barrier to gene flow.", "title": "" }, { "docid": "aabed671a466730e273225d8ee572f73", "text": "It is essential to base instruction on a foundation of understanding of children’s thinking, but it is equally important to adopt the longer-term view that is needed to stretch these early competencies into forms of thinking that are complex, multifaceted, and subject to development over years, rather than weeks or months. We pursue this topic through our studies of model-based reasoning. We have identified four forms of models and related modeling practices that show promise for developing model-based reasoning. Models have the fortuitous feature of making forms of student reasoning public and inspectable—not only among the community of modelers, but also to teachers. Modeling provides feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice.", "title": "" }, { "docid": "fe59d96ddb5a777f154da5cf813c556c", "text": "For a set $P$ of $n$ points in the plane and an integer $k \\leq n$, consider the problem of finding the smallest circle enclosing at least $k$ points of $P$. We present a randomized algorithm that computes in $O( n k )$ expected time such a circle, improving over previously known algorithms. Further, we present a linear time $\\delta$-approximation algorithm that outputs a circle that contains at least $k$ points of $P$ and has radius less than $(1+\\delta)r_{opt}(P,k)$, where $r_{opt}(P,k)$ is the radius of the minimum circle containing at least $k$ points of $P$. The expected running time of this approximation algorithm is $O(n + n \\cdot\\min((1/k\\delta^3) \\log^2 (1/\\delta), k))$.", "title": "" }, { "docid": "647ba490d8507eeefb50387ab95bf59c", "text": "This study compares the cradle-to-gate total energy and major emissions for the extraction of raw materials, production, and transportation of the common wood building materials from the CORRIM 2004 reports. A life-cycle inventory produced the raw materials, including fuel resources and emission to air, water, and land for glued-laminated timbers, kiln-dried and green softwood lumber, laminated veneer lumber, softwood plywood, and oriented strandboard. Major findings from these comparisons were that the production of wood products, by the nature of the industry, uses a third of their energy consumption from renewable resources and the remainder from fossil-based, non-renewable resources when the system boundaries consider forest regeneration and harvesting, wood products and resin production, and transportation life-cycle stages. When the system boundaries are reduced to a gate-to-gate (manufacturing life-cycle stage) model for the wood products, the biomass component of the manufacturing energy increases to nearly 50% for most products and as high as 78% for lumber production from the Southeast. The manufacturing life-cycle stage consumed the most energy over all the products when resin is considered part of the production process. 
Extraction of log resources and transportation of raw materials for production had the least environmental impact.", "title": "" }, { "docid": "734638df47b05b425b0dcaaab11d886e", "text": "Satisfying the needs of users of online video streaming services requires not only managing the network Quality of Service (QoS), but also addressing the user's Quality of Experience (QoE) expectations. While QoS factors reflect the status of individual networks, they do not comprehensively capture the end-to-end features affecting the quality delivered to the user. In this situation, QoE management is the better option. However, traditionally used QoE management models require human interaction and have stringent requirements in terms of time and complexity. Thus, they fail to achieve successful performance in terms of real-timeliness, accuracy, scalability and adaptability. This dissertation work investigates new methods to bring QoE management to the level required by the real-time management of video services. In this paper, we highlight our main contributions. First, with the aim to perform a combined network-service assessment, we designed an experimental methodology able to map network QoS onto service QoE. Our methodology is meant to provide service and network providers with the means to pinpoint the working boundaries of their video-sets and to predict the effect of network policies on perception. Second, we developed a generic machine learning framework that allows deriving accurate predictive No Reference (NR) assessment metrics, based on simplistic NR QoE methods, that are functionally and computationally viable for real-time QoE evaluation. The tools, methods and conclusions derived from this dissertation form a solid contribution to QoE management of video streaming services, opening new avenues for further research.", "title": "" }, { "docid": "49a9b9bb7a040523378f5ed4363f9fe9", "text": "Pattern recognition is used to classify the input data into different classes based on extracted key features. Increasing the recognition rate of pattern recognition applications is a challenging task. Spiking neural networks, inspired by physiological brain architecture, are a neuromorphic hardware implementation of a network of neurons. A sample neuromorphic architecture has two layers of neurons, input and output. The number of input neurons is fixed based on the input data patterns, while the number of output neurons can vary. The goal of this paper is the performance evaluation of a neuromorphic architecture in terms of recognition rates using different numbers of output neurons. For this purpose, the N2S3 simulation environment and MNIST handwritten digits are used. Our simulation results show that the recognition rate for 20, 30, 50, 100, 200, and 300 output neurons is 70%, 74%, 79%, 85%, 89%, and 91%, respectively.", "title": "" }, { "docid": "9973de0dc30f8e8f7234819163a15db2", "text": "Jennifer L. Docktor, Natalie E. Strand, José P. Mestre, and Brian H. 
Ross Department of Physics, University of Wisconsin–La Crosse, La Crosse, Wisconsin 54601, USA Department of Physics, University of Illinois, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, Illinois 61801, USA Department of Educational Psychology, University of Illinois, Champaign, Illinois 61820, USA Department of Psychology, University of Illinois, Champaign, Illinois 61820, USA (Received 30 April 2015; published 1 September 2015)", "title": "" }, { "docid": "d8d52c5329ed7f187ba7ebfde45b750c", "text": "Lately enhancing the capability of network services automatically and dynamically through SDN and CDN/CDNi networks has become a recent topic of research. While, in one hand, these systems can be very beneficial to control and optimize the overall network services that studies the topology, traffic paths, packet handling and such others, on the other hand, the servers in such architectures can also be a potential target for DoS and/or DDoS attacks. We, therefore, propose a mechanism for the SDN based CDNi networks to securely deliver services with a multi-defense strategy against DDoS attacks. Addition of ALTO like servers in such architectures enables mapping a very big network to provide a bird's eye view. We propose an additional marking path map in the ALTO server to trace the request packets. The next defense is a protection switch to protect the main servers. A Management Information Base (MIB) is also proposed in the SDN controller to compare and assess the request traffic coming to the protection switches.", "title": "" } ]
scidocsrr
465bafd70ed8b80fd04d1e9c3bba37d7
Improving the Resolution of CNN Feature Maps Efficiently with Multisampling
[ { "docid": "bd3776d1dc36d6a91ea73d3c12ca326c", "text": "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: //github.com/tensorflow/models/tree/master/research/deeplab.", "title": "" } ]
[ { "docid": "d2bf33fcd8d1de5cca697ef97e774feb", "text": "The accuracy of Automated Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but WER has been found to have little correlation with human-subject performance on many applications. We propose a new captioning-focused evaluation metric that better predicts the impact of ASR recognition errors on the usability of automatically generated captions for people who are Deaf or Hard of Hearing (DHH). Through a user study with 30 DHH users, we compared our new metric with the traditional WER metric on a caption usability evaluation task. In a side-by-side comparison of pairs of ASR text output (with identical WER), the texts preferred by our new metric were preferred by DHH participants. Further, our metric had significantly higher correlation with DHH participants' subjective scores on the usability of a caption, as compared to the correlation between WER metric and participant subjective scores. This new metric could be used to select ASR systems for captioning applications, and it may be a better metric for ASR researchers to consider when optimizing ASR systems.", "title": "" }, { "docid": "11c397d0158350bccf741e34c1731a6c", "text": "The purpose of this study is to evaluate the impact of brand awareness to repurchase intention of customers with trilogy of emotions approach. The study population consisted if all the people in Yazd. As the research sample, 384 people who went to cell phone shopping centers in Yazd province responded to the questionnaire. Cronbach's alpha was used to determine the reliability of the questionnaire, and its values was 0.87. To examine the effects of brand awareness on purchase intention, structural equation modeling and AMOUS and SPSS softwares were used. The results of this study show that consumers cognition does not affect the purchase intention, but the customers’ conation and affection affect the re-purchase intention. In addition, brand awareness affects emotions (cognition, affection, and conation) and consumer purchase intention.", "title": "" }, { "docid": "70830fc4130b4c3281f596e8d7d2529e", "text": "In 1948 Shannon developed fundamental limits on the efficiency of communication over noisy channels. The coding theorem asserts that there are block codes with code rates arbitrarily close to channel capacity and probabilities of error arbitrarily close to zero. Fifty years later, codes for the Gaussian channel have been discovered that come close to these fundamental limits. There is now a substantial algebraic theory of error-correcting codes with as many connections to mathematics as to engineering practice, and the last 20 years have seen the construction of algebraic-geometry codes that can be encoded and decoded in polynomial time, and that beat the Gilbert–Varshamov bound. Given the size of coding theory as a subject, this review is of necessity a personal perspective, and the focus is reliable communication, and not source coding or cryptography. 
The emphasis is on connecting coding theories for Hamming and Euclidean space and on future challenges, specifically in data networking, wireless communication, and quantum information theory.", "title": "" }, { "docid": "0259066962633694e027b059567d722f", "text": "In order to improve real-time and robustness of the lane detection and get more ideal lane, in the image preprocessing, the filter is used in strengthening lane information of the binary image, reducing the noise and removing irrelevant information. The lane edge detection is by using Canny operator, then the corner detection method is used in getting the Image corners coordinates and finally using the RANSAC to circulation fit for corners, according to the optimal lanes parameters drawing lane. Through experiment of different scenes, this method can not only effectively rule out linear pixel interference of outside the road in multiple complex environments, but also quickly and accurately identify lane. This method improves the stability of the lane detection to a certain extent, which has good robust and real-time.", "title": "" }, { "docid": "e754c7c7821703ad298d591a3f7a3105", "text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. The results show that the proposed system is more scalable and efficient.", "title": "" }, { "docid": "c530181b0ed858cf8c2819ff1fcda1b4", "text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (BCNN), which has shown dramatic performance gains on certain fine-grained recognition problems [13]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [10]. This is the first widely available public benchmark designed specifically to test face identification in real-world images. 
It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computer face detection system, it does not have the bias inherent in such a database. As a result, it includes variations in pose that are more challenging than many other popular benchmarks. In our experiments, we demonstrate the performance of the model trained only on ImageNet, then fine-tuned on the training set of IJB-A, and finally use a moderate-sized external database, FaceScrub [15]. Another feature of this benchmark is that that the testing data consists of collections of samples of a particular identity. We consider two techniques for pooling samples from these collections to improve performance over using only a single image, and we report results for both methods. Our application of this new CNN to the IJB-A results in gains over the published baselines of this new database.", "title": "" }, { "docid": "772df08be1a3c3ea0854603727727c63", "text": "This paper presents a low profile ultrawideband tightly coupled phased array antenna with integrated feedlines. The aperture array consists of planar element pairs with fractal geometry. In each element these pairs are set orthogonal to each other for dual polarisation. The design is an array of closely capacitively coupled pairs of fractal octagonal rings. The adjustment of the capacitive load at the tip end of the elements and the strong mutual coupling between the elements, enables a wideband conformal performance. Adding a ground plane below the array partly compensates for the frequency variation of the array impedance, providing further enhancement in the array bandwidth. Additional improvement is achieved by placing another layer of conductive elements at a defined distance above the radiating elements. A Genetic Algorithm was scripted in MATLAB and combined with the HFSS simulator, providing an easy optimisation tool across the operational bandwidth for the array unit cell design parameters. The proposed antenna shows a wide-scanning ability with a low cross-polarisation level over a wide bandwidth.", "title": "" }, { "docid": "84ece888e2302d13775973f552c6b810", "text": "We present a qualitative study of hospitality exchange processes that take place via the online peer-to-peer platform Airbnb. We explore 1) what motivates individuals to monetize network hospitality and 2) how the presence of money ties in with the social interaction related to network hospitality. We approach the topic from the perspective of hosts -- that is, Airbnb users who participate by offering accommodation for other members in exchange for monetary compensation. We found that participants were motivated to monetize network hospitality for both financial and social reasons. Our analysis indicates that the presence of money can provide a helpful frame for network hospitality, supporting hosts in their efforts to accomplish desired sociability, select guests consistent with their preferences, and control the volume and type of demand. We conclude the paper with a critical discussion of the implications of our findings for network hospitality and, more broadly, for the so-called sharing economy.", "title": "" }, { "docid": "f596a018fd10374df79544063c509b9d", "text": "We contribute a dense SLAM system that takes a live stream of depth images as input and reconstructs nonrigid deforming scenes in real time, without templates or prior models. 
In contrast to existing approaches, we do not maintain any volumetric data structures, such as truncated signed distance function (TSDF) fields or deformation fields, which are performance and memory intensive. Our system works with a flat point (surfel) based representation of geometry, which can be directly acquired from commodity depth sensors. Standard graphics pipelines and general purpose GPU (GPGPU) computing are leveraged for all central operations: i.e., nearest neighbor maintenance, non-rigid deformation field estimation and fusion of depth measurements. Our pipeline inherently avoids expensive volumetric operations such as marching cubes, volumetric fusion and dense deformation field update, leading to significantly improved performance. Furthermore, the explicit and flexible surfel based geometry representation enables efficient tackling of topology changes and tracking failures, which makes our reconstructions consistent with updated depth observations. Our system allows robots maintain a scene description with nonrigidly deformed objects that potentially enables interactions with dynamic working environments.", "title": "" }, { "docid": "bade68b8f95fc0ae5a377a52c8b04b5c", "text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future", "title": "" }, { "docid": "08662b69f45cb46040a7fe41495ce4c1", "text": "Festivals have been proliferating worldwide, and local authorities are either supporting, or organizing small, local festivals to enhance the attractiveness of the destination for non-local visitors. 
Festivals are also very effective tools for developing destination image, revitalizing economy, culture, traditions, building civic pride, raising funds for special, civic or charitable projects, and providing opportunities for the community to deal with fine arts. This situation increases the importance of factors related to the satisfaction and loyalty of festival visitors, especially for small and local festivals. Therefore, drawing on the existing literature and an assumption that festivalscape is the most important contributor to visitors’ satisfaction and loyalty in the context of small, local and municipality-organized annual festivals, the present study aims to identify factors related to the festivalscape that determine visitors’ satisfaction and loyalty by using structural equation modeling. The study examines several variables as the antecedents of the festival visitors’ satisfaction and loyalty such as staff, festival area, food, souvenir, informational adequacy and convenience. As a result of the analysis, the study reveals three dimensions related to the festivalscape environmental factors which are food, festival area, and convenience and examines how these factors affect the visitors’ satisfaction and, in turn, their loyalty.", "title": "" }, { "docid": "06b86a3d7f324fba7d95c358e0c38a8f", "text": "Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.", "title": "" }, { "docid": "7ff2f2057d7e38f0258cd361c978eb70", "text": "Sustainable production of renewable energy is being hotly debated globally since it is increasingly understood that first generation biofuels, primarily produced from food crops and mostly oil seeds are limited in their ability to achieve targets for biofuel production, climate change mitigation and economic growth. These concerns have increased the interest in developing second generation biofuels produced from non-food feedstocks such as microalgae, which potentially offer greatest opportunities in the longer term. This paper reviews the current status of microalgae use for biodiesel production, including their cultivation, harvesting, and processing. The microalgae species most used for biodiesel production are presented and their main advantages described in comparison with other available biodiesel feedstocks. 
The various aspects associated with the design of microalgae production units are described, giving an overview of the current state of development of algae cultivation systems (photo-bioreactors and open ponds). Other potential applications and products from microalgae are also presented such as for biological sequestration of CO2, wastewater treatment, in human health, as food additive, and for aquaculture. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a95825b859dc9fc02bef853eff451d02", "text": "Mobile systems, such as smartphones and tablets, incorporate a diverse set of I/O devices, such as camera, audio devices, GPU, and sensors. This in turn results in a large number of diverse and customized device drivers running in the operating system kernel of mobile systems. These device drivers contain various bugs and vulnerabilities, making them a top target for kernel exploits [78]. Unfortunately, security analysts face important challenges in analyzing these device drivers in order to find, understand, and patch vulnerabilities. More specifically, using the state-of-the-art dynamic analysis techniques such as interactive debugging, fuzzing, and record-and-replay for analysis of these drivers is difficult, inefficient, or even completely inaccessible depending on the analysis. In this paper, we present Charm1, a system solution that facilitates dynamic analysis of device drivers of mobile systems. Charm’s key technique is remote device driver execution, which enables the device driver to execute in a virtual machine on a workstation. Charm makes this possible by using the actual mobile system only for servicing the low-level and infrequent I/O operations through a low-latency and customized USB channel. Charm does not require any specialized hardware and is immediately available to analysts. We show that it is feasible to apply Charm to various device drivers, including camera, audio, GPU, and IMU sensor drivers, in different mobile systems, including LG Nexus 5X, Huawei Nexus 6P, and Samsung Galaxy S7. In an extensive evaluation, we show that Charm enhances the usability of fuzzing of device drivers, enables record-andreplay of driver’s execution, and facilitates detailed vulnerability analysis. Altogether, these capabilities have enabled us to find 25 bugs in device drivers, analyze 3 existing ones, and even build an arbitrary-code-execution kernel exploit using one of them. 1Charm is open sourced: https://trusslab.github.io/charm/", "title": "" }, { "docid": "6ad6d141e2eb5b17f162eea327996265", "text": "• Use small, cross-functional teams managing smaller, prioritized tasks • Frequently test incremental project progress against user stories to ensure a match between final product and customer expectation • Utilize the best mix of agile, traditional, and hybrid techniques to meet specific project requirements, recognize and avoid pitfalls, and improve quality • Differentiate between frameworks such as Scrum, Extreme Programming (XP), and Lean, and select the most suitable for the specific domain and project", "title": "" }, { "docid": "f14515c943b95e5e47c7f4f95b93f6fe", "text": "The Architecture, Engineering & Construction (AEC) sector is a highly fragmented, data intensive, project based industry, involving a number of very different professions and organisations. Projects carried out within this sector involve collaboration between various people, using a variety of different systems. 
This, along with the industry’s strong data sharing and processing requirements, means that the management of building data is complex and challenging. This paper presents a solution to data sharing requirements of the AEC sector by utilising Cloud Computing. Our solution presents two key contributions, first a governance model for building data, based on extensive research and industry consultation. Second, a prototype implementation of this governance model, utilising the CometCloud autonomic Cloud Computing engine based on the Master/Worker paradigm. We have integrated our prototype with the 3D modelling software Google Sketchup. The approach and prototype presented has applicability in a number of other eScience related applications involving multi-disciplinary, collaborative working using Cloud Computing infrastructure.", "title": "" }, { "docid": "08cfbb5cc4540af3f67db65740e28bd1", "text": "The amount of contextual data collected, stored, mined, and shared is increasing exponentially. Street cameras, credit card transactions, chat and Twitter logs, e-mail, web site visits, phone logs and recordings, social networking sites, all are examples of data that persists in a manner not under individual control, leading some to declare the death of privacy. We argue here that the ability to generate convincing fake contextual data can be a basic tool in the fight to preserve privacy. One use for the technology is for an individual to make his actual data indistinguishable amongst a pile of false data.\n In this paper we consider two examples of contextual data, search engine query data and location data. We describe the current state of faking these types of data and our own efforts in this direction.", "title": "" }, { "docid": "5cf2c4239507b7d66cec3cf8fabf7f60", "text": "Government corruption is more prevalent in poor countries than in rich countries. This paper uses cross-industry heterogeneity in growth rates within Vietnam to test empirically whether growth leads to lower corruption. We find that it does. We begin by developing a model of government officials’ choice of how much bribe money to extract from firms that is based on the notion of inter-regional tax competition, and consider how officials’ choices change as the economy grows. We show that economic growth is predicted to decrease the rate of bribe extraction under plausible assumptions, with the benefit to officials of demanding a given share of revenue as bribes outweighed by the increased risk that firms will move elsewhere. This effect is dampened if firms are less mobile. Our empirical analysis uses survey data collected from over 13,000 Vietnamese firms between 2006 and 2010 and an instrumental variables strategy based on industry growth in other provinces. We find, first, that firm growth indeed causes a decrease in bribe extraction. Second, this pattern is particularly true for firms with strong land rights and those with operations in multiple provinces, consistent with these firms being more mobile. Our results suggest that as poor countries grow, corruption could subside “on its own,” and they demonstrate one type of positive feedback between economic growth and good institutions. ∗Contact information: Bai: [email protected]; Jayachandran: [email protected]; Malesky: [email protected]; Olken: [email protected]. 
We thank Lori Beaman, Raymond Fisman, Chang-Tai Hsieh, Supreet Kaur, Neil McCulloch, Andrei Shleifer, Matthew Stephenson, Eric Verhoogen, and Ekaterina Zhuravskaya for helpful comments.", "title": "" }, { "docid": "9259d540f93e06b3772eb05ac73369f2", "text": "A compact reconfigurable rectifying antenna (rectenna) has been proposed for 5.2- and 5.8-GHz microwave power transmission. The proposed rectenna consists of a frequency reconfigurable microstrip antenna and a frequency reconfigurable rectifying circuit. Here, the use of the odd-symmetry mode has significantly cut down the antenna size by half. By controlling the switches installed in the antenna and the rectifying circuit, the rectenna is able to switch operation between 5.2 and 5.8 GHz. Simulated conversion efficiencies of 70.5% and 69.4% are achievable at the operating frequencies of 5.2 and 5.8 GHz, respectively, when the rectenna is given with an input power of 16.5 dBm. Experiment has been conducted to verify the design idea. Due to fabrication tolerances and parametric deviation of the actual diode, the resonant frequencies of the rectenna are measured to be 4.9 and 5.9 GHz. When supplied with input powers of 16 and 15 dBm, the measured maximum conversion efficiencies of the proposed rectenna are found to be 65.2% and 64.8% at 4.9 and 5.9 GHz, respectively, which are higher than its contemporary counterparts.", "title": "" }, { "docid": "fe9724a94d1aa13e4fbefa7c88ac09dd", "text": "We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project1. This prototype is the first “Information State Update” (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and also has a fragmentary clarification feature. This paper describes the main components and functionality of the system, as well as the purposes and future use of the system, and surveys the research issues involved in its construction. Evaluation of this system (i.e. comparing the baseline system with handcoded vs. learnt dialogue policies) is ongoing, and the demonstration will show both.", "title": "" } ]
scidocsrr
49e658809e38ef884886729d22555c60
On Model Discovery For Hosted Data Science Projects
[ { "docid": "9e35b35e679b7344c568c0edbad67a62", "text": "Ground is an open-source data context service, a system to manage all the information that informs the use of data. Data usage has changed both philosophically and practically in the last decade, creating an opportunity for new data context services to foster further innovation. In this paper we frame the challenges of managing data context with basic ABCs: Applications, Behavior, and Change. We provide motivation and design guidelines, present our initial design of a common metamodel and API, and explore the current state of the storage solutions that could serve the needs of a data context service. Along the way we highlight opportunities for new research and engineering solutions. 1. FROM CRISIS TO OPPORTUNITY Traditional database management systems were developed in an era of risk-averse design. The technology itself was expensive, as was the on-site cost of managing it. Expertise was scarce and concentrated in a handful of computing and consulting firms. Two conservative design patterns emerged that lasted many decades. First, the accepted best practices for deploying databases revolved around tight control of schemas and data ingest in support of general-purpose accounting and compliance use cases. Typical advice from data warehousing leaders held that “There is no point in bringing data . . . into the data warehouse environment without integrating it” [15]. Second, the data management systems designed for these users were often built by a single vendor and deployed as a monolithic stack. A traditional DBMS included a consistent storage engine, a dataflow engine, a language compiler and optimizer, a runtime scheduler, a metadata catalog, and facilities for data ingest and queueing—all designed to work closely together. As computing and data have become orders of magnitude more efficient, changes have emerged for both of these patterns. Usage is changing profoundly, as expertise and control shifts from the central accountancy of an IT department to the domain expertise of “business units” tasked with extracting value from data [12]. The changes in economics and usage brought on the “three Vs” of Big Data: Volume, Velocity and Variety. Resulting best practices focus on open-ended schema-on-use data “lakes” and agile development, This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2017. CIDR ’17 January 8-11, 2017, Chaminade, CA, USA in support of exploratory analytics and innovative application intelligence [26]. Second, while many pieces of systems software that have emerged in this space are familiar, the overriding architecture is profoundly different. In today’s leading open source data management stacks, nearly all of the components of a traditional DBMS are explicitly independent and interchangeable. This architectural decoupling is a critical and under-appreciated aspect of the Big Data movement, enabling more rapid innovation and specialization. 1.1 Crisis: Big Metadata An unfortunate consequence of the disaggregated nature of contemporary data systems is the lack of a standard mechanism to assemble a collective understanding of the origin, scope, and usage of the data they manage. 
In the absence of a better solution to this pressing need, the Hive Metastore is sometimes used, but it only serves simple relational schemas—a dead end for representing a Variety of data. As a result, data lake projects typically lack even the most rudimentary information about the data they contain or how it is being used. For emerging Big Data customers and vendors, this Big Metadata problem is hitting a crisis point. Two significant classes of end-user problems follow directly from the absence of shared metadata services. The first is poor productivity. Analysts are often unable to discover what data exists, much less how it has been previously used by peers. Valuable data is left unused and human effort is routinely duplicated—particularly in a schema-on-use world with raw data that requires preparation. “Tribal knowledge” is a common description for how organizations manage this productivity problem. This is clearly not a systematic solution, and scales very poorly as organizations grow. The second problem stemming from the absence of a system to track metadata is governance risk. Data management necessarily entails tracking or controlling who accesses data, what they do with it, where they put it, and how it gets consumed downstream. In the absence of a standard place to store metadata and answer these questions, it is impossible to enforce policies and/or audit behavior. As a result, many administrators marginalize their Big Data stack as a playpen for non-critical data, and thereby inhibit both the adoption and the potential of new technologies. In our experiences deploying and managing systems in production, we have seen the need for a common service layer to support the capture, publishing and sharing of metadata information in a flexible way. The effort in this paper began by addressing that need. 1.2 Opportunity: Data Context The lack of metadata services in the Big Data stack can be viewed as an opportunity: a clean slate to rethink how we track and leverage modern usage of data. Storage economics and schema-on-use agility suggest that the Data Lake movement could go much farther than Data Warehousing in enabling diverse, widely-used central repositories of data that can adapt to new data formats and rapidly changing organizations. In that spirit, we advocate rethinking traditional metadata in a far more comprehensive sense. More generally, what we should strive to capture is the full context of data. To emphasize the conceptual shifts of this data context, and as a complement to the “three Vs” of Big Data, we introduce three key sources of information—the ABCs of Data Context. Each represents a major change from the simple metadata of traditional enterprise data management. Applications: Application context is the core information that describes how raw bits get interpreted for use. In modern agile scenarios, application context is often relativistic (many schemas for the same data) and complex (with custom code for data interpretation). Application context ranges from basic data descriptions (encodings, schemas, ontologies, tags), to statistical models and parameters, to user annotations. All of the artifacts involved—wrangling scripts, view definitions, model parameters, training sets, etc.—are critical aspects of application context. Behavior: This is information about how data was created and used over time. 
In decoupled systems, behavioral context spans multiple services, applications and formats and often originates from highvolume sources (e.g., machine-generated usage logs). Not only must we track upstream lineage— the data sets and code that led to the creation of a data object—we must also track the downstream lineage, including data products derived from this data object. Aside from data lineage, behavioral context includes logs of usage: the “digital exhaust” left behind by computations on the data. As a result, behavioral context metadata can often be larger than the data itself. Change: This is information about the version history of data, code and associated information, including changes over time to both structure and content. Traditional metadata focused on the present, but historical context is increasingly useful in agile organizations. This context can be a linear sequence of versions, or it can encompass branching and concurrent evolution, along with interactions between co-evolving versions. By tracking the version history of all objects spanning code, data, and entire analytics pipelines, we can simplify debugging and enable auditing and counterfactual analysis. Data context services represent an opportunity for database technology innovation, and an urgent requirement for the field. We are building an open-source data context service we call Ground, to serve as a central model, API and repository for capturing the broad context in which data gets used. Our goal is to address practical problems for the Big Data community in the short term and to open up opportunities for long-term research and innovation. In the remainder of the paper we illustrate the opportunities in this space, design requirements for solutions, and our initial efforts to tackle these challenges in open source. 2. DIVERSE USE CASES To illustrate the potential of the Ground data context service, we describe two concrete scenarios in which Ground can aid in data discovery, facilitate better collaboration, protect confidentiality, help diagnose problems, and ultimately enable new value to be captured from existing data. After presenting these scenarios, we explore the design requirements for a data context service. 2.1 Scenario: Context-Enabled Analytics This scenario represents the kind of usage we see in relatively technical organizations making aggressive use of data for machinelearning driven applications like customer targeting. In these organizations, data analysts make extensive use of flexible tools for data preparation and visualization and often have some SQL skills, while data scientists actively prototype and develop custom software for machine learning applications. Janet is an analyst in the Customer Satisfaction department at a large bank. She suspects that the social network behavior of customers can predict if they are likely to close their accounts (customer churn). Janet has access to a rich context-service-enabled data lake and a wide range of tools that she can use to assess her hypothesis. Janet begins by downloading a free sample of a social media feed. 
She uses an advanced data catalog application (we’ll call it “Catly”) which connects to Ground, recognizes the co", "title": "" }, { "docid": "66d35e0f9d725475d9d1e61a724cf5ea", "text": "As data-driven methods are becoming pervasive in a wide variety of disciplines, there is an urgent need to develop scalable and sustainable tools to simplify the process of data science, to make it easier for the users to keep track of the analyses being performed and datasets being generated, and to enable the users to understand and analyze the workflows. In this paper, we describe our vision of a unified provenance and metadata management system to support lifecycle management of complex collaborative data science workflows. We argue that the information about the analysis processes and data artifacts can, and should be, captured in a semi-passive manner; and we show that querying and analyzing this information can not only simplify bookkeeping and debugging tasks but also enable a rich new set of capabilities like identifying flaws in the data science process itself. It can also significantly reduce the user time spent in fixing post-deployment problems through automated analysis and monitoring. We have implemented a prototype system, PROVDB, on top of git and Neo4j, and we describe its key features and capabilities.", "title": "" }, { "docid": "c3317ea39578195cab8801b8a31b21b6", "text": "We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyperparameters. Inspired by the principle of “optimism under uncertainty,” we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on n samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.", "title": "" } ]
[ { "docid": "98fb03e0e590551fa9e7c82b827c78ed", "text": "This article describes on-going developments of the VENUS European Project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu) concerning the first mission to sea in Pianosa Island, Italy in October 2006. The VENUS project aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. In this paper we focus on the underwater photogrammetric approach used to survey the archaeological site of Pianosa. After a brief presentation of the archaeological context we shall see the calibration process in such a context. The next part of this paper is dedicated to the survey: it is divided into two parts: a DTM of the site (combining acoustic bathymetry and photogrammetry) and a specific artefact plotting dedicated to the amphorae present on the site. * Corresponding author. This is useful to know for communication with the appropriate person in cases with more than one author. ** http://cordis.europa.eu/ist/digicult/venus.htm or the project web site : http://www.venus-project.eu 1. VENUS, VIRTUAL EXPLORATION OF UNDERWATER SITES The VENUS project is funded by European Commission, Information Society Technologies (IST) programme of the 6th FP for RTD . It aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. (Chapman et alii, 2006). Underwater archaeological sites, for example shipwrecks, offer extraordinary opportunities for archaeologists due to factors such as darkness, low temperatures and a low oxygen rate which are favourable to preservation. On the other hand, these sites can not be experienced first hand and today are continuously jeopardised by activities such as deep trawling that destroy their surface layer. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. The project team plans to survey shipwrecks at various depths and to explore advanced methods and techniques of data acquisition through autonomous or remotely operated unmanned vehicles with innovative sonar and photogrammetry equipment. Research will also cover aspects such as data processing and storage, plotting of archaeological artefacts and information system management. This work will result in a series of best practices and procedures for collecting and storing data. Further, VENUS will develop virtual reality and augmented reality tools for the visualisation of an immersive interaction with a digital model of an underwater site. The model will be made accessible online, both as an example of digital preservation and for demonstrating new facilities of exploration in a safe, cost-effective and pedagogical environment. The virtual underwater site will provide archaeologists with an improved insight into the data and the general public with simulated dives to the site. The VENUS consortium, composed of eleven partners, is pooling expertise in various disciplines: archaeology and underwater exploration, knowledge representation and photogrammetry, virtual reality and digital data preservation. This paper focuses on the first experimentation in Pianosa Island, Tuscany, Italy. The document is structured as follows. 
A short description of the archaeological context, then the next section explains the survey method: calibration, collecting photographs using ROV and divers, photographs orientation and a particular way to measure amphorae with photogrammetry using archaeological knowledge. A section shows 3D results in VRML and finally we present the future planned work. 2. THE UNDERWATER ARCHAEOLOGICAL SITE OF PIANOSA ISLAND The underwater archaeological site of Pianosa, discovered in 1989 by volunteer divers (Giuseppe Adriani, Paolo Vaccari), is located at a depth of 35 m, close to the Scoglio della Scola, in XXI International CIPA Symposium, 01-06 October, Athens, Greece", "title": "" }, { "docid": "b9c253196a1cac6109e814e5d9a7cd97", "text": "In this digital age, most business is conducted electronically. This contemporary paradigm creates openings for potentially harmful unanticipated information security incidents of both a criminal or civil nature, with the potential to cause considerable direct and indirect damage to smaller businesses. Electronic evidence is fundamental to the successful handling of such incidents. If an organisation does not prepare proactively for such incidents it is highly likely that important relevant digital evidence will not be available. Not being able to respond effectively could be extremely damaging to smaller companies, as they are unable to absorb losses as easily as larger organisations. In order to prepare smaller businesses for incidents of this nature, the implementation of Digital Forensic Readiness policies and procedures is necessitated. Numerous varying factors such as the perceived high cost, as well as the current lack of forensic skills, make the implementation of Digital Forensic Readiness appear difficult if not infeasible for smaller organisations. In order to solve this problem it is necessary to develop a scalable and flexible framework for the implementation of Digital Forensic Readiness based on the individual risk profile of a small to medium enterprise (SME). This paper aims to determine, from literature, the concepts of Digital Forensic Readiness and how they apply to SMEs. Based on the findings, the aspects of Digital Forensics and organisational characteristics that should be included in such a framework is highlighted.", "title": "" }, { "docid": "6ffbb212bec4c90c6b37a9fde3fd0b4c", "text": "In this paper, we address a new research problem on active learning from data streams where data volumes grow continuously and labeling all data is considered expensive and impractical. The objective is to label a small portion of stream data from which a model is derived to predict newly arrived instances as accurate as possible. In order to tackle the challenges raised by data streams' dynamic nature, we propose a classifier ensembling based active learning framework which selectively labels instances from data streams to build an accurate classifier. A minimal variance principle is introduced to guide instance labeling from data streams. In addition, a weight updating rule is derived to ensure that our instance labeling process can adaptively adjust to dynamic drifting concepts in the data. 
Experimental results on synthetic and real-world data demonstrate the performances of the proposed efforts in comparison with other simple approaches.", "title": "" }, { "docid": "d6587e4d37742c25355296da3a718c41", "text": "Vehicular Ad hoc Networks (VANETs) are classified as an application of Mobile Ad-hoc Networks (MANETs) that has the potential in improving road safety and providing Intelligent Transportation System (ITS). Vehicular communication system facilitates communication devices for exchange of information among vehicles and vehicles and Road Side Units (RSUs). The era of vehicular adhoc networks is now gaining attention and momentum. Researchers and developers have built VANET simulation tools to allow the study and evaluation of various routing protocols, various emergency warning protocols and others VANET applications. Simulation of VANET routing protocols and its applications is fundamentally different from MANETs simulation because in VANETs, vehicular environment impose new issues and requirements, such as multi-path fading, roadside obstacles, trip models, traffic flow models, traffic lights, traffic congestion, vehicular speed and mobility, drivers behaviour etc. This paper presents a comparative study of various publicly available VANET simulation tools. Currently, there are network simulators, VANET mobility generators and VANET simulators are publicly available. In particular, this paper contrast their software characteristics, graphical user interface, accuracy of simulation, ease of use, popularity, input requirements, output visualization capabilities etc. Keywords-Ad-hoc network, ITS (Intelligent Transportation System), MANET, Simulation, VANET.", "title": "" }, { "docid": "fd543534d6a9cf10abb2f073cec41fdb", "text": "Article history: Available online 26 October 2012 We present an O(√n log n)-approximation algorithm for the problem of finding the sparsest spanner of a given directed graph G on n vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph G = (V, E) with nonnegative edge lengths d : E → R≥0 and a stretch k ≥ 1, a subgraph H = (V, E_H) is a k-spanner of G if for every edge (s, t) ∈ E, the graph H contains a path from s to t of length at most k · d(s, t). The previous best approximation ratio was Õ(n^(2/3)), due to Dinitz and Krauthgamer (STOC ’11). We also improve the approximation ratio for the important special case of directed 3-spanners with unit edge lengths from Õ(√n) to O(n^(1/3) log n). The best previously known algorithms for this problem are due to Berman, Raskhodnikova and Ruan (FSTTCS ’10) and Dinitz and Krauthgamer. The approximation ratio of our algorithm almost matches Dinitz and Krauthgamer’s lower bound for the integrality gap of a natural linear programming relaxation. Our algorithm directly implies an O(n^(1/3) log n)-approximation for the 3-spanner problem on undirected graphs with unit lengths. An easy O(√n)-approximation algorithm for this problem has been the best known for decades. Finally, we consider the Directed Steiner Forest problem: given a directed graph with edge costs and a collection of ordered vertex pairs, find a minimum-cost subgraph that contains a path between every prescribed pair. We obtain an approximation ratio of O(n^(2/3+ε)) for any constant ε > 0, which improves the O(n · min(n^(4/5), m^(2/3))) ratio due to Feldman, Kortsarz and Nutov (JCSS’12). © 2012 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "70ed5a9f324bfd601de3759ae0b94bd1", "text": "BACKGROUND\nBiomarkers have many distinct purposes, and depending on their intended use, the validation process varies substantially.\n\n\nPURPOSE\nThe goal of this article is to provide an introduction to the topic of biomarkers, and then to discuss three specific types of biomarkers, namely, prognostic, predictive, and surrogate.\n\n\nRESULTS\nA principle challenge for biomarker validation from a statistical perspective is the issue of multiplicity. In general, the solution to this multiplicity challenge is well known to statisticians: pre-specification and replication. Critical requirements for prognostic marker validation include uniform treatment, complete follow-up, unbiased case selection, and complete ascertainment of the many possible confounders that exist in the context of an observational sample. In the case of predictive biomarker validation, observational data are clearly inadequate and randomized controlled trials are mandatory. Within the context of randomization, strategies for predictive marker validation can be grouped into two categories: retrospective versus prospective validation. The critical validation criteria for a surrogate endpoint is to ensure that if a trial uses a surrogate endpoint, the trial will result in the same inferences as if the trial had observed the true endpoint. The field of surrogate endpoint validation has now moved to the multi-trial or meta-analytic setting as the preferred method.\n\n\nCONCLUSIONS\nBiomarkers are a highly active research area. For all biomarker developmental and validation studies, the importance of fundamental statistical concepts remains the following: pre-specification of hypotheses, randomization, and replication. Further statistical methodology research in this area is clearly needed as we move forward.", "title": "" }, { "docid": "f86eea3192fe3dd8548cec52e53553e0", "text": "Acromioclavicular (AC) joint separations are common injuries of the shoulder girdle, especially in the young and active population. Typically the mechanism of this injury is a direct force against the lateral aspect of the adducted shoulder, the magnitude of which affects injury severity. While low-grade injuries are frequently managed successfully using non-surgical measures, high-grade injuries frequently warrant surgical intervention to minimize pain and maximize shoulder function. Factors such as duration of injury and activity level should also be taken into account in an effort to individualize each patient's treatment. A number of surgical techniques have been introduced to manage symptomatic, high-grade injuries. The purpose of this article is to review the important anatomy, biomechanical background, and clinical management of this entity.", "title": "" }, { "docid": "82e0394b9b5c88c14259fabd111ddc46", "text": "In recent years, the venous flap has been highly regarded in microsurgical and reconstructive surgeries, especially in the reconstruction of hand and digit injuries. It is easily designed and harvested with good quality. It is thin and pliable, without the need of sacrificing a major artery at the donor site, and has no limitation on the donor site. It can be transferred not only as a pure skin flap, but also as a composite flap including tendons and nerves as well as vein grafts. All these advantages make it an optimal candidate for hand and digit reconstruction when conventional flaps are limited or unavailable. 
In this article, we review its classifications and the selection of donor sites, update its clinical applications, and summarize its indications for all types of venous flaps in hand and digit reconstruction.", "title": "" }, { "docid": "6717e438376a78cb177bfc3942b6eec6", "text": "Decisions are often guided by generalizing from past experiences. Fundamental questions remain regarding the cognitive and neural mechanisms by which generalization takes place. Prior data suggest that generalization may stem from inference-based processes at the time of generalization. By contrast, generalization may emerge from mnemonic processes occurring while premise events are encoded. Here, participants engaged in a two-phase learning and generalization task, wherein they learned a series of overlapping associations and subsequently generalized what they learned to novel stimulus combinations. Functional MRI revealed that successful generalization was associated with coupled changes in learning-phase activity in the hippocampus and midbrain (ventral tegmental area/substantia nigra). These findings provide evidence for generalization based on integrative encoding, whereby overlapping past events are integrated into a linked mnemonic representation. Hippocampal-midbrain interactions support the dynamic integration of experiences, providing a powerful mechanism for building a rich associative history that extends beyond individual events.", "title": "" }, { "docid": "4434d1d0cbf30d62bcbbd7cf14989034", "text": "The EDT2 750V uses a micro pattern trench cell with a narrow mesa for reducing the on-state losses with a tailored channel width for short circuit robustness. To account for high system stray inductances (Lstray) and currents for Full or Hybrid Electric Vehicle inverter applications, it features a 750V voltage rating compared to the predecessor IGBT3 650V by an optimized vertical structure and proper plasma shaping. This plasma distribution not only determines the performance tradeoff between on-state and switching losses, but at the same time defines the surge voltage for a given Lstray*I in the application as visualized in a switch-off loss vs. surge voltage trade-off diagram. Shaping of the feedback capacitance Cgc optimizes the tunability of the switching slopes by means of an external gate resistor for an easier adaption to a wider range of system inductances with low losses.", "title": "" }, { "docid": "064cedd8f636b3d3c004d68eb85a7166", "text": "This paper presents a strategy to generate generic summary of documents using Probabilistic Latent Semantic Indexing. Generally a document contains several topics rather than a single one. Summaries created by human beings tend to cover several topics to give the readers an overall idea about the original document. Hence we can expect that a summary containing sentences from better part of the topic spectrum should make a better summary. PLSI has proven to be an effective method in topic detection. In this paper we present a method for creating extractive summary of the document by using PLSI to analyze the features of document such as term frequency and graph structure. We also show our results, which was evaluated using ROUGE, and compare the results with other techniques, proposed in the past.", "title": "" }, { "docid": "2f88356c3a1ab60e3dd084f7d9630c70", "text": "Recently, some E-commerce sites launch a new interaction box called Tips on their mobile apps. 
Users can express their experience and feelings or provide suggestions using short texts typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors for improving the rating prediction, no existing works consider tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality simulating user experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to \"translate'' user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user experience and feelings.", "title": "" }, { "docid": "0db1e1304ec2b5d40790677c9ce07394", "text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.", "title": "" }, { "docid": "5c5c21bd0c50df31c6ccec63d864568c", "text": "Intellectual Property issues (IP) is a concern that refrains companies to cooperate in whatever of Open Innovation (OI) processes. Particularly, SME consider open innovation as uncertain, risky processes. Despite the opportunities that online OI platforms offer, SMEs have so far failed to embrace them, and proved reluctant to OI. We intend to find whether special collaborative spaces that facilitate a sort of preventive idea claiming, explicit claiming evolution of defensive publication, as so far patents and publications for prevailing innovation, can be the right complementary instruments in OI as to when stronger IP protection regimes might drive openness by SME in general. These spaces, which we name NIR (Networking Innovation Rooms), are a practical, smart paradigm to boost OI for SME. There users sign smart contracts as NDA which takes charge of timestamping any IP disclosure or creation and declares what corrective actions (if they might apply) might be taken for unauthorised IP usage or disclosure of any of the NDA signers. 
With Blockchain, a new technology emerges which enables decentralised, fine-grained IP management for OI.", "title": "" }, { "docid": "7db7d64ce262c5e4681d91c6faf29f67", "text": "Conceptual natural language processing systems usually rely on case frame instantiation to recognize events and role objects in text. But generating a good set of case frames for a domain is timeconsuming, tedious, and prone to errors of omission. We have developed a corpus-based algorithm for acquiring conceptual case frames empirically from unannotated text. Our algorithm builds on previous research on corpus-based methods for acquiring extraction patterns and semantic lexicons. Given extraction patterns and a semantic lexicon for a domain, our algorithm learns semantic preferences for each extraction pattern and merges the syntactically compatible patterns to produce multi-slot case frames with selectional restrictions. The case frames generate more cohesive output and produce fewer false hits than the original extraction patterns. Our system requires only preclassified training texts and a few hours of manual review to filter the dictionaries, demonstrating that conceptual case frames can be acquired from unannotated text without special training resources.", "title": "" }, { "docid": "77bdd6c3f5065ef4abfaa70d34bc020a", "text": "The discovery of disease-causing mutations typically requires confirmation of the variant or gene in multiple unrelated individuals, and a large number of rare genetic diseases remain unsolved due to difficulty identifying second families. To enable the secure sharing of case records by clinicians and rare disease scientists, we have developed the PhenomeCentral portal (https://phenomecentral.org). Each record includes a phenotypic description and relevant genetic information (exome or candidate genes). PhenomeCentral identifies similar patients in the database based on semantic similarity between clinical features, automatically prioritized genes from whole-exome data, and candidate genes entered by the users, enabling both hypothesis-free and hypothesis-driven matchmaking. Users can then contact other submitters to follow up on promising matches. PhenomeCentral incorporates data for over 1,000 patients with rare genetic diseases, contributed by the FORGE and Care4Rare Canada projects, the US NIH Undiagnosed Diseases Program, the EU Neuromics and ANDDIrare projects, as well as numerous independent clinicians and scientists. Though the majority of these records have associated exome data, most lack a molecular diagnosis. PhenomeCentral has already been used to identify causative mutations for several patients, and its ability to find matching patients and diagnose these diseases will grow with each additional patient that is entered.", "title": "" }, { "docid": "970fed17476873ab69b0359f6d74ab40", "text": "The smart grid is an innovative energy network that will improve the conventional electrical grid network to be more reliable, cooperative, responsive, and economical. Within the context of the new capabilities, advanced data sensing, communication, and networking technology will play a significant role in shaping the future of the smart grid. The smart grid will require a flexible and efficient framework to ensure the collection of timely and accurate information from various locations in power grid to provide continuous and reliable operation. This article presents a tutorial on the sensor data collection, communications, and networking issues for the smart grid. 
First, the applications of data sensing in the smart grid are reviewed. Then, the requirements for data sensing and collection, the corresponding sensors and actuators, and the communication and networking architecture are discussed. The communication technologies and the data communication network architecture and protocols for the smart grid are described. Next, different emerging techniques for data sensing, communications, and sensor data networking are reviewed. The issues related to security of data sensing and communications in the smart grid are then discussed. To this end, the standardization activities and use cases related to data sensing and communications in the smart grid are summarized. Finally, several open issues and challenges are outlined. Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "b772ea661f263bbe4e012547f9e14539", "text": "MOTIVATION\nMany problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology.\n\n\nRESULTS\nWe study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors.\n\n\nCONCLUSIONS\nWe have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments.\n\n\nAVAILABILITY\nhttp://www.dbs.ifi.lmu.de/~borgward/MMD.", "title": "" }, { "docid": "8b5a06aab3e4bc184733eb108c1706ae", "text": "Profiling data to determine metadata about a given dataset is an important and frequent activity of any IT professional and researcher and is necessary for various use-cases. It encompasses a vast array of methods to examine datasets and produce metadata. Among the simpler results are statistics, such as the number of null values and distinct values in a column, its data type, or the most frequent patterns of its data values. Metadata that are more difficult to compute involve multiple columns, namely correlations, unique column combinations, functional dependencies, and inclusion dependencies. Further techniques detect conditional properties of the dataset at hand. This survey provides a classification of data profiling tasks and comprehensively reviews the state of the art for each class. In addition, we review data profiling tools and systems from research and industry. We conclude with an outlook on the future of data profiling beyond traditional profiling tasks and beyond relational databases.", "title": "" } ]
scidocsrr
1ac039c8bed24d917679957a1907bfa9
Learning an Optimizer for Image Deconvolution
[ { "docid": "34b7073f947888694053cb421544cb37", "text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "title": "" }, { "docid": "a77eddf9436652d68093946fbe1d2ed0", "text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.", "title": "" } ]
[ { "docid": "5e4d19e0243c1cbd29901c4bf1bc6005", "text": "In the current world, sports produce considerable data such as players skills, game results, season matches, leagues management, etc. The big challenge in sports science is to analyze this data to gain a competitive advantage. The analysis can be done using several techniques and statistical methods in order to produce valuable information. The problem of modeling soccer data has become increasingly popular in the last few years, with the prediction of results being the most popular topic. In this paper, we propose a Bayesian Model based on rank position and shared history that predicts the outcome of future soccer matches. The model was tested using a data set containing the results of over 200,000 soccer matches from different soccer leagues around the world.", "title": "" }, { "docid": "c8d0e702114386e5782bf6df934cccf2", "text": "The contradiction between the stated preferences of social media users toward privacy and actual privacy behaviors has suggested a willingness to trade privacy regulation for social goals. This study employs data from a survey of 361 social media users, which collected data on privacy attitudes, online privacy strategies and behaviors, and the uses and gratifications that social media experiences bring. Using canonical correlation, it examines in detail how underlying dimensions of privacy concern relate to specific contexts of social media use, and how these contexts relate to various domains of privacyprotecting behaviors. In addition, this research identifies how specific areas of privacy concern relate to levels of privacy regulation, offering new insight into the privacy paradox. In doing so, this study lends greater nuance to how the dynamic of privacy and sociality is understood and enacted by users, and how privacy management and the motivations underlying media use intersect.", "title": "" }, { "docid": "bbb91e336f0125c0e8a0358f6afc9ef1", "text": "In this paper, we study a new learning paradigm for neural machine translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it as AdversarialNMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed 2D convolutional neural network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. Experimental results on English→French and German→English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.", "title": "" }, { "docid": "d3ce299b7df463f5040c1115c18c8663", "text": "The elbow patients herein discussed feature common soft tissue conditions such as tennis elbow, golfers' elbow and olecranon bursitis. Relevant anatomical structures for these conditions can easily be identified and demonstrated by cross examination by instructors and participants. Patients usually present rotator cuff tendinopathy, frozen shoulder, axillary neuropathy and suprascapular neuropathy. 
The structures involved in tendinopathy and frozen shoulder can be easily identified and demonstrated under normal conditions. The axillary and the suprascapular nerves have surface landmarks but cannot be palpated. In neuropathy however, physical findings in both neuropathies are pathognomonic and will be discussed.", "title": "" }, { "docid": "94a2cd3e147d48fac77a2063a82a5981", "text": "Multiview learning has shown promising potential in many applications. However, most techniques are focused on either view consistency, or view diversity. In this paper, we introduce a novel multiview boosting algorithm, called Boost.SH, that computes weak classifiers independently of each view but uses a shared weight distribution to propagate information among the multiple views to ensure consistency. To encourage diversity, we introduce randomized Boost.SH and show its convergence to the greedy Boost.SH solution in the sense of minimizing regret using the framework of adversarial multiarmed bandits. We also introduce a variant of Boost.SH that combines decisions from multiple experts for recommending views for classification. We propose an expert strategy for multiview learning based on inverse variance, which explores both consistency and diversity. Experiments on biometric recognition, document categorization, multilingual text, and yeast genomic multiview data sets demonstrate the advantage of Boost.SH (85%) compared with other boosting algorithms like AdaBoost (82%) using concatenated views and substantially better than a multiview kernel learning algorithm (74%).", "title": "" }, { "docid": "3875aba9d886a741098df8f5527ce49b", "text": "Underactuated systems offer compact design with easy actuation and control but at the cost of limited stable configurations and reduced dexterity compared to the directly driven and fully actuated systems. Here, we propose a compact origami-based design in which we can modulate the material stiffness of the joints and thereby control the stable configurations and the overall stiffness in an underactuated robot. The robotic origami, robogami, design uses multiple functional layers in nominally two-dimensional robots to achieve the desired functionality. To control the stiffness of the structure, we adjust the elastic modulus of a shape memory polymer using an embedded customized stretchable heater. We study the actuation of a robogami finger with three joints and determine its stable configurations and contact forces at different stiffness settings. We monitor the configuration of the finger using feedback from customized curvature sensors embedded in each joint. A scaled down version of the design is used in a two-fingered gripper and different grasp modes are achieved by activating different sets of joints.", "title": "" }, { "docid": "59ce394adc1fb8abe6047e5911c1a4a9", "text": "This paper presents a system that transforms the speech signals of speakers with physical speech disabilities into a more intelligible form that can be more easily understood by listeners. These transformations are based on the correction of pronunciation errors by the removal of repeated sounds, the insertion of deleted sounds, the devoicing of unvoiced phonemes, the adjustment of the tempo of speech by phase vocoding, and the adjustment of the frequency characteristics of speech by anchor-based morphing of the spectrum. These transformations are based on observations of disabled articulation including improper glottal voicing, lessened tongue movement, and lessened energy produced by the lungs. 
This system is a substantial step towards full automation in speech transformation without the need for expert or clinical intervention. Among human listeners, recognition rates increased up to 191% (from 21.6% to 41.2%) relative to the original speech by using the module that corrects pronunciation errors. Several types of modified dysarthric speech signals are also supplied to a standard automatic speech recognition system. In that study, the proportion of words correctly recognized increased up to 121% (from 72.7% to 87.9%) relative to the original speech, across various parameterizations of the recognizer. This represents a significant advance towards human-to-human assistive communication software and human–computer interaction. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "aad3945a69f57049c052bcb222f1b772", "text": "The chapter 1 on Social Media and Social Computing has documented the nature and characteristics of social networks and community detection. The explanation about the emerging of social networks and their properties constitute this chapter followed by a discussion on social community. The nodes, ties and influence in the social networks are the core of the discussion in the second chapter. Centrality is the core discussion here and the degree of centrality and its measure is explained. Understanding network topology is required for social networks concepts.", "title": "" }, { "docid": "470db66b9bcff16a9a559810ce352dfa", "text": "Abstract The state of security on the Internet is poor and progress toward increased protection is slow. This has given rise to a class of action referred to as “Ethical Hacking”. Companies are releasing software with little or no testing and no formal verification and expecting consumers to debug their product for them. For dot.com companies time-to-market is vital, security is not perceived as a marketing advantage, and implementing a secure design process an expensive sunk expense such that there is no economic incentive to produce bug-free software. There are even legislative initiatives to release software manufacturers from legal responsibility to their defective software.", "title": "" }, { "docid": "86a4a75135878f0cc7dc83d0742f5791", "text": "The past few years have seen an explosion of interest in the epigenetics of cancer. This has been a consequence of both the exciting coalescence of the chromatin and DNA methylation fields, and the realization that DNA methylation changes are involved in human malignancies. The ubiquity of DNA methylation changes has opened the way to a host of innovative diagnostic and therapeutic strategies. Recent advances attest to the great promise of DNA methylation markers as powerful future tools in the clinic.", "title": "" }, { "docid": "c9171bf5a2638b35ff7dc9c8e6104d30", "text": "Dimensionality reduction is an important aspect in the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction technique. The application of variants of LDA technique for solving small sample size (SSS) problem can be found in many research areas e.g. face recognition, bioinformatics, text recognition, etc. The improvement of the performance of variants of LDA technique has great potential in various fields of research. In this paper, we present an overview of these methods. We covered the type, characteristics and taxonomy of these methods which can overcome SSS problem. 
We have also highlighted some important datasets and software/packages.", "title": "" }, { "docid": "b64a2e6bb533043a48b7840b72f71331", "text": "Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently running on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the difficulties raised by this integration.", "title": "" }, { "docid": "e18131e86ee96edf815cbf8f80f3ab24", "text": "This dissertation investigates the use of hierarchy and problem decomposition as a means of solving large, stochastic, sequential decision problems. These problems are framed as Markov decision problems (MDPs). The new technical content of this dissertation begins with a discussion of the concept of temporal abstraction. Temporal abstraction is shown to be equivalent to the transformation of a policy defined over a region of an MDP to an action in a semi-Markov decision problem (SMDP). Several algorithms are presented for performing this transformation efficiently. This dissertation introduces the HAM method for generating hierarchical, temporally abstract actions. This method permits the partial specification of abstract actions in a way that corresponds to an abstract plan or strategy. Abstract actions specified as HAMs can be optimally refined for new tasks by solving a reduced SMDP. The formal results show that traditional MDP algorithms can be used to optimally refine HAMs for new tasks. This can be achieved in much less time than it would take to learn a new policy for the task from scratch. HAMs complement some novel decomposition algorithms that are presented in this dissertation. These algorithms work by constructing a cache of policies for different regions of the MDP and then optimally combining the cached solution to produce a global solution that is within provable bounds of the optimal solution. Together, the methods developed in this dissertation provide important tools for producing good policies for large MDPs. Unlike some ad-hoc methods, these methods provide strong formal guarantees. They use prior knowledge in a principled way, and they reduce larger MDPs into smaller ones while maintaining a well-defined relationship between the smaller problem and the larger problem.", "title": "" }, { "docid": "d0641206af1afeab7143fa82d56ba727", "text": "This paper outlines possible evolution trends of e-learning, supported by most recent advancements in the World Wide Web. Specifically, we consider a situation in which the Semantic Web technology and tools are widely adopted, and fully integrated within a context of applications exploiting the Internet of Things paradigm. Such a scenario will dramatically impact on learning activities, as well as on teaching strategies and instructional design methodology. 
In particular, the models characterized by learning pervasiveness and interactivity will be greatly empowered.", "title": "" }, { "docid": "25eedd2defb9e0a0b22e44195a4b767b", "text": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.", "title": "" }, { "docid": "3ed5a33db314d464973577c9a4442d33", "text": "Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Cameraequipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.", "title": "" }, { "docid": "cf5f3db56feb7d46c4806be434f6a665", "text": "Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, a technical knowledge comparable to those who create and distribute this propaganda is necessary to investigate the phenomenon. However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Bigdata research is necessary to understand the sociotechnical issue of computational propaganda and the influence of technology in politics. 
However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates. Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of its creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means. Propaganda has a long history. Scholars who study propaganda as an offline or historical phenomenon have long been split over whether the existence of propaganda is necessarily detrimental to the functioning of democracies. However, the rise of the Internet and, in particular, social media has profoundly changed the landscape of propaganda. It has opened the creation and dissemination of propaganda messages, which were once the province of states and large institutions, to a wide variety of individuals and groups. It has allowed cross-border computational propaganda and interference in domestic political processes by foreign states. The anonymity of the Internet has allowed stateproduced propaganda to be presented as if it were not produced by state actors. The Internet has also provided new affordances for the efficient dissemination of propaganda, through the manipulation of the algorithms and processes that govern online information and through audience targeting based on big data analytics. The social effects of the changing nature of propaganda are only just beginning to be understood, and the advancement of this understanding is complicated by the unprecedented marrying of the social and the technical that the Internet age has enabled. The articles in this special issue showcase the state of the art in the use of big data in the study of computational propaganda and the influence of social media on politics. This rapidly emerging field represents a new clash of the highly social and highly technical in both", "title": "" }, { "docid": "479c250bd9284ab1a216a11fa5199f61", "text": "Two Gram-stain-negative, non-motile, non-spore-forming, rod-shaped bacterial strains, designated 3B-2(T) and 10AO(T), were isolated from a sand sample collected from the west coast of the Korean peninsula by using low-nutrient media, and their taxonomic positions were investigated in a polyphasic study. The strains did not grow on marine agar. They grew optimally at 30 °C and pH 6.5-7.5. Strains 3B-2(T) and 10AO(T) shared 97.5 % 16S rRNA gene sequence similarity and mean level of DNA-DNA relatedness of 12 %. In phylogenetic trees based on 16S rRNA gene sequences, strains 3B-2(T) and 10AO(T), together with several uncultured bacterial clones, formed independent lineages within the evolutionary radiation encompassed by the phylum Bacteroidetes. 
Strains 3B-2(T) and 10AO(T) contained MK-7 as the predominant menaquinone and iso-C(15 : 0) and C(16 : 1)ω5c as the major fatty acids. The DNA G+C contents of strains 3B-2(T) and 10AO(T) were 42.8 and 44.6 mol%, respectively. Strains 3B-2(T) and 10AO(T) exhibited very low levels of 16S rRNA gene sequence similarity (<85.0 %) to the type strains of recognized bacterial species. These data were sufficient to support the proposal that the novel strains should be differentiated from previously known genera of the phylum Bacteroidetes. On the basis of the data presented, we suggest that strains 3B-2(T) and 10AO(T) represent two distinct novel species of a new genus, for which the names Ohtaekwangia koreensis gen. nov., sp. nov. (the type species; type strain 3B-2(T)  = KCTC 23018(T)  = CCUG 58939(T)) and Ohtaekwangia kribbensis sp. nov. (type strain 10AO(T)  = KCTC 23019(T)  = CCUG 58938(T)) are proposed.", "title": "" }, { "docid": "f869114bdde885da6b384fa98ec03e94", "text": "It is increasingly recognized that the human planum temporale is not a dedicated language processor, but is in fact engaged in the analysis of many types of complex sound. We propose a model of the human planum temporale as a computational engine for the segregation and matching of spectrotemporal patterns. The model is based on segregating the components of the acoustic world and matching these components with learned spectrotemporal representations. Spectrotemporal information derived from such a 'computational hub' would be gated to higher-order cortical areas for further processing, leading to object recognition and the perception of auditory space. We review the evidence for the model and specific predictions that follow from it.", "title": "" }, { "docid": "947a8fde673e41df6937a95c87e9316f", "text": "The multilevel thresholding is an important technique for image processing and pattern recognition. The maximum entropy thresholding has been widely applied in the literature. In this paper, a new multilevel MET algorithm based on the technology of the firefly algorithm is proposed. This proposed method is called the maximum entropy based firefly thresholding method. Four different methods are implemented for comparing to this proposed method: the exhaustive search, the particle swarm optimization, the hybrid cooperative-comprehensive learning based PSO algorithm and the honey bee mating optimization. The experimental results demonstrated that the proposed MEFFT algorithm can search for multiple thresholds which are very close to the optimal ones examined by the exhaustive search method. Compared to the PSO and HCOCLPSO, the segmentation results of using the MEFFT algorithm is significantly improved and the computation time of the proposed MEFFT algorithm is shortest.", "title": "" } ]
scidocsrr
a98cccbdc5cbdfc539a8746fcb96cdf7
Radar Cross Section Reduction of a Microstrip Antenna Based on Polarization Conversion Metamaterial
[ { "docid": "6545ea7d281be5528d9217f3b891a5da", "text": "In this paper, a novel metamaterial absorber working in the C band frequency range has been proposed to reduce the in-band Radar Cross Section (RCS) of a typical planar antenna. The absorber is first designed in the shape of a hexagonal ring structure having dipoles at the corresponding arms of the rings. The various geometrical parameters of the proposed metamaterial structure have first been optimized using the numerical simulator, and the structure is fabricated and tested. In the second step, the metamaterial absorber is loaded on a microstrip patch antenna working in the same frequency band as that of the metamaterial absorber to reduce the in-band Radar Cross Section (RCS) of the antenna. The prototype is simulated, fabricated and tested. The simulated results show the 99% absorption of the absorber at 6.35 GHz which is in accordance with the measured data. A close agreement between the simulated and the measured results shows that the proposed absorber can be used for the RCS reduction of the planar antenna in order to improve its in-band stealth performance.", "title": "" } ]
[ { "docid": "543dc9543221b507746ebf1fe8d14928", "text": "Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models’ usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used Information Criterion (ICs) used for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture models (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n D 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.", "title": "" }, { "docid": "ee223b75a3a99f15941e4725d261355e", "text": "BACKGROUND\nIn Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups.\n\n\nOBJECTIVE\nThe objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels.\n\n\nDESIGN\nWe estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition.\n\n\nRESULTS\nAt the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. 
At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected.\n\n\nCONCLUSIONS\nAlthough some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions.", "title": "" }, { "docid": "8e10d20723be23d699c0c581c529ee19", "text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.", "title": "" }, { "docid": "3d0e5f0dbca6406b8b8eda4447ee6474", "text": "We describe a watermarking scheme for ownership verification and authentication. Depending on the desire of the user, the watermark can be either visible or invisible. The scheme can detect any modification made to the image and indicate the specific locations that have been modified. If the correct key is specified in the watermark extraction procedure, then an output image is returned showing a proper watermark, indicating the image is authentic and has not been changed since the insertion of the watermark. Any modification would be reflected in a corresponding error in the watermark. If the key is incorrect, or if the image was not watermarked, or if the watermarked image is cropped, the watermark extraction algorithm will return an image that resembles random noise. Since it requires a user key during both the insertion and the extraction procedures, it is not possible for an unauthorized user to insert a new watermark or alter the existing watermark so that the resulting image will pass the test. We present secret key and public key versions of the technique.", "title": "" }, { "docid": "a2688a1169babed7e35a52fa875505d4", "text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. 
The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.", "title": "" }, { "docid": "7ca0ceb19e47f9848db1a5946c19d561", "text": "This thesis performs an empirical analysis of Word2Vec by comparing its output to WordNet, a well-known, human-curated lexical database. It finds that Word2Vec tends to uncover more of certain types of semantic relations than others – with Word2Vec returning more hypernyms, synonomyns and hyponyms than hyponyms or holonyms. It also shows the probability that neighbors separated by a given cosine distance in Word2Vec are semantically related in WordNet. This result both adds to our understanding of the stillunknown Word2Vec and helps to benchmark new semantic tools built from word vectors. Word2Vec, Natural Language Processing, WordNet, Distributional Semantics", "title": "" }, { "docid": "31c62f403e6d7f06ff2ab028894346ff", "text": "Automated text summarization is important to for humans to better manage the massive information explosion. Several machine learning approaches could be successfully used to handle the problem. This paper reports the results of our study to compare the performance between neural networks and support vector machines for text summarization. Both models have the ability to discover non-linear data and are effective model when dealing with large datasets.", "title": "" }, { "docid": "c9284c30e686c1fe1b905b776b520e0e", "text": "Two decades since the idea of using software diversity for security was put forward, ASLR is the only technique to see widespread deployment. This is puzzling since academic security researchers have published scores of papers claiming to advance the state of the art in the area of code randomization. Unfortunately, these improved diversity techniques are generally less deployable than integrity-based techniques, such as control-flow integrity, due to their limited compatibility with existing optimization, development, and distribution practices. This paper contributes yet another diversity technique called pagerando. Rather than trading off practicality for security, we first and foremost aim for deployability and interoperability. Most code randomization techniques interfere with memory sharing and deduplication optimization across processes and virtual machines, ours does not. We randomize at the granularity of individual code pages but never rewrite page contents. This also avoids incompatibilities with code integrity mechanisms that only allow signed code to be mapped into memory and prevent any subsequent changes. On Android, pagerando fully adheres to the default SELinux policies. All practical mitigations must interoperate with unprotected legacy code, our implementation transparently interoperates with unmodified applications and libraries. 
To support our claims of practicality, we demonstrate that our technique can be integrated into and protect all shared libraries shipped with stock Android 6.0. We also consider hardening of non-shared libraries and executables and other concerns that must be addressed to put software diversity defenses on par with integrity-based mitigations such as CFI.", "title": "" }, { "docid": "88e4c785587b5b195758034119955474", "text": "We consider adaptive meshless discretisation of the Dirichlet problem for Poisson equation based on numerical differentiation stencils obtained with the help of radial basis functions. New meshless stencil selection and adaptive refinement algorithms are proposed in 2D. Numerical experiments show that the accuracy of the solution is comparable with, and often better than that achieved by the mesh-based adaptive finite element method.", "title": "" }, { "docid": "e5ddbe32d1beed6de2e342c5d5fea274", "text": "Link prediction appears as a central problem of network science, as it calls for unfolding the mechanisms that govern the micro-dynamics of the network. In this work, we are interested in ego-networks, that is the mere information of interactions of a node to its neighbors, in the context of social relationships. As the structural information is very poor, we rely on another source of information to predict links among egos’ neighbors: the timing of interactions. We define several features to capture different kinds of temporal information and apply machine learning methods to combine these various features and improve the quality of the prediction. We demonstrate the efficiency of this temporal approach on a cellphone interaction dataset, pointing out features which prove themselves to perform well in this context, in particular the temporal profile of interactions and elapsed time between contacts.", "title": "" }, { "docid": "f77107a84778699e088b94c1a75bfd78", "text": "Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. 
The marked increases in PVT performance variability as sleep loss continued supports the \"state instability\" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.", "title": "" }, { "docid": "1f121c30e686d25f44363f44dc71b495", "text": "In this paper we show that the Euler number of the compactified Jacobian of a rational curve C with locally planar singularities is equal to the multiplicity of the δ-constant stratum in the base of a semi-universal deformation of C. In particular, the multiplicity assigned by Yau, Zaslow and Beauville to a rational curve on a K3 surface S coincides with the multiplicity of the normalisation map in the moduli space of stable maps to S. Introduction Let C be a reduced and irreducible projective curve with singular set Σ ⊂ C and let n : C̃ −→ C be its normalisation. The generalised Jacobian JC of C is an extension of JC̃ by an affine commutative group of dimension δ := dimH0(n∗(OC̃)/OC) = ∑", "title": "" }, { "docid": "8f183ac262aac98c563bf9dcc69b1bf5", "text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.", "title": "" }, { "docid": "a42e6ef132c872c72de49bf47b5ff56f", "text": "A compact dual-band bandstop filter (BSF) is presented. It combines a conventional open-stub BSF and three spurlines. 
This filter generates two stopbands at 2.0 GHz and 3.0 GHz with the same circuit size as the conventional BSF.", "title": "" }, { "docid": "b27dd00e5ef38d678959b3922af8ae0a", "text": "0167-8655/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.patrec.2013.07.007 ⇑ Corresponding author at: Department of Computer Science, Triangle Research & Development Center, Kafr Qarea, Israel. Fax: +972 4 6356168. E-mail addresses: [email protected] (R. Saabni), [email protected] (A. Asi), [email protected] (J. El-Sana). 1 These authors contributed equally to this work. Raid Saabni a,b,⇑,1, Abedelkadir Asi , Jihad El-Sana c", "title": "" }, { "docid": "cbf32934e275e8d95a584762b270a5c2", "text": "Online telemedicine systems are useful due to the possibility of timely and efficient healthcare services. These systems are based on advanced wireless and wearable sensor technologies. The rapid growth in technology has remarkably enhanced the scope of remote health monitoring systems. In this paper, a real-time heart monitoring system is developed considering the cost, ease of application, accuracy, and data security. The system is conceptualized to provide an interface between the doctor and the patients for two-way communication. The main purpose of this study is to facilitate the remote cardiac patients in getting latest healthcare services which might not be possible otherwise due to low doctor-to-patient ratio. The developed monitoring system is then evaluated for 40 individuals (aged between 18 and 66 years) using wearable sensors while holding an Android device (i.e., smartphone under supervision of the experts). The performance analysis shows that the proposed system is reliable and helpful due to high speed. The analyses showed that the proposed system is convenient and reliable and ensures data security at low cost. In addition, the developed system is equipped to generate warning messages to the doctor and patient under critical circumstances.", "title": "" }, { "docid": "77214b0522c0cb7772e094351b5bfa82", "text": "One of the key aspects in the implementation of reactive behaviour in the Web and, most importantly, in the semantic Web is the development of event detection engines. An event engine detects events occurring in a system and notifies their occurrences to its clients. Although primitive events are useful for modelling a good number of applications, certain other applications require the combination of primitive events in order to support reactive behaviour. This paper presents the implementation of an event detection engine that detects composite events specified by expressions of an illustrative sublanguage of the SNOOP event algebra", "title": "" }, { "docid": "13cb793ca9cdf926da86bb6fc630800a", "text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. 
Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.", "title": "" }, { "docid": "19863150313643b977f72452bb5a8a69", "text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.", "title": "" } ]
scidocsrr
c2050d0282ef62b949e49bcd0c985e48
Engineering Methodologies: A Review of the Waterfall Model and Object-Oriented Approach
[ { "docid": "1d1ba5f131c9603fe3d919ad493a6dc1", "text": "By its very nature, software development consists of many knowledge-intensive processes. One of the most difficult to model, however, is requirements elicitation. This paper presents a mathematical model of the requirements elicitation process that clearly shows the critical role of knowledge in its performance. One metaprocess of requirements elicitation, selection of an appropriate elicitation technique, is also captured in the model. The values of this model are: (1) improved understanding of what needs to be performed during elicitation helps analysts improve their elicitation efforts, (2) improved understanding of how elicitation techniques are selected helps less experienced analysts be as successful as more experienced analysts, and (3) as we improve our ability to perform elicitation, we improve the likelihood that the systems we create will meet their intended customers’ needs. Many papers have been written that promulgate specific elicitation methods. A few have been written that model elicitation in general. However, none have yet to model elicitation in a way that makes clear the critical role played by knowledge. This paper’s model captures the critical roles played by knowledge in both elicitation and elicitation technique selection.", "title": "" } ]
[ { "docid": "62c208682a7e87dcefbe0083d0f14b07", "text": "BACKGROUND\nThere is conflicting evidence about the relationship between the dose of enteral caloric intake and survival in critically ill patients. The objective of this systematic review and meta-analysis is to compare the effect of lower versus higher dose of enteral caloric intake in adult critically ill patients on outcome.\n\n\nMETHODS\nWe reviewed MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Scopus from inception through November 2015. We included randomized and quasi-randomized studies in which there was a significant difference in the caloric intake in adult critically ill patients, including trials in which caloric restriction was the primary intervention (caloric restriction trials) and those with other interventions (non-caloric restriction trials). Two reviewers independently extracted data on study characteristics, caloric intake, and outcomes with hospital mortality being the primary outcome.\n\n\nRESULTS\nTwenty-one trials mostly with moderate bias risk were included (2365 patients in the lower caloric intake group and 2352 patients in the higher caloric group). Lower compared with higher caloric intake was not associated with difference in hospital mortality (risk ratio (RR) 0.953; 95 % confidence interval (CI) 0.838-1.083), ICU mortality (RR 0.885; 95 % CI 0.751-1.042), total nosocomial infections (RR 0.982; 95 % CI 0.878-1.077), mechanical ventilation duration, or length of ICU or hospital stay. Blood stream infections (11 trials; RR 0.718; 95 % CI 0.519-0.994) and incident renal replacement therapy (five trials; RR 0.711; 95 % CI 0.545-0.928) were lower with lower caloric intake. The associations between lower compared with higher caloric intake and primary and secondary outcomes, including pneumonia, were not different between caloric restriction and non-caloric restriction trials, except for the hospital stay which was longer with lower caloric intake in the caloric restriction trials.\n\n\nCONCLUSIONS\nWe found no association between the dose of caloric intake in adult critically ill patients and hospital mortality. Lower caloric intake was associated with lower risk of blood stream infections and incident renal replacement therapy (five trials only). The heterogeneity in the design, feeding route and timing and caloric dose among the included trials could limit our interpretation. Further studies are needed to clarify our findings.", "title": "" }, { "docid": "bbc2645372369d0ad68551b20e57e24b", "text": "The objective of this paper is to present an approach to electromagnetic field simulation based on the systematic use of the global (i.e. integral) quantities. In this approach, the equations of electromagnetism are obtained directly in a finite form starting from experimental laws without resorting to the differential formulation. This finite formulation is the natural extension of the network theory to electromagnetic field and it is suitable for computational electromagnetics.", "title": "" }, { "docid": "b53ee86e671ea8db6f9f84c8c02c2b5b", "text": "The accurate estimation of students’ grades in future courses is important as it can inform the selection of next term’s courses and create personalized degree pathways to facilitate successful and timely graduation. This paper presents future course grade predictions methods based on sparse linear and low-rank matrix factorization models that are specific to each course or student–course tuple. 
These methods identify the predictive subsets of prior courses on a course-by-course basis and better address problems associated with the not-missing-at-random nature of the student–course historical grade data. The methods were evaluated on a dataset obtained from the University of Minnesota, for two different departments with different characteristics. This evaluation showed that focusing on course-specific data improves the accuracy of grade prediction.", "title": "" }, { "docid": "59bd3e5db7291e43a8439e63d957aa31", "text": "Semi-supervised classifier design that simultaneously utilizes both labeled and unlabeled samples is a major research issue in machine learning. Existing semisupervised learning methods belong to either generative or discriminative approaches. This paper focuses on probabilistic semi-supervised classifier design and presents a hybrid approach to take advantage of the generative and discriminative approaches. Our formulation considers a generative model trained on labeled samples and a newly introduced bias correction model. Both models belong to the same model family. The proposed hybrid model is constructed by combining both generative and bias correction models based on the maximum entropy principle. The parameters of the bias correction model are estimated by using training data, and combination weights are estimated so that labeled samples are correctly classified. We use naive Bayes models as the generative models to apply the hybrid approach to text classification problems. In our experimental results on three text data sets, we confirmed that the proposed method significantly outperformed pure generative and discriminative methods when the classification performances of the both methods were comparable.", "title": "" }, { "docid": "a8d6a864092b3deb58be27f0f76b02c2", "text": "High-quality word representations have been very successful in recent years at improving performance across a variety of NLP tasks. These word representations are the mappings of each word in the vocabulary to a real vector in the Euclidean space. Besides high performance on specific tasks, learned word representations have been shown to perform well on establishing linear relationships among words. The recently introduced skipgram model improved performance on unsupervised learning of word embeddings that contains rich syntactic and semantic word relations both in terms of accuracy and speed. Word embeddings that have been used frequently on English language, is not applied to Turkish yet. In this paper, we apply the skip-gram model to a large Turkish text corpus and measured the performance of them quantitatively with the \"question\" sets that we generated. The learned word embeddings and the question sets are publicly available at our website. Keywords—Word embeddings, Natural Language Processing, Deep Learning", "title": "" }, { "docid": "731d9faffc834156d5218a09fbb82e27", "text": "With this paper we take a first step to understand the appropriation of social media by the police. For this purpose we analyzed the Twitter communication by the London Metropolitan Police (MET) and the Greater Manchester Police (GMP) during the riots in August 2011. The systematic comparison of tweets demonstrates that the two forces developed very different practices for using Twitter. 
While MET followed an instrumental approach in their communication, in which the police aimed to remain in a controlled position and keep a distance to the general public, GMP developed an expressive approach, in which the police actively decreased the distance to the citizens. In workshops and interviews, we asked the police officers about their perspectives, which confirmed the identified practices. Our study discusses benefits and risks of the two approaches and the potential impact of social media on the evolution of the role of police in society.", "title": "" }, { "docid": "5e2e5ba17b6f44f2032c6c542918e23c", "text": "BACKGROUND\nSubfertility and poor nutrition are increasing problems in Western countries. Moreover, nutrition affects fertility in both women and men. In this study, we investigate the association between adherence to general dietary recommendations in couples undergoing IVF/ICSI treatment and the chance of ongoing pregnancy.\n\n\nMETHODS\nBetween October 2007 and October 2010, couples planning pregnancy visiting the outpatient clinic of the Department of Obstetrics and Gynaecology of the Erasmus Medical Centre in Rotterdam, the Netherlands were offered preconception counselling. Self-administered questionnaires on general characteristics and diet were completed and checked during the visit. Six questions, based on dietary recommendations of the Netherlands Nutrition Centre, covered the intake of six main food groups (fruits, vegetables, meat, fish, whole wheat products and fats). Using the questionnaire results, we calculated the Preconception Dietary Risk score (PDR), providing an estimate of nutritional habits. Dietary quality increases with an increasing PDR score. We define ongoing pregnancy as an intrauterine pregnancy with positive heart action confirmed by ultrasound. For this analysis we selected all couples (n=199) who underwent a first IVF/ICSI treatment within 6 months after preconception counselling. We applied adjusted logistic regression analysis on the outcomes of interest using SPSS.\n\n\nRESULTS\nAfter adjustment for age of the woman, smoking of the woman, PDR of the partner, BMI of the couple and treatment indication we show an association between the PDR of the woman and the chance of ongoing pregnancy after IVF/ICSI treatment (odds ratio 1.65, confidence interval: 1.08-2.52; P=0.02]. Thus, a one-point increase in the PDR score associates with a 65% increased chance of ongoing pregnancy.\n\n\nCONCLUSIONS\nOur results show that increasing adherence to Dutch dietary recommendations in women undergoing IVF/ICSI treatment increases the chance of ongoing pregnancy. These data warrant further confirmation in couples achieving a spontaneous pregnancy and in randomized controlled trials.", "title": "" }, { "docid": "bd110cfe3a3dbb31057fec06e6a5e8d9", "text": "In this study, it proposes a new optimization algorithm called APRIORI-IMPROVE based on the insufficient of Apriori. APRIORI-IMPROVE algorithm presents optimizations on 2-items generation, transactions compression and so on. APRIORI-IMPROVE uses hash structure to generate L2, uses an efficient horizontal data representation and optimized strategy of storage to save time and space. The performance study shows that APRIORI-IMPROVE is much faster than Apriori.", "title": "" }, { "docid": "6ee2ee4a1cff7b1ddb8e5e1e2faf3aa5", "text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. 
To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.", "title": "" }, { "docid": "e93f87593396f8b8ab09bc2f378eee33", "text": "The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC) surgery, using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occur. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) an intensive-computation statistical criterion (Bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.", "title": "" }, { "docid": "8f304c738458fa2ccae77b3f222b45ab", "text": "A vehicular ad hoc network (VANET) serves as an application of the intelligent transportation system that improves traffic safety as well as efficiency. Vehicles in a VANET broadcast traffic and safety-related information used by road safety applications, such as an emergency electronic brake light. The broadcast of these messages in an open-access environment makes security and privacy critical and challenging issues in the VANET. A misuse of this information may lead to a traffic accident and loss of human lives atworse and, therefore, vehicle authentication is a necessary requirement. During authentication, a vehicle’s privacy-related data, such as identity and location information, must be kept private. This paper presents an approach for privacy-preserving authentication in a VANET. Our hybrid approach combines the useful features of both the pseudonym-based approaches and the group signature-based approaches to preclude their respective drawbacks. The proposed approach neither requires a vehicle to manage a certificate revocation list, nor indulges vehicles in any group management. The proposed approach utilizes efficient and lightweight pseudonyms that are not only used for message authentication, but also serve as a trapdoor in order to provide conditional anonymity. We present various attack scenarios that show the resilience of the proposed approach against various security and privacy threats. 
We also provide analysis of computational and communication overhead to show the efficiency of the proposed technique. In addition, we carry out extensive simulations in order to present a detailed network performance analysis. The results show the feasibility of our proposed approach in terms of end-to-end delay and packet delivery ratio.", "title": "" }, { "docid": "bd1523c64d8ec69d87cbe68a4d73ea17", "text": "BACKGROUND AND OBJECTIVE\nThe effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library.\n\n\nMETHODS\nBased on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library.\n\n\nRESULTS\nWe have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest.\n\n\nCONCLUSIONS\nThe IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems.", "title": "" }, { "docid": "e1edaf3e8754e8403b9be29f58ba3550", "text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.", "title": "" }, { "docid": "8400bb9a7c979932683e742a6ee67176", "text": "BACKGROUND & AIMS\nHepatitis B and D viruses (HBV and HDV) are human pathogens with restricted host ranges and high selectivity for hepatocytes; the HBV L-envelope protein interacts specifically with a receptor on these cells. We aimed to identify this receptor and analyze whether it is the recently described sodium-taurocholate co-transporter polypeptide (NTCP), encoded by the SLC10A1 gene.\n\n\nMETHODS\nTo identify receptor candidates, we compared gene expression patterns between differentiated HepaRG cells, which express the receptor, and naïve cells, which do not. 
Receptor candidates were evaluated by small hairpin RNA silencing in HepaRG cells; the ability of receptor expression to confer binding and infection were tested in transduced hepatoma cell lines. We used interspecies domain swapping to identify motifs for receptor-mediated host discrimination of HBV and HDV binding and infection.\n\n\nRESULTS\nBioinformatic analyses of comparative expression arrays confirmed that NTCP, which was previously identified through a biochemical approach is a bona fide receptor for HBV and HDV. NTCPs from rat, mouse, and human bound Myrcludex B, a peptide ligand derived from the HBV L-protein. Myrcludex B blocked NTCP transport of bile salts; small hairpin RNA-mediated knockdown of NTCP in HepaRG cells prevented their infection by HBV or HDV. Expression of human but not mouse NTCP in HepG2 and HuH7 cells conferred a limited cell-type-related and virus-dependent susceptibility to infection; these limitations were overcome when cells were cultured with dimethyl sulfoxide. We identified 2 short-sequence motifs in human NTCP that were required for species-specific binding and infection by HBV and HDV.\n\n\nCONCLUSIONS\nHuman NTCP is a specific receptor for HBV and HDV. NTCP-expressing cell lines can be efficiently infected with these viruses, and might be used in basic research and high-throughput screening studies. Mapping of motifs in NTCPs have increased our understanding of the species specificities of HBV and HDV, and could lead to small animal models for studies of viral infection and replication.", "title": "" }, { "docid": "96ea7f2a0fd0a630df87d22d846d1575", "text": "BACKGROUND\nRecent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies.\n\n\nRESULTS\nWe analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. 
Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches.\n\n\nCONCLUSION\nSystems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research.", "title": "" }, { "docid": "df69a701bca12d3163857a9932ef51e2", "text": "Students often have their own individual laptop computers in university classes, and researchers debate the potential benefits and drawbacks of laptop use. In the presented research, we used a combination of surveys and in-class observations to study how students use their laptops in an unmonitored and unrestricted class setting—a large lecture-based university class with nearly 3000 enrolled students. By analyzing computer use over the duration of long (165 minute) classes, we demonstrate how computer use changes over time. The observations and studentreports provided similar descriptions of laptop activities. Note taking was the most common use for the computers, followed by the use of social media web sites. Overall, the data show that students engaged in off-task computer activities for nearly two-thirds of the time. An analysis of the frequency of the various laptop activities over time showed that engagement in individual activities varied significantly over the duration of the class.", "title": "" }, { "docid": "949e6376eb352482603e6168894744fb", "text": "Search over encrypted data is a technique of great interest in the cloud computing era, because many believe that sensitive data has to be encrypted before outsourcing to the cloud servers in order to ensure user data privacy. Devising an efficient and secure search scheme over encrypted data involves techniques from multiple domains – information retrieval for index representation, algorithms for search efficiency, and proper design of cryptographic protocols to ensure the security and privacy of the overall system. This chapter provides a basic introduction to the problem definition, system model, and reviews the state-of-the-art mechanisms for implementing privacy-preserving keyword search over encrypted data. We also present one integrated solution, which hopefully offer more insights into this important problem.", "title": "" }, { "docid": "affa48f455d5949564302b4c23324458", "text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. 
In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.", "title": "" }, { "docid": "314722d112f5520f601ed6917f519466", "text": "In this work we propose an online multi person pose tracking approach which works on two consecutive frames It−1 and It . The general formulation of our temporal network allows to rely on any multi person pose estimation approach as spatial network. From the spatial network we extract image features and pose features for both frames. These features serve as input for our temporal model that predicts Temporal Flow Fields (TFF). These TFF are vector fields which indicate the direction in which each body joint is going to move from frame It−1 to frame It . This novel representation allows to formulate a similarity measure of detected joints. These similarities are used as binary potentials in a bipartite graph optimization problem in order to perform tracking of multiple poses. We show that these TFF can be learned by a relative small CNN network whilst achieving state-of-the-art multi person pose tracking results.", "title": "" }, { "docid": "ab74bef6dce156cd335267109e6fc0bc", "text": "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.", "title": "" } ]
scidocsrr
b9b1c839c4d62acd8950d1a9b58e9744
Graph Visualization and Navigation in Information Visualization: A Survey
[ { "docid": "1649b2776fcc2b8a736306128f8a2331", "text": "The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.", "title": "" } ]
[ { "docid": "327042fae16e69b15a4e8ea857ccdb18", "text": "Do countries with lower policy-induced barriers to international trade grow faster, once other relevant country characteristics are controlled for? There exists a large empirical literature providing an affirmative answer to this question. We argue that methodological problems with the empirical strategies employed in this literature leave the results open to diverse interpretations. In many cases, the indicators of \"openness\" used by researchers are poor measures of trade barriers or are highly correlated with other sources of bad economic performance. In other cases, the methods used to ascertain the link between trade policy and growth have serious shortcomings. Papers that we review include Dollar (1992), Ben-David (1993), Sachs and Warner (1995), and Edwards (1998). We find little evidence that open trade policies--in the sense of lower tariff and non-tariff barriers to trade--are significantly associated with economic growth. Francisco Rodríguez Dani R odrik Department of Economics John F. Kennedy School of Government University of Maryland Harvard University College Park, MD 20742 79 Kennedy Street Cambridge, MA 02138 Phone: (301) 405-3480 Phone: (617) 495-9454 Fax: (301) 405-3542 Fax: (617) 496-5747 TRADE POLICY AND ECONOMIC GROWTH: A SKEPTIC'S GUIDE TO THE CROSS-NATIONAL EVIDENCE \"It isn't what we don't know that kills us. It's what we know that ain't so.\" -Mark Twain", "title": "" }, { "docid": "39debcb0aa41eec73ff63a4e774f36fd", "text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.", "title": "" }, { "docid": "73a556ff210ad26742e05f7fb91c2dab", "text": "Supervised learning, more specifically Convolutional Neural Networks (CNN), has surpassed human ability in some visual recognition tasks such as detection of traffic signs, faces and handwritten numbers. On the other hand, even stateof-the-art reinforcement learning (RL) methods have difficulties in environments with sparse and binary rewards. They requires manually shaping reward functions, which might be challenging to come up with. These tasks, however, are trivial to human. One of the reasons that human are better learners in these tasks is that we are embedded with much prior knowledge of the world. These knowledge might be either embedded in our genes or learned from imitation a type of supervised learning. For that reason, the best way to narrow the gap between machine and human learning ability should be to mimic how we learn so well in various tasks by a combination of RL and supervised learning. 
Our method, which integrates Deep Deterministic Policy Gradients and Hindsight Experience Replay (an RL method specifically dealing with sparse rewards) with an experience ranking CNN, provides a significant speedup over the learning curve on simulated robotics tasks. Experience ranking allows high-reward transitions to be replayed more frequently, and therefore helps the agent learn more efficiently. Our proposed approach can also speed up learning in any other tasks that provide additional information for experience ranking.", "title": "" }, { "docid": "8621332351bd2af6148a891d183f3eae", "text": "Recent research on neural networks has shown significant advantages in machine learning over traditional algorithms based on handcrafted features and models. Neural networks are now widely adopted in areas like image, speech and video recognition. But the high computation and storage complexity of neural network inference poses great difficulty for its application. It is hard for CPU platforms to offer enough computation capacity. GPU platforms are the first choice for neural network processing because of their high computation capacity and easy-to-use development frameworks. On the other hand, FPGA-based neural network inference accelerators are becoming a research topic. With specifically designed hardware, FPGA is the next possible solution to surpass GPU in speed and energy efficiency. Various FPGA-based accelerator designs have been proposed with software and hardware optimization techniques to achieve high speed and energy efficiency. In this paper, we give an overview of previous work on neural network inference accelerators based on FPGA and summarize the main techniques used. An investigation from software to hardware, from circuit level to system level, is carried out to complete the analysis of FPGA-based neural network inference accelerator design and serves as a guide to future work.", "title": "" }, { "docid": "e64271530ae4314745f1bb54237d79ca", "text": "The concept of ecosystem emanates from ecology and subsequently has been broadly used in business studies to describe and investigate complex interrelationships between companies and other organizations. Concepts that are transferred from other disciplines (and used both in research and in practice) can, however, be ambiguous and problematic. For example, the use of the ecosystem concept has been questioned in the literature. To better understand the potential ambiguities between the business ecosystem concept and other related concepts, this study presents a conceptual analysis of business ecosystem. We continue by analytically comparing business ecosystem with other concepts used to describe business relationships, namely industry, population, cluster, and inter-organizational network. The results indicate a need for conceptual clarity when describing business networks. We conclude with a synthesis and discuss under what circumstances using the business ecosystem concept may add value for research and practice. The paper contributes to the business ecosystem literature by positioning the business ecosystem concept in relation to other closely related concepts.", "title": "" }, { "docid": "49fbe9ddc3087c26ecc373c6731fca77", "text": "Alarm correlation plays an important role in improving the service and reliability in modern telecommunication networks. Most previous research on alarm correlation didn’t consider the effects of noise data in the database. This paper focuses on the method of discovering alarm correlation rules from a database containing noise data.
We firstly define two parameters Win_freq and Win_add as the measures of noise data and then present the Robust_search algorithm to solve the problem. At different size of Win_freq and Win_add, the experiments on alarm database containing noise data show that the Robust_search Algorithm can discover more rules with the bigger size of Win_add. We also compare two different interestingness measures of confidence and correlation by experiments.", "title": "" }, { "docid": "086f5e6dd7889d8dcdaddec5852afbdb", "text": "Fast advances in the wireless technology and the intensive penetration of cell phones have motivated banks to spend large budget on building mobile banking systems, but the adoption rate of mobile banking is still underused than expected. Therefore, research to enrich current knowledge about what affects individuals to use mobile banking is required. Consequently, this study employs the Unified Theory of Acceptance and Use of Technology (UTAUT) to investigate what impacts people to adopt mobile banking. Through sampling 441 respondents, this study empirically concluded that individual intention to adopt mobile banking was significantly influenced by social influence, perceived financial cost, performance expectancy, and perceived credibility, in their order of influencing strength. The behavior was considerably affected by individual intention and facilitating conditions. As for moderating effects of gender and age, this study discovered that gender significantly moderated the effects of performance expectancy and perceived financial cost on behavioral intention, and the age considerably moderated the effects of facilitating conditions and perceived self-efficacy on actual adoption behavior.", "title": "" }, { "docid": "c599ee48bd5696f0ea4595be4f2725f3", "text": "Ultrasound has been recently proposed as an alternative modality for efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions. This paper presents the theory and design methodology of ultrasonic WPT links that involve mm-sized receivers (Rx). For given load <inline-formula><tex-math notation=\"LaTeX\">$(R_{L})$</tex-math></inline-formula> and powering distance <inline-formula><tex-math notation=\"LaTeX\">$(d)$</tex-math></inline-formula>, the optimal geometries of transmitter (Tx) and Rx ultrasonic transducers, including their diameter and thickness, as well as the optimal operation frequency <inline-formula><tex-math notation=\"LaTeX\">$(f_{c})$</tex-math></inline-formula> are found through a recursive design procedure to maximize the power transmission efficiency (PTE). First, a range of realistic <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula>s is found based on the Rx thickness constrain. For a chosen <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> within the range, the diameter and thickness of the Rx transducer are then swept together to maximize PTE. Then, the diameter and thickness of the Tx transducer are optimized to maximize PTE. Finally, this procedure is repeated for different <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula>s to find the optimal <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> and its corresponding transducer geometries that maximize PTE. A design example of ultrasonic link has been presented and optimized for WPT to a 1 mm<sup>3</sup> implant, including a disk-shaped piezoelectric transducer on a silicon die. 
In simulations, a PTE of 2.11% at <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> of 1.8 MHz was achieved for <inline-formula><tex-math notation=\"LaTeX\">$R_{L}$</tex-math></inline-formula> of 2.5 <inline-formula><tex-math notation=\"LaTeX\">$\\text{k}\\Omega$</tex-math></inline-formula> at <inline-formula><tex-math notation=\"LaTeX\">$d = 3\\ \\text{cm}$</tex-math></inline-formula>. In order to validate our simulations, an ultrasonic link was optimized for a 1 mm<sup>3</sup> piezoelectric transducer mounted on a printed circuit board (PCB), which led to simulated and measured PTEs of 0.65% and 0.66% at <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> of 1.1 MHz for <inline-formula><tex-math notation=\"LaTeX\">$R_{L}$</tex-math></inline-formula> of 2.5 <inline-formula><tex-math notation=\"LaTeX\">$\\text{k}\\Omega$</tex-math></inline-formula> at <inline-formula><tex-math notation=\"LaTeX\">$d = 3\\ \\text{cm}$</tex-math></inline-formula>, respectively.", "title": "" }, { "docid": "7a56a39d50eb8ad9752ec01bb6f24f76", "text": "We study bandlimited signals with fractional Fourier transform (FRFT). We show that if a nonzero signal f is bandlimited with FRFT F/sub /spl alpha// for a certain real /spl alpha/, then it is not bandlimited with FRFT F/sub /spl beta// for any /spl beta/ with /spl beta//spl ne//spl plusmn//spl alpha/+n/spl pi/ for any integer n. This is a generalization of the fact that a nonzero signal can not be both timelimited and bandlimited. We also provide sampling theorems for bandlimited signals with FRFT that are similar to the Shannon sampling theorem.", "title": "" }, { "docid": "589dd2ca6e12841f3dd4a6873e2ea564", "text": "As many automated test input generation tools for Android need to instrument the system or the app, they cannot be used in some scenarios such as compatibility testing and malware analysis. We introduce DroidBot, a lightweight UI-guided test input generator, which is able to interact with an Android app on almost any device without instrumentation. The key technique behind DroidBot is that it can generate UI-guided test inputs based on a state transition model generated on-the-fly, and allow users to integrate their own strategies or algorithms. DroidBot is lightweight as it does not require app instrumentation, thus users do not need to worry about the inconsistency between the tested version and the original version. It is compatible with most Android apps, and able to run on almost all Android-based systems, including customized sandboxes and commodity devices. Droidbot is released as an open-source tool on GitHub, and the demo video can be found at https://youtu.be/3-aHG_SazMY.", "title": "" }, { "docid": "3c89e7c5fdd2269ffb17adcaec237d6c", "text": "Numerical simulation of quantum systems is crucial to further our understanding of natural phenomena. Many systems of key interest and importance, in areas such as superconducting materials and quantum chemistry, are thought to be described by models which we cannot solve with sufficient accuracy, neither analytically nor numerically with classical computers. Using a quantum computer to simulate such quantum systems has been viewed as a key application of quantum computation from the very beginning of the field in the 1980s. 
Moreover, useful results beyond the reach of classical computation are expected to be accessible with fewer than a hundred qubits, making quantum simulation potentially one of the earliest practical applications of quantum computers. In this paper we survey the theoretical and experimental development of quantum simulation using quantum computers, from the first ideas to the intense research efforts currently underway.", "title": "" }, { "docid": "8d83568ca0c89b1a6e344341bb92c2d0", "text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G,n) ∈ IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.", "title": "" }, { "docid": "34fc01272f6f41432c1bb3503a716e15", "text": "Multi-agent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. Many tasks arising in these domains require that the agents learn behaviors online. A significant part of the research on multi-agent learning concerns reinforcement learning techniques. However, due to different viewpoints on central issues, such as the formal statement of the learning goal, a large number of different methods and approaches have been introduced. In this paper we aim to present an integrated survey of the field. First, the issue of the multi-agent learning goal is discussed, after which a representative selection of algorithms is reviewed. Finally, open issues are identified and future research directions are outlined.", "title": "" }, { "docid": "b45a6b66cff9a10f0479e58d9d02aae8", "text": "Chess programs have three major components: move generation, search, and evaluation. All components are important, although evaluation with its quiescence analysis is the part which makes each program’s play unique. The speed of a chess program is a function of its move generation cost, the complexity of the position under study and the brevity of its evaluation. More important, however, is the quality of the mechanisms used to discontinue (prune) search of unprofitable continuations. The most reliable pruning method in popular use is the robust alpha-beta algorithm, and its many supporting aids. These essential parts of game-tree searching and pruning are reviewed here, and the performance of refinements, such as aspiration and principal variation search, and aids like transposition and history tables are compared. † Much of this article is a revision of material condensed from an entry entitled ‘‘Computer Chess Methods,’’ prepared for the Encyclopedia of Artificial Intelligence, S.
Shapiro (editor), to be published by John Wiley & Sons in 1987. The transposition table pseudo code of Figure 7 is similar to that in another paper: ‘‘Parallel Search of Strongly Ordered Game Trees,’’ T. A. Marsland and M. Campbell, ACM Computing Surveys, Vol 14, No. 4, copyright 1982, Association for Computing Machinery Inc., and is reprinted by permission. Final draft: ICCA Journal, Vol. 9, No. 1, March 1986, pp. 3-19.", "title": "" }, { "docid": "a85c6e8a666d079c60b9bc31d6d9ae62", "text": "When pedestrians encounter vehicles, they typically stop and wait for a signal from the driver to either cross or wait. What happens when the car is autonomous and there isn’t a human driver to signal them? This paper seeks to address this issue with an intent communication system (ICS) that acts in place of a human driver. This intent system has been developed to take into account the psychology behind what pedestrians are familiar with and what they expect from machines. The system integrates those expectations into the design of physical systems and mathematical algorithms. The goal of the system is to ensure that communication is simple, yet effective without leaving pedestrians with a sense of distrust in autonomous vehicles. To validate the ICS, two types of experiments have been run: field tests with an autonomous vehicle to determine how humans actually interact with the ICS and simulations to account for multiple potential behaviors. The results from both experiments show that humans react positively and more predictably when the intent of the vehicle is communicated compared to when the intent of the vehicle is unknown. In particular, the results from the simulation specifically showed a 142 percent difference in the pedestrian’s trust in the vehicle’s actions between the case where the ICS is enabled and the pedestrian has prior knowledge of the vehicle and the case where the ICS is not enabled and the pedestrian has no prior knowledge of the vehicle.", "title": "" }, { "docid": "3fe3d1f8b5e141b9044686491fffe12f", "text": "A data stream is a potentially massive, continuous, rapid sequence of data. It has aroused great concern and an upsurge of research in the field of data mining. Clustering is an effective tool of data mining, so data stream clustering will undoubtedly become a focus of study in data stream mining. In view of the high-dimensional, dynamic, real-time characteristics of data streams, many effective data stream clustering algorithms have been proposed. In addition, data streams are not deterministic and often contain outliers and noise, so developing effective data stream clustering algorithms is crucial. This paper reviews the development and trends of data stream clustering and analyzes typical data stream clustering algorithms proposed in recent years, such as the Birch algorithm, the Local Search algorithm, the Stream algorithm and the CluStream algorithm. We also summarize the latest research achievements in this field and introduce some new strategies to deal with outliers and noise data. Finally, we put forward the focal points and difficulties of future research on data stream clustering.", "title": "" }, { "docid": "6ec9be993cf6f4ac3e4b0ba4e4f6c313", "text": "Femoroacetabular impingement is a well-documented cause of hip pain. There is, however, increasing evidence for the presence of a previously unrecognised impingement-type condition around the hip - ischiofemoral impingement.
This is caused by abnormal contact between the lesser trochanter of the femur and the ischium, and presents as atypical groin and/or posterior buttock pain. The symptoms are gradual in onset and may be similar to those of iliopsoas tendonitis, hamstring injury or bursitis. The presence of ischiofemoral impingement may be indicated by pain caused by a combination of hip extension, adduction and external rotation. Magnetic resonance imaging demonstrates inflammation and oedema in the ischiofemoral space and quadratus femoris, and is distinct from an acute tear. To date this has only appeared in the specialist orthopaedic literature as a problem that has developed after total hip replacement, not in the unreplaced joint.", "title": "" }, { "docid": "5e95aaa54f8acf073ccc11c08c148fe0", "text": "Billions of dollars of loss are caused every year due to fraudulent credit card transactions. The design of efficient fraud detection algorithms is key for reducing these losses, and more and more algorithms rely on advanced machine learning techniques to assist fraud investigators. The design of fraud detection algorithms is however particularly challenging due to non stationary distribution of the data, highly imbalanced classes distributions and continuous streams of transactions. At the same time public data are scarcely available for confidentiality issues, leaving unanswered many questions about which is the best strategy to deal with them. In this paper we provide some answers from the practitioner’s perspective by focusing on three crucial issues: unbalancedness, non-stationarity and assessment. The analysis is made possible by a real credit card dataset provided by our industrial partner.", "title": "" }, { "docid": "7feda29a5edf6855895f91f80c3286a4", "text": "The ability to conduct logical reasoning is a fundamental aspect of intelligent behavior, and thus an important problem along the way to human-level artificial intelligence. Traditionally, symbolic logic-based methods from the field of knowledge representation and reasoning have been used to equip agents with capabilities that resemble human logical reasoning qualities. More recently, however, there has been an increasing interest in using machine learning rather than symbolic logic-based formalisms to tackle these tasks. In this paper, we employ state-of-the-art methods for training deep neural networks to devise a novel model that is able to learn how to effectively perform logical reasoning in the form of basic ontology reasoning. This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of important real-world problems. We present the outcomes of several experiments, which show that our model learned to perform precise ontology reasoning on diverse and challenging tasks. Furthermore, it turned out that the suggested approach suffers much less from different obstacles that prohibit logic-based symbolic reasoning, and, at the same time, is surprisingly plausible from a biological point of view.", "title": "" }, { "docid": "4d4a90b28d0454f7ceded05d35c1b04e", "text": "Analysis of satellite images plays an increasingly vital role in environment and climate monitoring, especially in detecting and managing natural disaster. In this paper, we proposed an automatic disaster detection system by implementing one of the advance deep learning techniques, convolutional neural network (CNN), to analysis satellite images. 
The neural network consists of 3 convolutional layers, followed by max-pooling layers after each convolutional layer, and 2 fully connected layers. We created our own disaster detection training data patches, which is currently focusing on 2 main disasters in Japan and Thailand: landslide and flood. Each disaster's training data set consists of 30000~40000 patches and all patches are trained automatically in CNN to extract region where disaster occurred instantaneously. The results reveal accuracy of 80%~90% for both disaster detection. The results presented here may facilitate improvements in detecting natural disaster efficiently by establishing automatic disaster detection system.", "title": "" } ]
scidocsrr
9c50b948f6621f5dbacc2a9ce01b2f6e
Monopole Antenna With Inkjet-Printed EBG Array on Paper Substrate for Wearable Applications
[ { "docid": "6f13503bf65ff58b7f0d4f3282f60dec", "text": "Body centric wireless communication is now accepted as an important part of 4th generation (and beyond) mobile communications systems, taking the form of human to human networking incorporating wearable sensors and communications. There are also a number of body centric communication systems for specialized occupations, such as paramedics and fire-fighters, military personnel and medical sensing and support. To support these developments there is considerable ongoing research into antennas and propagation for body centric communications systems, and this paper will summarise some of it, including the characterisation of the channel on the body, the optimisation of antennas for these channels, and communications to medical implants where advanced antenna design and characterisation and modelling of the internal body channel are important research needs. In all of these areas both measurement and simulation pose very different and challenging issues to be faced by the researcher.", "title": "" }, { "docid": "e99d7b425ab1a2a9a2de4e10a3fbe766", "text": "In this paper, a review of the authors' work on inkjet-printed flexible antennas, fabricated on paper substrates, is given. This is presented as a system-level solution for ultra-low-cost mass production of UHF radio-frequency identification (RFID) tags and wireless sensor nodes (WSN), in an approach that could be easily extended to other microwave and wireless applications. First, we discuss the benefits of using paper as a substrate for high-frequency applications, reporting its very good electrical/dielectric performance up to at least 1 GHz. The RF characteristics of the paper-based substrate are studied by using a microstrip-ring resonator, in order to characterize the dielectric properties (dielectric constant and loss tangent). We then give details about the inkjet-printing technology, including the characterization of the conductive ink, which consists of nano-silver particles. We highlight the importance of this technology as a fast and simple fabrication technique, especially on flexible organic (e.g., LCP) or paper-based substrates. A compact inkjet-printed UHF ldquopassive RFIDrdquo antenna, using the classic T-match approach and designed to match the IC's complex impedance, is presented as a demonstration prototype for this technology. In addition, we briefly touch upon the state-of-the-art area of fully-integrated wireless sensor modules on paper. We show the first-ever two-dimensional sensor integration with an RFID tag module on paper, as well as the possibility of a three-dimensional multilayer paper-based RF/microwave structure.", "title": "" }, { "docid": "784f3100dbd852b249c0e9b0761907f1", "text": "The bi-directional beam from an equiangular spiral antenna (EAS) is changed to a unidirectional beam using an electromagnetic band gap (EBG) reflector. The antenna height, measured from the upper surface of the EBG reflector to the spiral arms, is chosen to be extremely small to realize a low-profile antenna: 0.07 wavelength at the lowest analysis frequency of 3 GHz. The analysis shows that the EAS backed by the EBG reflector does not reproduce the inherent wideband axial ratio characteristic observed when the EAS is isolated in free space. The deterioration in the axial ratio is examined by decomposing the total radiation field into two field components: one component from the equiangular spiral and the other from the EBG reflector. 
The examination reveals that the amplitudes and phases of these two field components do not satisfy the constructive relationship necessary for circularly polarized radiation. Based on this finding, next, the EBG reflector is modified by gradually removing the patch elements from the center region of the reflector, thereby satisfying the required constructive relationship between the two field components. This equiangular spiral with a modified EBG reflector shows wideband characteristics with respect to the axial ratio, input impedance and gain within the design frequency band (4-9 GHz). Note that, for comparison, the antenna characteristics for an EAS isolated in free space and an EAS backed by a perfect electric conductor are also presented.", "title": "" } ]
[ { "docid": "0cd96187b257ee09060768650432fe6d", "text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.", "title": "" }, { "docid": "69bb52e45db91f142b8c5297abd21282", "text": "IP-based solutions to accommodate mobile hosts within existing internetworks do not address the distinctive features of wireless mobile computing. IP-based transport protocols thus suffer from poor performance when a mobile host communicates with a host on the fixed network. This is caused by frequent disruptions in network layer connectivity due to — i) mobility and ii) unreliable nature of the wireless link. We describe the design and implementation of I-TCP, which is an indirect transport layer protocol for mobile hosts. I-TCP utilizes the resources of Mobility Support Routers (MSRs) to provide transport layer communication between mobile hosts and hosts on the fixed network. With I-TCP, the problems related to mobility and the unreliability of wireless link are handled entirely within the wireless link; the TCP/IP software on the fixed hosts is not modified. Using I-TCP on our testbed, the throughput between a fixed host and a mobile host improved substantially in comparison to regular TCP.", "title": "" }, { "docid": "3b47a88f37a06ec44d510a4dbfc0993d", "text": "Governance, Risk and Compliance (GRC) as an integrated concept has gained great interest recently among researchers in the Information Systems (IS) field. The need for more effective and efficient business processes in the area of financial controls drives enterprises to successfully implement GRC systems as an overall goal when they are striving for enterprise value of their integrated systems. The GRC implementation process is a significant parameter influencing the success of operational performance and financial governance and supports the practices for competitive advantage within the organisations. However, GRC literature is limited regarding the analysis of their implementation and adoption success. Therefore, there is a need for further research and contribution in the area of GRC systems and more specifically their implementation process. The research at hand recognizes GRC as a fundamental business requirement and focuses on the need to analyse the implementation process of such enterprise solutions. The research includes theoretical and empirical investigation of the GRC implementation within an enterprise and develops a framework for the analysis of the GRC adoption. The approach suggests that the three success factors (integration, optimisation, information) influence the adoption of the GRC and more specifically their implementation process. 
The proposed framework followed a case study approach to confirm its functionality and is evaluated through interviews with stakeholders involved in GRC implementations. Furthermore, it can be used by the organisations when considering the adoption of GRC solutions and can also suggest a tool for researchers to analyse and explain further the GRC implementation process.", "title": "" }, { "docid": "d7c2d97fbd7591bdd53e711ed5582f6c", "text": "Progress in Information and Communication Technologies (ICTs) is shaping more and more the healthcare domain. ICTs adoption provides new opportunities, as well as discloses novel and unforeseen application scenarios. As a result, the overall health sector is potentially benefited, as the quality of medical services is expected to be enhanced and healthcare costs are reduced, in spite of the increasing demand due to the aging population. Notwithstanding the above, the scientific literature appears to be still quite scattered and fragmented, also due to the interaction of scientific communities with different background, skills, and approaches. A number of specific terms have become of widespread use (e.g., regarding ICTs-based healthcare paradigms as well as at health-related data formats), but without commonly-agreed definitions. While scientific surveys and reviews have also been proposed, none of them aims at providing a holistic view of how today ICTs are able to support healthcare. This is the more and more an issue, as the integrated application of most if not all the main ICTs pillars is the most agreed upon trend, according to the Industry 4.0 paradigm about ongoing and future industrial revolution. In this paper we aim at shedding light on how ICTs and healthcare are related, identifying the most popular ICTs-based healthcare paradigms, together with the main ICTs backing them. Studying more than 300 papers, we survey outcomes of literature analyses and results from research activities carried out in this field. We characterize the main ICTs-based healthcare paradigms stemmed out in recent years fostered by the evolution of ICTs. Dissecting the scientific literature, we also identify the technological pillars underpinning the novel applications fueled by these technological advancements. Guided by the scientific literature, we review a number of application scenarios gaining momentum thanks to the beneficial impact of ICTs. As the evolution of ICTs enables to gather huge and invaluable data from numerous and highly varied sources in easier ways, here we also focus on the shapes that this healthcare-related data may take. This survey provides an up-to-date picture of the novel healthcare applications enabled by the ICTs advancements, with a focus on their specific hottest research challenges. It helps the interested readership (from both technological and medical fields) not to lose orientation in the complex landscapes possibly generated when advanced ICTs are adopted in application scenarios dictated by the critical healthcare domain.", "title": "" }, { "docid": "ce6e5532c49b02988588f2ac39724558", "text": "Many modern computing environments involve dynamic peer groups. Distributed simulation, multi-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement.
in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticated key agreement and extends the results to Group Diffie-Hellman key agreement. In the process, some new security properties (unique to groups) are discussed.", "title": "" }, { "docid": "46465926afb62b9f73386a962047875d", "text": "Cervical cancer represents the second leading cause of death for women worldwide. The importance of the diet and its impact on specific types of neoplasia has been highlighted, focusing again interest in the analysis of dietary phytochemicals. Polyphenols have shown a wide range of cellular effects: they may prevent carcinogens from reaching the targeted sites, support detoxification of reactive molecules, improve the elimination of transformed cells, increase the immune surveillance and the most important factor is that they can influence tumor suppressors and inhibit cellular proliferation, interfering in this way with the steps of carcinogenesis. From the studies reviewed in this paper, it is clear that certain dietary polyphenols hold great potential in the prevention and therapy of cervical cancer, because they interfere in carcinogenesis (in the initiation, development and progression) by modulating the critical processes of cellular proliferation, differentiation, apoptosis, angiogenesis and metastasis. Specifically, polyphenols inhibit the proliferation of HPV cells, through induction of apoptosis, growth arrest, inhibition of DNA synthesis and modulation of signal transduction pathways. The effects of combinations of polyphenols with chemotherapy and radiotherapy used in the treatment of cervical cancer showed results in the resistance of cervical tumor cells to chemo- and radiotherapy, one of the main problems in the treatment of cervical neoplasia that can lead to failure of the treatment because of the decreased efficiency of the therapy.", "title": "" }, { "docid": "6085fab45784706f5c99e7c316a0fc55", "text": "The localization of photosensitizers in the subcellular compartments during photodynamic therapy (PDT) plays a major role in the cell destruction; therefore, the aim of this study was to investigate the intracellular localization of Chlorin e6-PVP (Photolon™) in malignant and normal cells. Our study involves the characterization of the structural determinants of subcellular localization of Photolon, and how subcellular localization affects the selective toxicity of Photolon towards tumor cells. Using confocal laser scanning microscopy (CLSM) and fluorescent organelle probes; we examined the subcellular localization of Photolon™ in the murine colon carcinoma CT-26 and normal fibroblast (NHLC) cells. Our results demonstrated that after 30 min of incubation, the distribution of Photolon was localized mainly in the cytoplasmic organelles including the mitochondria, lysosomes, Golgi apparatus, around the nuclear envelope and also in the nucleus but not in the endoplasmic reticulum whereas in NHLC cells, Photolon was found to be localized minimally only in the nucleus not in other organelles studied. The relationship between subcellular localization of Photolon and PDT-induced apoptosis was investigated. Apoptotic cell death was judged by the formation of known apoptotic hallmarks including, the phosphatidylserine externalization (PS), PARP cleavage, a substrate for caspase-3 and the formation of apoptotic nuclei.
At the irradiation dose of 1 J/cm2, the percentage of apoptotic cells was 80%, respectively. This study provided substantial evidence that Photolon preferentially localized in the subcellular organelles in the following order: nucleus, mitochondria, lysosomes and the Golgi apparatus and subsequent photodamage of the mitochondria and lyso-somes played an important role in PDT-mediated apoptosis CT-26 cells. Our results based on the cytoplasmic organelles and the intranuclear localization extensively enhance the efficacy of PDT with appropriate photosensitizer and light dose and support the idea that PDT can contribute to elimination of malignant cells by inducing apoptosis, which is of physiological significance.", "title": "" }, { "docid": "9b5bccc259b512de43e5fe49a5b3fa21", "text": "A combination of techniques that is becoming increasingly popular is the construction of part-based object representations using the outputs of interest-point detectors. Our contributions in this paper are twofold: first, we propose a primal-sketch-based set of image tokens that are used for object representation and detection. Second, top-down information is introduced based on an efficient method for the evaluation of the likelihood of hypothesized part locations. This allows us to use graphical model techniques to complement bottom-up detection, by proposing and finding the parts of the object that were missed by the front-end feature detection stage. Detection results for four object categories validate the merits of this joint top-down and bottom-up approach.", "title": "" }, { "docid": "ac6b3d140b2e31b8b19dc37d25207eca", "text": "In this paper, a comparative study on frequency and time domain analyses for the evaluation of the seismic response of subsoil to the earthquake shaking is presented. After some remarks on the solutions given by the linear elasticity theory for this type of problem, the use of some widespread numerical codes is illustrated and the results are compared with the available theoretical predictions. Bedrock elasticity, viscous and hysteretic damping, stress-dependency of the stiffness and nonlinear behaviour of the soil are taken into account. A series of comparisons between the results obtained by the different computer programs is shown.", "title": "" }, { "docid": "ee727069682d1ed5181f05327e96aced", "text": "The problem of place recognition appears in different mobile robot navigation problems including localization, SLAM, or change detection in dynamic environments. Whereas this problem has been studied intensively in the context of robot vision, relatively few approaches are available for three-dimensional range data. In this paper, we present a novel and robust method for place recognition based on range images. Our algorithm matches a given 3D scan against a database using point features and scores potential transformations by comparing significant points in the scans. A further advantage of our approach is that the features allow for a computation of the relative transformations between scans which is relevant for registration processes. Our approach has been implemented and tested on different 3D data sets obtained outdoors. 
In several experiments we demonstrate the advantages of our approach also in comparison to existing techniques.", "title": "" }, { "docid": "2fee5493d0cec652a403f5659f6a2a2a", "text": "The lethal(3)malignant brain tumor [t(3)mbt] gene causes, when mutated, malignant growth of the adult optic neuroblasts and ganglion mother cells in the larval brain and imaginal disc overgrowth. Via overlapping deficiencies a genomic region of approximately 6.0 kb was identified, containing l(3)mbt+ gene sequences. The l(3)mbt+ gene encodes seven transcripts of 5.8 kb, 5.65 kb, 5.35 kb, 5.25 kb, 5.0 kb, 4.4 kb and 1.8 kb. The putative MBT163 protein, encompassing 1477 amino acids, is proline-rich and contains a novel zinc finger. In situ hybridizations of whole mount embryos and larval tissues revealed l(3)mbt+ RNA ubiquitously present in stage 1 embryos and throughout embryonic development in most tissues. In third instar larvae l(3)mbt+ RNA is detected in the adult optic anlagen and the imaginal discs, the tissues directly affected by l(3)mbt mutations, but also in tissues, showing normal development in the mutant, such as the gut, the goblet cells and the hematopoietic organs.", "title": "" }, { "docid": "47ddc934a733f5b2d05dcd0275c7fb06", "text": "Accurately forecasting pollution concentration of PM2.5 can provide early warning for the government to alert the persons suffering from air pollution. Many existing approaches fail at providing favorable results due to the shallow architecture of the forecasting model, which cannot learn suitable features. In addition, multiple meteorological factors increase the difficulty of understanding the influence of the PM2.5 concentration. In this paper, a deep neural network is proposed for accurately forecasting PM2.5 pollution concentration based on manifold learning. Firstly, meteorological factors are specified by the manifold learning method, reducing the dimension without any expert knowledge. Secondly, a deep belief network (DBN) is developed to learn the features of the input candidates obtained by the manifold learning and the one-day ahead PM2.5 concentration. Finally, the deep features are modeled by a regression neural network, and the local PM2.5 forecast is yielded. The addressed model is evaluated by the dataset in the period of 28/10/2013 to 31/3/2017 in Chongqing municipality of China. The study suggests that deep learning is a promising technique in PM2.5 concentration forecasting based on the manifold learning.", "title": "" }, { "docid": "f6d9efb7cfee553bc02a5303a86fd626", "text": "OBJECTIVE\nTo perform a cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students (MBI-SS), and investigate its reliability, validity and cross-cultural invariance.\n\n\nMETHODS\nThe face validity involved the participation of a multidisciplinary team. Content validity was performed. The Portuguese version was completed in 2009, on the internet, by 958 Brazilian and 556 Portuguese university students from the urban area. Confirmatory factor analysis was carried out using as fit indices: the χ²/df, the Comparative Fit Index (CFI), the Goodness of Fit Index (GFI) and the Root Mean Square Error of Approximation (RMSEA). To verify the stability of the factor solution according to the original English version, cross-validation was performed in 2/3 of the total sample and replicated in the remaining 1/3. Convergent validity was estimated by the average variance extracted and composite reliability.
The discriminant validity was assessed, and the internal consistency was estimated by the Cronbach's alpha coefficient. Concurrent validity was estimated by the correlational analysis of the mean scores of the Portuguese version and the Copenhagen Burnout Inventory, and the divergent validity was compared to the Beck Depression Inventory. The invariance of the model between the Brazilian and the Portuguese samples was assessed.\n\n\nRESULTS\nThe three-factor model of Exhaustion, Disengagement and Efficacy showed good fit (χ²/df = 8.498, CFI = 0.916, GFI = 0.902, RMSEA = 0.086). The factor structure was stable (λ:χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residues: χ²dif = 21.514, p = 0.121). Adequate convergent validity (VEM = 0.45;0.64, CC = 0.82;0.88), discriminant (ρ² = 0.06;0.33) and internal consistency (α = 0.83;0.88) were observed. The concurrent validity of the Portuguese version with the Copenhagen Inventory was adequate (r = 0.21, 0.74). The assessment of the divergent validity was impaired by the approach of the theoretical concept of the dimensions Exhaustion and Disengagement of the Portuguese version with the Beck Depression Inventory. Invariance of the instrument between the Brazilian and Portuguese samples was not observed (λ:χ²dif = 84.768, p<0.001; Cov: χ²dif = 129.206, p < 0.001; Residues: χ²dif = 518.760, p < 0.001).\n\n\nCONCLUSIONS\nThe Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant between the countries, indicating the absence of cross-cultural stability.", "title": "" }, { "docid": "32ca9711622abd30c7c94f41b91fa3f6", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" }, { "docid": "7735668d4f8407d9514211d9f5492ce6", "text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.", "title": "" }, { "docid": "f91238b11b84099cdbb16c8c4b7c75ae", "text": "This study investigates the case-based learning experience of 133 undergraduate veterinarian science students. Using qualitative methodologies from relational Student Learning Research, variation in the quality of the learning experience was identified, ranging from coherent, deep, quality experiences of the cases, to experiences that separated significant aspects, such as the online case histories, laboratory test results, and annotated images emphasizing symptoms, from the meaning of the experience.
A key outcome of this study was that a significant percentage of the students surveyed adopted a poor approach to learning with online resources in a blended experience even when their overall learning experience was related to cohesive conceptions of veterinary science, and that the difference was even more marked for less successful students. The outcomes from the study suggest that many students are unsure of how to approach the use of online resources in ways that are likely to maximise benefits for learning in blended experiences, and that the benefits from case-based learning such as authenticity and active learning can be threatened if issues closely associated with qualitative variation arising from incoherence in the experience are not addressed.", "title": "" }, { "docid": "050c701f2663f4fa85aadd65a5dc96f2", "text": "The availability of multiple, essentially complete genome sequences of prokaryotes and eukaryotes spurred both the demand and the opportunity for the construction of an evolutionary classification of genes from these genomes. Such a classification system based on orthologous relationships between genes appears to be a natural framework for comparative genomics and should facilitate both functional annotation of genomes and large-scale evolutionary studies. We describe here a major update of the previously developed system for delineation of Clusters of Orthologous Groups of proteins (COGs) from the sequenced genomes of prokaryotes and unicellular eukaryotes and the construction of clusters of predicted orthologs for 7 eukaryotic genomes, which we named KOGs after euk aryotic o rthologous g roups. The COG collection currently consists of 138,458 proteins, which form 4873 COGs and comprise 75% of the 185,505 (predicted) proteins encoded in 66 genomes of unicellular organisms. The euk aryotic o rthologous g roups (KOGs) include proteins from 7 eukaryotic genomes: three animals (the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster and Homo sapiens), one plant, Arabidopsis thaliana, two fungi (Saccharomyces cerevisiae and Schizosaccharomyces pombe), and the intracellular microsporidian parasite Encephalitozoon cuniculi. The current KOG set consists of 4852 clusters of orthologs, which include 59,838 proteins, or ~54% of the analyzed eukaryotic 110,655 gene products. Compared to the coverage of the prokaryotic genomes with COGs, a considerably smaller fraction of eukaryotic genes could be included into the KOGs; addition of new eukaryotic genomes is expected to result in substantial increase in the coverage of eukaryotic genomes with KOGs. Examination of the phyletic patterns of KOGs reveals a conserved core represented in all analyzed species and consisting of ~20% of the KOG set. This conserved portion of the KOG set is much greater than the ubiquitous portion of the COG set (~1% of the COGs). In part, this difference is probably due to the small number of included eukaryotic genomes, but it could also reflect the relative compactness of eukaryotes as a clade and the greater evolutionary stability of eukaryotic genomes. 
The updated collection of orthologous protein sets for prokaryotes and eukaryotes is expected to be a useful platform for functional annotation of newly sequenced genomes, including those of complex eukaryotes, and genome-wide evolutionary studies.", "title": "" }, { "docid": "18c885e8cb799086219585e419140ba5", "text": "Reaction-time and eye-fixation data are analyzed to investigate how people infer the kinematics of simple mechanical systems (pulley systems) from diagrams showing their static configuration. It is proposed that this mental animation process involves decomposing the representation of a pulley system into smaller units corresponding to the machine components and animating these components in a sequence corresponding to the causal sequence of events in the machine's operation. Although it is possible for people to make inferences against the chain of causality in the machine, these inferences are more difficult, and people have a preference for inferences in the direction of causality. The mental animation process reflects both capacity limitations and limitations of mechanical knowledge.", "title": "" }, { "docid": "0a732282dc782b8893628697e39c9153", "text": "Neural networks have had many great successes in recent years, particularly with the advent of deep learning and many novel training techniques. One issue that has prevented reinforcement learning from taking full advantage of scalable neural networks is that of catastrophic forgetting. The latter affects supervised learning systems when highly correlated input samples are presented, as well as when input patterns are non-stationary. However, most real-world problems are non-stationary in nature, resulting in prolonged periods of time separating inputs drawn from different regions of the input space. Unfortunately, reinforcement learning presents a worst-case scenario when it comes to precipitating catastrophic forgetting in neural networks. Meaningful training examples are acquired as the agent explores different regions of its state/action space. When the agent is in one such region, only highly correlated samples from that region are typically acquired. Moreover, the regions that the agent is likely to visit will depend on its current policy, suggesting that an agent that has a good policy may avoid exploring particular regions. The confluence of these factors means that without some mitigation techniques, supervised neural networks as function approximation in temporal-difference learning will only be applicable to the simplest test cases. In this work, we develop a feed forward neural network architecture that mitigates catastrophic forgetting by partitioning the input space in a manner that selectively activates a different subset of hidden neurons for each region of the input space. We demonstrate the effectiveness of the proposed framework on a cart-pole balancing problem for which other neural network architectures exhibit training instability likely due to catastrophic forgetting. We demonstrate that our technique produces better results, particularly with respect to a performance-stability measure.", "title": "" }, { "docid": "0f699e9f14753b2cbfb7f7a3c7057f40", "text": "There has been much recent work on training neural attention models at the sequencelevel using either reinforcement learning-style methods or by optimizing the beam. 
In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup. We also report new state of the art results on both IWSLT’14 German-English translation as well as Gigaword abstractive summarization. On the large WMT’14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art.", "title": "" } ]
scidocsrr
abb586c09275c904f91719164e593524
Sentence Ranking with the Semantic Link Network in Scientific Paper
[ { "docid": "0836e5d45582b0a0eec78234776aa419", "text": "‘Description’: ‘Microsoft will accelerate your journey to cloud computing with an agile and responsive datacenter built from your existing technology investments.’, ‘DisplayUrl’: ‘www.microsoft.com/en-us/server-cloud/datacenter/virtualization.aspx’, ‘ID’: ‘a42b0908-174e-4f25-b59c-70bdf394a9da’, ‘Title’: ‘Microsoft | Server & Cloud | Datacenter | Virtualization ...’, ‘Url’: ‘http://www.microsoft.com/en-us/server-cloud/datacenter/virtualization.aspx’, ... Data: #Topics: 228; #Candidate Labels: ~6,000; Domains: BLOGS, BOOKS, NEWS, PUBMED; Candidate labels rated by humans (0-3); Published by Lau et al. (2011). 4. Scoring Candidate Labels: Candidate Label: L = {w1, w2, ..., wm}; Scoring Function. Task: The aim of the task is to associate labels with automatically generated topics.", "title": "" } ]
[ { "docid": "ef6040561aaae594f825a6cabd4aa259", "text": "This study investigated the extent of young adults’ (N = 393; 17–30 years old) experience of cyberbullying, from the perspectives of cyberbullies and cyber-victims using an online questionnaire survey. The overall prevalence rate shows cyberbullying is still present after the schooling years. No significant gender differences were noted, however females outnumbered males as cyberbullies and cyber-victims. Overall no significant differences were noted for age, but younger participants were found to engage more in cyberbullying activities (i.e. victims and perpetrators) than the older participants. Significant differences were noted for Internet frequency with those spending 2–5 h online daily reported being more victimized and engage in cyberbullying than those who spend less than an hour daily. Internet frequency was also found to significantly predict cyber-victimization and cyberbullying, indicating that as the time spent on Internet increases, so does the chances to be bullied and to bully someone. Finally, a positive significant association was observed between cyber-victims and cyberbullies indicating that there is a tendency for cyber-victims to become cyberbullies, and vice versa. Overall it can be concluded that cyberbullying incidences are still taking place, even though they are not as rampant as observed among the younger users. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "edacac86802497e0e43c4a03bfd3b925", "text": "This paper presents a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm, which provides accurate and robust localization within the globally consistent map in real time on a standard CPU. This is achieved by firstly performing the visual-inertial extended kalman filter(EKF) to provide motion estimate at a high rate. However the filter becomes inconsistent due to the well known linearization issues. So we perform a keyframe-based visual-inertial bundle adjustment to improve the consistency and accuracy of the system. In addition, a loop closure detection and correction module is also added to eliminate the accumulated drift when revisiting an area. Finally, the optimized motion estimates and map are fed back to the EKF-based visual-inertial odometry module, thus the inconsistency and estimation error of the EKF estimator are reduced. In this way, the system can continuously provide reliable motion estimates for the long-term operation. The performance of the algorithm is validated on public datasets and real-world experiments, which proves the superiority of the proposed algorithm.", "title": "" }, { "docid": "a0c92111e9d821ffd26e08f69b434002", "text": "Cell phones are a pervasive new communication technology, especially among college students. This paper examines college students cell phone usage from a behavioral and psychological perspective. Utilizing both qualitative (focus groups) and quantitative (survey) approaches, the study suggests these individuals use the devices for a variety of purposes: to help them feel safe, for financial benefits, to manage time efficiently, to keep in touch with friends and family members, et al. The degree to which the individuals are dependent on the cell phones and what they view as the negatives of their utilization are also examined. The findings suggest people have various feelings and attitudes toward cell phone usage. This study serves as a foundation on which future studies will be built. 
2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "1880bb9c3229cab3e614ca39079c7781", "text": "Emerging low-power radio triggering techniques for wireless motes are a promising approach to prolong the lifetime of Wireless Sensor Networks (WSNs). By allowing nodes to activate their main transceiver only when data need to be transmitted or received, wake-up-enabled solutions virtually eliminate the need for idle listening, thus drastically reducing the energy toll of communication. In this paper we describe the design of a novel wake-up receiver architecture based on an innovative pass-band filter bank with high selectivity capability. The proposed concept, demonstrated by a prototype implementation, combines both frequency-domain and time-domain addressing space to allow selective addressing of nodes. To take advantage of the functionalities of the proposed receiver, as well as of energy-harvesting capabilities modern sensor nodes are equipped with, we present a novel wake-up-enabled harvesting-aware communication stack that supports both interest dissemination and converge casting primitives. This stack builds on the ability of the proposed WuR to support dynamic address assignment, which is exploited to optimize system performance. Comparison against traditional WSN protocols shows that the proposed concept allows to optimize performance tradeoffs with respect to existing low-power communication stacks.", "title": "" }, { "docid": "4d12a4269e4969148f6d5331f5d8afdd", "text": "Money laundering has become of increasing concern to law makers in recent years, principally because of its associations with terrorism. Recent legislative changes in the United Kingdom mean that auditors risk becoming state law enforcement agents in the private sector. We examine this legislation from the perspective of the changing nature of the relationship between auditors and the state, and the surveillant assemblage within which this is located. Auditors are statutorily obliged to file Suspicious Activity Reports (SARs) into an online database, ELMER, but without much guidance regarding how suspicion is determined. Criminal rather than civil or regulatory sanctions apply to auditors’ instances of non-compliance. This paper evaluates the surveillance implications of the legislation for auditors through lenses developed in the accounting and sociological literature by Brivot andGendron, Neu andHeincke, Deleuze and Guattari, and Haggerty and Ericson. It finds that auditors are generating information flows which are subsequently reassembled into discrete and virtual ‘data doubles’ to be captured and utilised by authorised third parties for unknown purposes. The paper proposes that the surveillant assemblage has extended into the space of the auditor-client relationship, but this extension remains inhibited as a result of auditors’ relatively weak level of engagement in providing SARs, thereby pointing to a degree of resistance in professional service firms regarding the deployment of regulation that compromises the foundations of this", "title": "" }, { "docid": "869e01855c8cfb9dc3e64f7f3e73cd60", "text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. 
The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.", "title": "" }, { "docid": "9ce1401e072fc09749d12f9132aa6b1e", "text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.", "title": "" }, { "docid": "2917b7b1453f9e6386d8f47129b605fb", "text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "title": "" }, { "docid": "6573629e918822c0928e8cf49f20752c", "text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. 
Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.", "title": "" }, { "docid": "1aa51d3ef39773eb3250564ae87c6205", "text": "relatedness between terms using the links found within their corresponding Wikipedia articles. Unlike other techniques based on Wikipedia, WLM is able to provide accurate measures efficiently, using only the links between articles rather than their textual content. Before describing the details, we first outline the other systems to which it can be compared. This is followed by a description of the algorithm, and its evaluation against manually-defined ground truth. The paper concludes with a discussion of the strengths and weaknesses of the new approach. Abstract", "title": "" }, { "docid": "7063d3eb38008bcd344f0ae1508cca61", "text": "The fitness of an evolutionary individual can be understood in terms of its two basic components: survival and reproduction. As embodied in current theory, trade-offs between these fitness components drive the evolution of life-history traits in extant multicellular organisms. Here, we argue that the evolution of germ-soma specialization and the emergence of individuality at a new higher level during the transition from unicellular to multicellular organisms are also consequences of trade-offs between the two components of fitness-survival and reproduction. The models presented here explore fitness trade-offs at both the cell and group levels during the unicellular-multicellular transition. When the two components of fitness negatively covary at the lower level there is an enhanced fitness at the group level equal to the covariance of components at the lower level. We show that the group fitness trade-offs are initially determined by the cell level trade-offs. However, as the transition proceeds to multicellularity, the group level trade-offs depart from the cell level ones, because certain fitness advantages of cell specialization may be realized only by the group. The curvature of the trade-off between fitness components is a basic issue in life-history theory and we predict that this curvature is concave in single-celled organisms but becomes increasingly convex as group size increases in multicellular organisms. We argue that the increasingly convex curvature of the trade-off function is driven by the initial cost of reproduction to survival which increases as group size increases. To illustrate the principles and conclusions of the model, we consider aspects of the biology of the volvocine green algae, which contain both unicellular and multicellular members.", "title": "" }, { "docid": "b66846f076d41c8be3f5921cc085d997", "text": "We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are “smoother” and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. 
Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.", "title": "" }, { "docid": "59ac2e47ed0824eeba1621673f2dccf5", "text": "In this paper we present a framework for grasp planning with a humanoid robot arm and a five-fingered hand. The aim is to provide the humanoid robot with the ability of grasping objects that appear in a kitchen environment. Our approach is based on the use of an object model database that contains the description of all the objects that can appear in the robot workspace. This database is completed with two modules that make use of this object representation: an exhaustive offline grasp analysis system and a real-time stereo vision system. The offline grasp analysis system determines the best grasp for the objects by employing a simulation system, together with CAD models of the objects and the five-fingered hand. The results of this analysis are added to the object database using a description suited to the requirements of the grasp execution modules. A stereo camera system is used for a real-time object localization using a combination of appearance-based and model-based methods. The different components are integrated in a controller architecture to achieve manipulation task goals for the humanoid robot", "title": "" }, { "docid": "af5645e4c2b37d229b525ff3bbac505f", "text": "PURPOSE OF REVIEW\nTo analyze the role of prepuce preservation in various disorders and discuss options available to reconstruct the prepuce.\n\n\nRECENT FINDINGS\nThe prepuce can be preserved in selected cases of penile degloving procedures, phimosis or hypospadias repair, and penile cancer resection. There is no clear evidence that debilitating and persistent preputial lymphedema develops after a prepuce-sparing penile degloving procedure. In fact, the prepuce can at times be preserved even if lymphedema develops. The prepuce can potentially be preserved in both phimosis and hypospadias repair. Penile cancer localized to the prepuce can be excised using Mohs' micrographic surgery without compromising survival. Reconstruction of the prepuce still remains a theoretical topic. There has been no study that has systematically evaluated efficacy of any reconstructive procedures.\n\n\nSUMMARY\nThe standard practice for preputial disorders remains circumcision. However, prepuce preservation is often technically feasible without compromising treatment. Preservative surgery combined with reconstruction may lead to better patient satisfaction and quality of life.", "title": "" }, { "docid": "7a67bccffa6222f8129a90933962e285", "text": "BACKGROUND\nPast research has found that playing a classic prosocial video game resulted in heightened prosocial behavior when compared to a control group, whereas playing a classic violent video game had no effect. Given purported links between violent video games and poor social behavior, this result is surprising. Here our aim was to assess whether this finding may be due to the specific games used. That is, modern games are experienced differently from classic games (more immersion in virtual environments, more connection with characters, etc.) and it may be that playing violent video games impacts prosocial behavior only when contemporary versions are used.\n\n\nMETHODS AND FINDINGS\nExperiments 1 and 2 explored the effects of playing contemporary violent, non-violent, and prosocial video games on prosocial behavior, as measured by the pen-drop task. 
We found that slight contextual changes in the delivery of the pen-drop task led to different rates of helping but that the type of game played had little effect. Experiment 3 explored this further by using classic games. Again, we found no effect.\n\n\nCONCLUSIONS\nWe failed to find evidence that playing video games affects prosocial behavior. Research on the effects of video game play is of significant public interest. It is therefore important that speculation be rigorously tested and findings replicated. Here we fail to substantiate conjecture that playing contemporary violent video games will lead to diminished prosocial behavior.", "title": "" }, { "docid": "8649d115dea8cb6b3353745476b5c57d", "text": "OBJECTIVES\nTo test a brief, non-sectarian program of meditation training for effects on perceived stress and negative emotion, and to determine effects of practice frequency and test the moderating effects of neuroticism (emotional lability) on treatment outcome.\n\n\nDESIGN AND SETTING\nThe study used a single-group, open-label, pre-test post-test design conducted in the setting of a university medical center.\n\n\nPARTICIPANTS\nHealthy adults (N=200) interested in learning meditation for stress-reduction were enrolled. One hundred thirty-three (76% females) completed at least 1 follow-up visit and were included in data analyses.\n\n\nINTERVENTION\nParticipants learned a simple mantra-based meditation technique in 4, 1-hour small-group meetings, with instructions to practice for 15-20 minutes twice daily. Instruction was based on a psychophysiological model of meditation practice and its expected effects on stress.\n\n\nOUTCOME MEASURES\nBaseline and monthly follow-up measures of Profile of Mood States; Perceived Stress Scale; State-Trait Anxiety Inventory (STAI); and Brief Symptom Inventory (BSI). Practice frequency was indexed by monthly retrospective ratings. Neuroticism was evaluated as a potential moderator of treatment effects.\n\n\nRESULTS\nAll 4 outcome measures improved significantly after instruction, with reductions from baseline that ranged from 14% (STAI) to 36% (BSI). More frequent practice was associated with better outcome. Higher baseline neuroticism scores were associated with greater improvement.\n\n\nCONCLUSIONS\nPreliminary evidence suggests that even brief instruction in a simple meditation technique can improve negative mood and perceived stress in healthy adults, which could yield long-term health benefits. Frequency of practice does affect outcome. Those most likely to experience negative emotions may benefit the most from the intervention.", "title": "" }, { "docid": "d51f0b51f03e310dd183e3a7cb199288", "text": "Traditional vision-based localization methods such as visual SLAM suffer from practical problems in outdoor environments such as unstable feature detection and inability to perform location recognition under lighting, perspective, weather and appearance change. Additionally map construction on a large scale in these systems presents its own challenges. In this work, we present a novel method for precisely localizing vehicles on the road using signs marked on the road (road markings), which have the advantage of being distinct and easy to detect, their detection being robust under changes in lighting and weather. Our method uses corners detected on road markings to perform localization in global coordinates. 
The method consists of two phases - a mapping phase when a high-quality GPS device is used to automatically survey road marks and add them to a light-weight “map” or database, and a localization phase where road mark detection and look-up in the map, combined with visual odometry, produces precise localization. We present experiments using a real-time implementation operating in a car that demonstrates the improved localization robustness and accuracy of our system even when using road marks alone. However, in this case the trajectory between road marks has to be filled-in by visual odometry, which contributes drift. Hence, we also present a mechanism for combining road-mark-based maps with sparse feature-based maps that results in greater accuracy still. We see our use of road marks as a significant step in the general trend of using higher-level features for improved localization performance irrespective of environment conditions.", "title": "" }, { "docid": "215b65a1777fd4076c97770ad339c59f", "text": "Interactive visualization requires the translation of data into a screen space of limited resolution. While currently ignored by most visualization models, this translation entails a loss of information and the introduction of a number of artifacts that can be useful, (e.g., aggregation, structures) or distracting (e.g., over-plotting, clutter) for the analysis. This phenomenon is observed in parallel coordinates, where overlapping lines between adjacent axes form distinct patterns, representing the relation between variables they connect. However, even for a small number of dimensions, the challenge is to effectively convey the relationships for all combinations of dimensions. The size of the dataset and a large number of dimensions only add to the complexity of this problem. To address these issues, we propose Pargnostics, parallel coordinates diagnostics, a model based on screen-space metrics that quantify the different visual structures. Pargnostics metrics are calculated for pairs of axes and take into account the resolution of the display as well as potential axis inversions. Metrics include the number of line crossings, crossing angles, convergence, overplotting, etc. To construct a visualization view, the user can pick from a ranked display showing pairs of coordinate axes and the structures between them, or examine all possible combinations of axes at once in a matrix display. Picking the best axes layout is an NP-complete problem in general, but we provide a way of automatically optimizing the display according to the user's preferences based on our metrics and model.", "title": "" }, { "docid": "b6f026f8b2e37406ee68b9214fb82955", "text": "Human visual behaviour has significant potential for activity recognition and computational behaviour analysis, but previous works focused on supervised methods and recognition of predefined activity classes based on short-term eye movement recordings. We propose a fully unsupervised method to discover users' everyday activities from their long-term visual behaviour. Our method combines a bag-of-words representation of visual behaviour that encodes saccades, fixations, and blinks with a latent Dirichlet allocation (LDA) topic model. We further propose different methods to encode saccades for their use in the topic model. We evaluate our method on a novel long-term gaze dataset that contains full-day recordings of natural visual behaviour of 10 participants (more than 80 hours in total). 
We also provide annotations for eight sample activity classes (outdoor, social interaction, focused work, travel, reading, computer work, watching media, eating) and periods with no specific activity. We show the ability of our method to discover these activities with performance competitive with that of previously published supervised methods.", "title": "" }, { "docid": "c07516bc86b7a082bcc2bd405757d387", "text": "The trend towards more commercial-off-the-shelf (COTS) components in complex safety-critical systems is increasing the difficulty of verifying system correctness. Runtime verification (RV) is a lightweight technique to verify that certain properties hold over execution traces. RV is usually implemented as runtime monitors that can be used as runtime fault detectors or test oracles to analyze a system under test for bad behaviors. Most existing RV methods utilize some form of system or code instrumentation and thus are not designed to monitor potentially black-box COTS components. This thesis presents a suitable runtime monitoring framework for monitoring safety-critical embedded systems with black-box components. We provide an end-to-end framework including proven correct monitoring algorithms, a formal specification language with semi-formal techniques to map the system onto our formal system trace model, specification design patterns to aid translating informal specifications into the formal specification language, and a safety-case pattern example showing the argument that our monitor design can be safely integrated with a target system. We utilized our monitor implementation to check test logs from several system tests. We show the monitor being used to check system test logs offline for interesting properties. We also performed real-time replay of logs from a system network bus, demonstrating the feasibility of our embedded monitor implementation in real-time operation.", "title": "" } ]
scidocsrr
591d57b53ed828ce4587b1b8deaaaf29
A Meaning-based English Math Word Problem Solver with Understanding, Reasoning and Explanation
[ { "docid": "59e29fa12539757b5084cab8f1e1b292", "text": "This article addresses the problem of understanding mathematics described in natural language. Research in this area dates back to early 1960s. Several systems have so far been proposed to involve machines to solve mathematical problems of various domains like algebra, geometry, physics, mechanics, etc. This correspondence provides a state of the art technical review of these systems and approaches proposed by different research groups. A unified architecture that has been used in most of these approaches is identified and differences among the systems are highlighted. Significant achievements of each method are pointed out. Major strengths and weaknesses of the approaches are also discussed. Finally, present efforts and future trends in this research area are presented.", "title": "" }, { "docid": "8fd830d62cceb6780d0baf7eda399fdf", "text": "Little work from the Natural Language Processing community has targeted the role of quantities in Natural Language Understanding. This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language. We investigate two different tasks of numerical reasoning. First, we consider Quantity Entailment, a new task formulated to understand the role of quantities in general textual inference tasks. Second, we consider the problem of automatically understanding and solving elementary school math word problems. In order to address these quantitative reasoning problems we first develop a computational approach which we show to successfully recognize and normalize textual expressions of quantities. We then use these capabilities to further develop algorithms to assist reasoning in the context of the aforementioned tasks.", "title": "" } ]
[ { "docid": "eb761eb499b2dc82f7f2a8a8a5ff64a7", "text": "We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc). Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.", "title": "" }, { "docid": "4e9b1776436950ed25353a8731eda76a", "text": "This paper presents the design and implementation of VibeBin, a low-cost, non-intrusive and easy-to-install waste bin level detection system. Recent popularity of Internet-of-Things (IoT) sensors has brought us unprecedented opportunities to enable a variety of new services for monitoring and controlling smart buildings. Indoor waste management is crucial to a healthy environment in smart buildings. Measuring the waste bin fill-level helps building operators schedule garbage collection more responsively and optimize the quantity and location of waste bins. Existing systems focus on directly and intrusively measuring the physical quantities of the garbage (weight, height, volume, etc.) or its appearance (image), and therefore require careful installation, laborious calibration or labeling, and can be costly. Our system indirectly measures fill-level by sensing the changes in motor-induced vibration characteristics on the outside surface of waste bins. VibeBin exploits the physical nature of vibration resonance of the waste bin and the garbage within, and learns the vibration features of different fill-levels through a few garbage collection (emptying) cycles in a completely unsupervised manner. VibeBin identifies vibration features of different fill-levels by clustering historical vibration samples based on a custom distance metric which measures the dissimilarity between two samples. We deploy our system on eight waste bins of different types and sizes, and show that under normal usage and real waste, it can deliver accurate level measurements after just 3 garbage collection cycles. The average F-score (harmonic mean of precision and recall) of measuring empty, half, and full levels achieves 0.912. A two-week deployment also shows that the false positive and false negative events are satisfactorily rare.", "title": "" }, { "docid": "1c8b8d8322e403fae0d2f361bc00c969", "text": "We explore several image processing methods to automatically identify the make of a vehicle based focused on the manufacturer’s iconic logo. Our findings reveal that large variations in brightness, vehicle features in the foreground, and specular reflections render the scale-invariant feature transform (SIFT) approach practically useless. 
Methods such as Fourier shape descriptors and inner structure mean square error analysis are able to achieve more reliable results.", "title": "" }, { "docid": "1d9e5ea84617c934083f607561a196e0", "text": "Coherent optical OFDM (CO-OFDM) has recently been proposed and the proof-of-concept transmission experiments have shown its extreme robustness against chromatic dispersion and polarization mode dispersion. In this paper, we first review the theoretical fundamentals for CO-OFDM and its channel model in a 2x2 MIMO-OFDM representation. We then present various design choices for CO-OFDM systems and perform the nonlinearity analysis for RF-to-optical up-converter. We also show the receiver-based digital signal processing to mitigate self-phase-modulation (SPM) and Gordon-Mollenauer phase noise, which is equivalent to the midspan phase conjugation.", "title": "" }, { "docid": "bdf3417010f59745e4aaa1d47b71c70e", "text": "Recent studies witness the success of Bag-of-Features (BoF) frameworks for video based human action recognition. The detection and description of local interest regions are two fundamental problems in BoF framework. In this paper, we propose a motion boundary based sampling strategy and spatialtemporal (3D) co-occurrence descriptors for action video representation and recognition. Our sampling strategy is partly inspired by the recent success of dense trajectory (DT) based features [1] for action recognition. Compared with DT, we densely sample spatial-temporal cuboids along motion boundary which can greatly reduce the number of valid trajectories while preserve the discriminative power. Moreover, we develop a set of 3D co-occurrence descriptors which take account of the spatial-temporal context within local cuboids and deliver rich information for recognition. Furthermore, we decompose each 3D co-occurrence descriptor at pixel level and bin level and integrate the decomposed components with a multi-channel framework, which can improve the performance significantly. To evaluate the proposed methods, we conduct extensive experiments on three benchmarks including KTH, YouTube and HMDB51. The results show that our sampling strategy significantly reduces the computational cost of point tracking without degrading performance. Meanwhile, we achieve superior performance than the state-ofthe-art methods. We report 95.6% on KTH, 87.6% on YouTube and 51.8% on HMDB51.", "title": "" }, { "docid": "94186f28a550878aa564954d723b06a9", "text": "Color transfer between images uses the statistics information of image effectively. We present a novel approach of local color transfer between images based on the simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between target and source image. The user can specify the correspondences of local region using scribes, which more accurately transfers the target color to the source image while smoothly preserving the boundaries, and exhibits more natural output results. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target images to transfer the color in different regions in the source image. Moreover, our algorithm does not require to choose the same color style and image size between source and target images. We propose the sub-sampling to reduce the computational load. Comparing with other approaches, our algorithm is much better in color blending in the input data. 
Our approach preserves the other color details in the source image. Various experimental results show that our approach specifies the correspondences of local color region in source and target images. And it expresses the intention of users and generates more actual and natural results of visual effect.", "title": "" }, { "docid": "4ca4ccd53064c7a9189fef3e801612a0", "text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.", "title": "" }, { "docid": "38524d91bcff648f96f5d693425dff7f", "text": "This paper presents a predictive current control method and its application to a voltage source inverter. The method uses a discrete-time model of the system to predict the future value of the load current for all possible voltage vectors generated by the inverter. The voltage vector which minimizes a quality function is selected. The quality function used in this work evaluates the current error at the next sampling time. The performance of the proposed predictive control method is compared with hysteresis and pulsewidth modulation control. The results show that the predictive method controls very effectively the load current and performs very well compared with the classical solutions", "title": "" }, { "docid": "9b2f4394cabd31008773049c32dea963", "text": "Twenty-two decision tree, nine statistical, and two neural network algorithms are compared on thirty-two datasets in terms of classification accuracy, training time, and (in the case of trees) number of leaves. Classification accuracy is measured by mean error rate and mean rank of error rate. Both criteria place a statistical, spline-based, algorithm called POLYCLSSS at the top, although it is not statistically significantly different from twenty other algorithms. Another statistical algorithm, logistic regression, is second with respect to the two accuracy criteria. The most accurate decision tree algorithm is QUEST with linear splits, which ranks fourth and fifth, respectively. Although spline-based statistical algorithms tend to have good accuracy, they also require relatively long training times. POLYCLASS, for example, is third last in terms of median training time. It often requires hours of training compared to seconds for other algorithms. The QUEST and logistic regression algorithms are substantially faster. Among decision tree algorithms with univariate splits, C4.5, IND-CART, and QUEST have the best combinations of error rate and speed. But C4.5 tends to produce trees with twice as many leaves as those from IND-CART and QUEST.", "title": "" }, { "docid": "545562f49534f9cf502f420e2e6fa420", "text": "Automatic optimization of spoken dialog management policies that are robust to environmental noise has long been the goal for both academia and industry. Approaches based on reinforcement learning have been proved to be effective. However, the numerical representation of dialog policy is human-incomprehensible and difficult for dialog system designers to verify or modify, which limits its practical application. In this paper we propose a novel framework for optimizing dialog policies specified in domain language using genetic algorithm. 
The human-interpretable representation of policy makes the method suitable for practical employment. We present learning algorithms using user simulation and real human-machine dialogs respectively. Empirical experimental results are given to show the effectiveness of the proposed approach.", "title": "" }, { "docid": "e5b125bdb5a17cbe926c03c3bac6935c", "text": "We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state of the art performance on each of these datasets.", "title": "" }, { "docid": "913b4f19a98ef3466b13d37ced3b2134", "text": "In this paper we present DAML-S, a DAML+OIL ontology for describing the properties and capabilities of Web Services. Web Services – Web-accessible programs and devices – are garnering a great deal of interest from industry, and standards are emerging for low-level descriptions of Web Services. DAML-S complements this effort by providing Web Service descriptions at the application layer, describing what a service can do, and not just how it does it. In this paper we describe three aspects of our ontology: the service profile, the process model, and the service grounding. The paper focuses on the grounding, which connects our ontology with low-level XML-based descriptions of Web Services. 1 Services on the Semantic Web The Semantic Web [2] is rapidly becoming a reality through the development of Semantic Web markup languages such as DAML+OIL [9]. These markup languages enable the creation of arbitrary domain ontologies that support the unambiguous description of Web content. Web Services [15] – Web-accessible programs and devices – are among the most important resources on the Web, not only to provide information to a user, but to enable a user to effect change in the world. Web Services are garnering a great deal of interest from industry, and standards are being developed for low-level descriptions of Web Services. Languages such as WSDL (Web Service Description Language) provide a communication level description of the messages and protocols used by a Web Service. To complement this effort, our interest is in developing semantic markup that will sit at the application level above WSDL, and describe what is being sent across the wires and why, not just how it is being sent. We are developing a DAML+OIL ontology for Web Services, called DAML-S [5], with the objective of making Web Services computer-interpretable and hence enabling the following tasks [15]: discovery, i.e. 
locating Web Services (typically through a registry service) that provide a particular service and that adhere to specified constraints; invocation or activation and execution of an identified service by an agent or other service; interoperation, i.e. breaking down interoperability barriers through semantics, and the automatic insertion of message parameter translations between clients and services [10, 13, 22]; composition of new services through automatic selection, composition and interoperation of existing services [15, 14]; verification of service properties [19]; and execution monitoring, i.e. tracking the execution of complex or composite tasks performed by a service or a set of services, thus identifying failure cases, or providing explanations of different execution traces. To make use of a Web Service, a software agent needs a computer-interpretable description of the service, and the means by which it is accessed. This paper describes a collaborative effort by BBN Technologies, Carnegie Mellon University, Nokia, Stanford University, SRI International, and Yale University, to define the DAML-S Web Services ontology. An earlier version of the DAML-S specification is described in [5]; an updated version of DAML-S is presented at http://www.daml.org/services/daml-s/2001/10/. In this paper we briefly summarize and update this specification, and discuss the important problem of the grounding, i.e. how to translate what is being sent in a message to or from a service into how it is to be sent. In particular, we present the linking of DAML-S to the Web Services Description Language (WSDL). DAML-S complements WSDL, by providing an abstract or application level description lacking in WSDL. 2 An Upper Ontology for Services In DAML+OIL, abstract categories of entities, events, etc. are defined in terms of classes and properties. DAML-S defines a set of classes and properties, specific to the description of services, within DAML+OIL. The class Service is at the top of the DAML-S ontology. Service properties at this level are very general. The upper ontology for services is silent as to what the particular subclasses of Service should be, or even the conceptual basis for structuring this taxonomy, but it is expected that the taxonomy will be structured according to functional and domain differences and market needs. For example, one might imagine a broad subclass, B2C-transaction, which would encompass services for purchasing items from retail Web sites, tracking purchase status, establishing and maintaining accounts with the sites, and so on. The ontology of services provides two essential types of knowledge about a service, characterized by the questions: – What does the service require of agents, and provide for them? This is provided by the profile, a class that describes the capabilities and parameters of the service. We say that the class Service presents a ServiceProfile. – How does it work? The answer to this question is given in the model, a class that describes the workflow and possible execution paths of the service. Thus, the class Service is describedBy a ServiceModel The ServiceProfile provides information about a service that can be used by an agent to determine if the service meets its rough needs, and if it satisfies constraints such as security, locality, affordability, quality-requirements, etc. 
In contrast, the ServiceModel enables an agent to: (1) perform a more in-depth analysis of whether the service meets its needs; (2) compose service descriptions from multiple services to perform a specific task; (3) coordinate the activities of different agents; and (4) monitor the execution of the service. Generally speaking, the ServiceProfile provides the information needed for an agent to discover a service, whereas the ServiceModel provides enough information for an agent to make use of a service. In the following sections we discuss the service profile and the service model in greater detail, and introduce the service grounding, which describes how agents can communicate with and thus invoke the service.", "title": "" }, { "docid": "52fb72d1b6f5384baa76e76aae2eeee0", "text": "Data mining techniques have been successfully applied in stock, insurance, medicine, banking and retailing domains. In the sport domain, for transforming sport data into actionable knowledge, coaches can use data mining techniques to plan training sessions more effectively, and to reduce the impact of testing activity on athletes. This paper presents one such model, which uses clustering techniques, such as improved K-Means, Expectation-Maximization (EM), DBSCAN, COBWEB and hierarchical clustering approaches to analyze sport physiological data collected during incremental tests. Through analyzing the progress of a test session, the authors assign the tested athlete to a group of athletes and evaluate these groups to support the planning of training sessions.", "title": "" }, { "docid": "fd7799d569bdc4ad48a88070974f6c13", "text": "This paper presents a new large scale dataset targeting evaluation of local shape descriptors and 3d object recognition algorithms. The dataset consists of point clouds and triangulated meshes from 292 physical scenes taken from 11 different views, a total of approximately 3204 views. Each of the physical scenes contain 10 occluded objects resulting in a dataset with 32040 unique object poses and 45 different object models. The 45 object models are full 360 degree models which are scanned with a high precision structured light scanner and a turntable. All the included objects belong to different geometric groups, concave, convex, cylindrical and flat 3D object models. The object models have varying amount of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows as expected that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat and cylindrical objects. It is our objective that this dataset contributes to the future development of next generation of 3D object recognition algorithms. The dataset is public available at http://roboimagedata.compute.dtu.dk/.", "title": "" }, { "docid": "f47ef0357ba3cb0e6a25be8fc3758a01", "text": "In real-time systems such as automotives, a distribution system is used to increase the reliability of the system. As the demand and complexity of the distribution system have increased, several automotive communication protocols have been introduced such as LIN, CAN, and FlexRay. Each node of the system chooses the communication protocol that is suitable for the specific purpose. 
Each node doesn't need to have all of communication protocols because of cost, space, efficiency, and other factors. Therefore, the gateway system was introduced in the automotive system and has became one of the most important components. The gateway makes possible node-to-node communicate over different communication protocols. However, the gateway system has high probability of error because each protocol has different features such as signaling rate, data length, and so on. Moreover, it is difficult to detect the reason and location of errors. If the gateway reports the protocol conversion result when each protocol is converted into another protocol, this report helps developers find the reason and location of errors to debug errors easily. In this paper, we implement the gateway system with a diagnostic function. LIN, CAN, and FlexRay are used as communication protocols.", "title": "" }, { "docid": "37cca578319bd55d0784c24fc9773913", "text": "Natural DNA can encode complexity on an enormous scale. Researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings, i.e. encodings that map the genotype to the phenotype through a process of growth from a small starting point to a mature form. A major challenge in in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies. In this paper, a novel abstraction of natural development, called Compositional Pattern Producing Networks (CPPNs), is proposed. Unlike currently accepted abstractions such as iterative rewrite systems and cellular growth simulations, CPPNs map to the phenotype without local interaction, that is, each individual component of the phenotype is determined independently of every other component. Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.", "title": "" }, { "docid": "ef208f640807a377c4301fb22cd172cb", "text": "Providing access to relevant biomedical literature in a clinical setting has the potential to bridge a critical gap in evidence-based medicine. Here, our goal is specifically to provide relevant articles to clinicians to improve their decision-making in diagnosing, treating, and testing patients. To this end, the TREC 2014 Clinical Decision Support Track evaluated a system’s ability to retrieve relevant articles in one of three categories (Diagnosis, Treatment, Test) using an idealized form of a patient medical record . Over 100 submissions from over 25 participants were evaluated on 30 topics, resulting in over 37k relevance judgments. In this article, we provide an overview of the task, a survey of the information retrieval methods employed by the participants, an analysis of the results, and a discussion on the future directions for this challenging yet important task.", "title": "" }, { "docid": "ce0cfd1dd69e235f942b2e7583b8323b", "text": "Increasing use of the World Wide Web as a B2C commercial tool raises interest in understanding the key issues in building relationships with customers on the Internet. Trust is believed to be the key to these relationships. 
Given the differences between a virtual and a conventional marketplace, antecedents and consequences of trust merit re-examination. This research identifies a number of key factors related to trust in the B2C context and proposes a framework based on a series of underpinning relationships among these factors. The findings in this research suggest that people are more likely to purchase from the web if they perceive a higher degree of trust in e-commerce and have more experience in using the web. Customer’s trust levels are likely to be influenced by the level of perceived market orientation, site quality, technical trustworthiness, and user’s web experience. People with a higher level of perceived site quality seem to have a higher level of perceived market orientation and trustworthiness towards e-commerce. Furthermore, people with a higher level of trust in e-commerce are more likely to participate in e-commerce. Positive ‘word of mouth’, money back warranty and partnerships with well-known business partners, rank as the top three effective risk reduction tactics. These findings complement the previous findings on e-commerce and shed light on how to establish a trust relationship on the World Wide Web.  2003 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
357eff2ff2aaed72bedd619fad1d4577
Real-Time Movie-Induced Discrete Emotion Recognition from EEG Signals
[ { "docid": "fb2ce776c503168e82cc3ffac9c205dd", "text": "Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data in linearly independent components (IC), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to the existing automated solutions the proposed method has two main advantages: First, it does not depend on direct recording of artifact signals, which then, e.g. have to be subtracted from the contaminated EEG. Second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.", "title": "" } ]
[ { "docid": "c4a2e600e54fc42e878897e5cda40ac7", "text": "Convolutional-deconvolution networks can be adopted to perform end-to-end saliency detection. But, they do not work well with objects of multiple scales. To overcome such a limitation, in this work, we propose a recurrent attentional convolutional-deconvolution network (RACDNN). Using spatial transformer and recurrent network units, RACDNN is able to iteratively attend to selected image sub-regions to perform saliency refinement progressively. Besides tackling the scale problem, RACDNN can also learn context-aware features from past iterations to enhance saliency refinement in future iterations. Experiments on several challenging saliency detection datasets validate the effectiveness of RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection methods.", "title": "" }, { "docid": "1fab94344313400bbef96c81bc14017b", "text": "The American Society for Apheresis (ASFA) Journal of Clinical Apheresis (JCA) Special Issue Writing Committee is charged with reviewing, updating, and categorizing indications for the evidence-based use of therapeutic apheresis in human disease. Since the 2007 JCA Special Issue (Fourth Edition), the Committee has incorporated systematic review and evidence-based approaches in the grading and categorization of apheresis indications. This Seventh Edition of the JCA Special Issue continues to maintain this methodology and rigor to make recommendations on the use of apheresis in a wide variety of diseases/conditions. The JCA Seventh Edition, like its predecessor, has consistently applied the category and grading system definitions in the fact sheets. The general layout and concept of a fact sheet that was used since the fourth edition has largely been maintained in this edition. Each fact sheet succinctly summarizes the evidence for the use of therapeutic apheresis in a specific disease entity. The Seventh Edition discusses 87 fact sheets (14 new fact sheets since the Sixth Edition) for therapeutic apheresis diseases and medical conditions, with 179 indications, which are separately graded and categorized within the listed fact sheets. Several diseases that are Category IV which have been described in detail in previous editions and do not have significant new evidence since the last publication are summarized in a separate table. The Seventh Edition of the JCA Special Issue serves as a key resource that guides the utilization of therapeutic apheresis in the treatment of human disease. J. Clin. Apheresis 31:149-162, 2016. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "2f2be97ad06ded172333c29b32fd3f0d", "text": "Measurement uncertainty is traditionally represented in the form of expanded uncertainty as defined through the Guide to the Expression of Uncertainty in Measurement (GUM). The International Organization for Standardization GUM represents uncertainty through confidence intervals based on the variances and means derived from probability density functions. A new approach to the evaluation of measurement uncertainty based on the polynomial chaos theory is presented and compared with the traditional GUM method", "title": "" }, { "docid": "d1041afcb50a490034740add2cce3f0d", "text": "Inverse synthetic aperture radar imaging of moving targets with a stepped frequency waveform presents unique challenges. Intra-step target motion introduces phase discontinuities between frequency bands, which in turn produce degraded range side lobes. 
Frequency stitching of the stepped-frequency waveform to emulate a contiguous bandwidth can dramatically reduce the effective pulse repetition frequency, which then may impact the maximum target size that can be unambiguously measured and imaged via ISAR. This paper analyzes these effects and validates results via simulated data.", "title": "" }, { "docid": "19cc879d09bb01ae363b532ef9056ae8", "text": "This paper proposes a system that can detect and rephrase profanity in Chinese text. Rather than just masking detected profanity, we want to revise the input sentence by using inoffensive words while keeping their original meanings. 29 such rephrasing rules were invented after observing sentences on real-world social websites. The overall accuracy of the proposed system is 85.56%.", "title": "" }, { "docid": "aa23e075bbd0f87ae8a8a9eadae4e697", "text": "Mammogram classification is directly related to computer-aided diagnosis of breast cancer. Traditional methods require great effort to annotate the training data by costly manual labeling and specialized computational models to detect these annotations during testing. Inspired by the success of using deep convolutional features for natural image analysis and multi-instance learning for labeling a set of instances/patches, we propose end-to-end trained deep multi-instance networks for mass classification based on the whole mammogram without the aforementioned costly need to annotate the training data. We explore three different schemes to construct deep multi-instance networks for whole mammogram classification. Experimental results on the INbreast dataset demonstrate the robustness of the proposed deep networks compared to previous work using segmentation and detection annotations in the training.", "title": "" }, { "docid": "efb9686dbd690109e8e5341043648424", "text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.", "title": "" }, { "docid": "e06e2690d53892918c3deb9db35e34d1", "text": "There is a growing demand for accurate high-resolution land cover maps in many fields, e.g., in land-use planning and biodiversity conservation. Such maps have been developed using Object-Based Image Analysis (OBIA) methods, which usually reach good accuracies, but require high human supervision, and the best configuration for one image can hardly be extrapolated to a different image. Recently, deep learning Convolutional Neural Networks (CNNs) have shown outstanding results in object recognition in the field of computer vision. However, they have not been fully explored yet in land cover mapping for detecting species of high biodiversity conservation interest. 
This paper analyzes the potential of CNN-based methods for plant species detection using free high-resolution Google Earth images and provides an objective comparison with state-of-the-art OBIA methods. We consider as case study the detection of Ziziphus lotus shrubs, which are protected as a priority habitat under the European Union Habitats Directive. According to our results, compared to OBIA-based methods, the proposed CNN-based detection model, in combination with data-augmentation, transfer learning and pre-processing, achieves higher performance with less human intervention, and the knowledge it acquires in the first image can be transferred to other images, which makes the detection process very fast. The provided methodology can be systematically reproduced for other species detection.", "title": "" }, { "docid": "393513f676132d333bb1ebff884da7b7", "text": "This paper reports an investigation of some methods for isolating, or segmenting, characters during the reading of machine-printed text by optical character recognition systems. Two new segmentation algorithms using feature extraction techniques are presented; both are intended for use in the recognition of machine-printed lines of 10-, 11- and 12-pitch serif-type multifont characters. One of the methods, called quasi-topological segmentation, bases the decision to “section” a character on a combination of feature-extraction and character-width measurements. The other method, topological segmentation, involves feature extraction alone. The algorithms have been tested with an evaluation method that is independent of any particular recognition system. Test results are based on application of the algorithm to upper-case alphanumeric characters gathered from print sources that represent the existing world of machine printing. The topological approach demonstrated better performance on the test data than did the quasi-topological approach. Introduction: When character recognition systems are structured to recognize one character at a time, some means must be provided to divide the incoming data stream into segments that define the beginning and end of each character. Writing about this aspect of pattern recognition in his review article, G. Nagy [1] stated that “object isolation is all too often ignored in laboratory studies. Yet touching characters are responsible for the majority of errors in the automatic reading of both machine-printed and hand-printed text. . . . ” The importance of the touching-character problem in the design of practical character recognition machines motivated the laboratory study reported in this paper. We present two new algorithms for separating upper-case serif characters, develop a general philosophy for evaluating the effectiveness of segmentation algorithms, and evaluate the performance of our algorithms when they are applied to 10-, 11- and 12-pitch alphanumeric characters. The segmentation algorithms were developed specifically for potential use with recognition systems that use a raster-type scanner to produce an analog video signal that is digitized before presentation of the data to the recognition logic. The raster is assumed to move from right to left across a line of printed characters and to make approximately 20 vertical scans per character. 
A paper on the IBM 1975 Optical Page Reader [2] gives one example of how the approach has been implemented. Other approaches to recognition technology may not require that decisions be made to identify the beginning and end of characters. Nevertheless, the performance of any recognition system is affected by the presence of touching characters, and the design of recognition algorithms must take the problem into account (see Clayden, Clowes and Parks [3]). Simple character recognition systems of the type we are concerned with perform segmentation by requiring that bit patterns of characters be separated by scans containing no “black” bits. However, this method is rarely adequate to separate characters printed in the common business-machine and typewriter fonts. These fonts, after all, were not designed with machine recognition in mind; but they are nevertheless the fonts it is most desirable for a machine to be able to recognize. In the 12-pitch, serif-type fonts examined for the present study, up to 35 percent of the segments occurred not at blank scans, but within touching character pairs.", "title": "" }, { "docid": "61af1eead475eb4489b4a421fb9cbb09", "text": "This article describes a reliable gateway for in-vehicle networks. Such networks include local interconnect networks, controller area networks, and FlexRay. There is some latency when transferring a message from one node (source) to another node (destination). A high probability of error exists due to different protocol specifications such as baud rate and message frame format. Therefore, deploying a reliable gateway is a challenge to the automotive industry. We propose a reliable gateway based on the OSEK/VDX components for in-vehicle networks. We also examine the gateway system developed, and then we evaluate the performance of our proposed system.", "title": "" }, { "docid": "210e22e098340e4f858b4ceab1c643e6", "text": "Dimethylsulfoxide (DMSO) controlled puff induction and repression (or non-induction) in larval polytene chromosomes of Chironomus tentans were studied for the case of the Balbiani rings (BR). A characteristic reaction pattern, involving BR 1, BR 2 and BR 3, all in salivary gland chromosome IV, was found. In vivo exposure of 4th instar larvae (not prepupae) to 10% DMSO at 18° C first evokes an over-stimulation of BR 3, while DMSO-stimulation of puffing at BR 1 and BR 2 always follows that of BR 3. After removal of the drug, a rapid uniform collapse of all puffs occurs, thus more or less restoring the banding pattern of all previously decondensed chromosome segments. Recovery proceeds as BR's and other puffs reappear. By observing the restoration, one can locate the site from which a BR (puff) originates. BR 2, which is normally the most active non-ribosomal gene locus in untreated larvae, here serves as an example. As the sizes of BR 3, BR 1 and BR 2 change, so do the quantities of the transcriptional products in these gene loci (and vice versa), as estimated electron-microscopically in ultrathin sections and autoradiographically in squash preparations. In autoradiograms, the DMSO-stimulated BRs exhibit the most dense concentration of silver grains and therefore the highest rate of transcriptional activity. In DMSO-repressed BRs (and other puffs) the transcription of the locus specific genes is not completely shut off. In chromosomes from nuclei with high labelling intensities the repressed BRs (and other puffs) always exhibit a low level of 3H-uridine incorporation in vivo.
The absence of cytologically visible BR (puff) formation therefore does not necessarily indicate complete transcriptional inactivity. Typically, before the stage of puff formation the 3H-uridine labelling first appears in the interband-like regions.", "title": "" }, { "docid": "79b3ed4c5e733c73b5e7ebfdf6069293", "text": "This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.", "title": "" }, { "docid": "47c88bb234a6e21e8037a67e6dd2444f", "text": "Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin’s theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.", "title": "" }, { "docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c", "text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. 
The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.", "title": "" }, { "docid": "a0f8af71421d484cbebb550a0bf59a6d", "text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.", "title": "" }, { "docid": "886c284d72a01db9bc4eb9467e14bbbb", "text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. 
While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.", "title": "" }, { "docid": "1874bd466665e39dbb4bd28b2b0f0d6e", "text": "Pattern recognition encompasses two fundamental tasks: description and classification. Given an object to analyze, a pattern recognition system first generates a description of it (i.e., the pattern) and then classifies the object based on that description (i.e., the recognition). Two general approaches for implementing pattern recognition systems, statistical and structural, employ different techniques for description and classification. Statistical approaches to pattern recognition use decision-theoretic concepts to discriminate among objects belonging to different groups based upon their quantitative features. Structural approaches to pattern recognition use syntactic grammars to discriminate among objects belonging to different groups based upon the arrangement of their morphological (i.e., shape-based or structural) features. Hybrid approaches to pattern recognition combine aspects of both statistical and structural pattern recognition. Structural pattern recognition systems are difficult to apply to new domains because implementation of both the description and classification tasks requires domain knowledge. Knowledge acquisition techniques necessary to obtain domain knowledge from experts are tedious and often fail to produce a complete and accurate knowledge base. Consequently, applications of structural pattern recognition have been primarily restricted to domains in which the set of useful morphological features has been established in the literature (e.g., speech recognition and character recognition) and the syntactic grammars can be composed by hand (e.g., electrocardiogram diagnosis). To overcome this limitation, a domain-independent approach to structural pattern recognition is needed that is capable of extracting morphological features and performing classification without relying on domain knowledge. A hybrid system that employs a statistical classification technique to perform discrimination based on structural features is a natural solution. While a statistical classifier is inherently domain independent, the domain knowledge necessary to support the description task can be eliminated with a set of generally-useful morphological features. Such a set of morphological features is suggested as the foundation for the development of a suite of structure detectors to perform generalized feature extraction for structural pattern recognition in time-series data. 
The ability of the suite of structure detectors to generate features useful for structural pattern recognition is evaluated by comparing the classification accuracies achieved when using the structure detectors versus commonly-used statistical feature extractors. Two real-world databases with markedly different characteristics and established ground truth serve as sources of data for the evaluation. The classification accuracies achieved using the features extracted by the structure detectors were consistently as good as or better than the classification accuracies achieved when using the features generated by the statistical feature extractors, thus demonstrating that the suite of structure detectors effectively performs generalized feature extraction for structural pattern recognition in time-series data.", "title": "" }, { "docid": "3601a56b6c68864da31ac5aaa67bff1a", "text": "Information asymmetry exists amongst stakeholders in the current food supply chain. Lack of standardization in data format, lack of regulations, and siloed, legacy information systems exacerbate the problem. Global agriculture trade is increasing, creating a greater need for traceability in the global supply chain. This paper introduces Harvest Network, a theoretical end-to-end ("farm-to-fork") food traceability application integrating the Ethereum blockchain and IoT devices exchanging GS1 message standards. The goal is to create a distributed ledger accessible to all stakeholders in the supply chain. Our design effort creates a basic framework (artefact) for building a prototype or simulation using existing technologies and protocols [1]. The next step is for industry practitioners and researchers to apply AGILE methods for creating working prototypes and advanced projects that bring about greater transparency.", "title": "" }, { "docid": "db0c9001e1be19b57e954a19ada18d06", "text": "A small-size broadband circularly polarized U-slot patch antenna with dual feed is proposed. By introducing an additional feeding probe near the vertical slot of the conventional singly fed square U-slot patch antenna printed on a high-permittivity substrate, two series resonances in close proximity are excited. The two resonant frequencies are found to be independent of the orientation of the U-slot with respect to the patch, and broadband circular polarization is achieved by introducing a nonquadrature phase difference between the two feeding ports. Experimental results show that the overlapped bandwidth of VSWR ≤ 1.5 and AR ≤ 3 dB is over 20% with a small overall size of 0.33λ0 × 0.33λ0 × 0.068λ0, where λ0 is the free-space wavelength at the center frequency within the operating band.", "title": "" }, { "docid": "ce31be5bfeb05a30c5479a3192d20f93", "text": "Network embedding represents nodes in a continuous vector space and preserves structure information from the network. Existing methods usually adopt a "one-size-fits-all" approach when concerning multi-scale structure information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE) framework, which promotes the collaboration of different scales and lets them vote for robust representations. 
The proposed AAANE consists of two components: 1) an attention-based autoencoder that effectively captures the highly non-linear network structure and can de-emphasize irrelevant scales during training; 2) an adversarial regularization that guides the autoencoder to learn robust representations by matching the posterior distribution of the latent embeddings to a given prior distribution. This is the first attempt to introduce attention mechanisms to multi-scale network embedding. Experimental results on real-world networks show that our learned attention parameters are different for every network and the proposed approach outperforms existing state-of-the-art approaches for network embedding.", "title": "" } ]
scidocsrr
9432e1f552681e034a3e8875c681fa59
A Retrieve-and-Edit Framework for Predicting Structured Outputs
[ { "docid": "8ac8ad61dc5357f3dc3ab1020db8bada", "text": "We show how to learn many layers of features on color images and we use these features to initialize deep autoencoders. We then use the autoencoders to map images to short binary codes. Using semantic hashing [6], 28-bit codes can be used to retrieve images that are similar to a query image in a time that is independent of the size of the database. This extremely fast retrieval makes it possible to search using multiple di erent transformations of the query image. 256-bit binary codes allow much more accurate matching and can be used to prune the set of images found using the 28-bit codes.", "title": "" }, { "docid": "121daac04555fd294eef0af9d0fb2185", "text": "In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. Experimental results demonstrate that the proposed models substantially outperform several strong baselines.", "title": "" }, { "docid": "1a6ece40fa87e787f218902eba9b89f7", "text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.", "title": "" } ]
[ { "docid": "2cddde920b40a245a5e1b4b1abb2e92b", "text": "The aim of this research was to understand what affects people's privacy preferences in smartphone apps. We ran a four-week study in the wild with 34 participants. Participants were asked to answer questions, which were used to gather their personal context and to measure their privacy preferences by varying app name and purpose of data collection. Our results show that participants shared the most when no information about data access or purpose was given, and shared the least when both of these details were specified. When just one of either purpose or the requesting app was shown, participants shared less when just the purpose was specified than when just the app name was given. We found that the purpose for data access was the predominant factor affecting users' choices. In our study the purpose condition vary from being not specified, to vague to be very specific. Participants were more willing to disclose data when no purpose was specified. When a vague purpose was shown, participants became more privacy-aware and were less willing to disclose their information. When specific purposes were shown participants were more willing to disclose when the purpose for requesting the information appeared to be beneficial to them, and shared the least when the purpose for data access was solely beneficial to developers.", "title": "" }, { "docid": "38cbdd5d5cea74dfe381547dee53d0aa", "text": "Type confusion, often combined with use-after-free, is the main attack vector to compromise modern C++ software like browsers or virtual machines. Typecasting is a core principle that enables modularity in C++. For performance, most typecasts are only checked statically, i.e., the check only tests if a cast is allowed for the given type hierarchy, ignoring the actual runtime type of the object. Using an object of an incompatible base type instead of a derived type results in type confusion. Attackers abuse such type confusion issues to attack popular software products including Adobe Flash, PHP, Google Chrome, or Firefox. We propose to make all type checks explicit, replacing static checks with full runtime type checks. To minimize the performance impact of our mechanism HexType, we develop both low-overhead data structures and compiler optimizations. To maximize detection coverage, we handle specific object allocation patterns, e.g., placement new or reinterpret_cast which are not handled by other mechanisms. Our prototype results show that, compared to prior work, HexType has at least 1.1 -- 6.1 times higher coverage on Firefox benchmarks. For SPEC CPU2006 benchmarks with overhead, we show a 2 -- 33.4 times reduction in overhead. In addition, HexType discovered 4 new type confusion bugs in Qt and Apache Xerces-C++.", "title": "" }, { "docid": "a93969b08efbc81c80129790d93e39de", "text": "Text simplification aims to rewrite text into simpler versions, and thus make information accessible to a broader audience. Most previous work simplifies sentences using handcrafted rules aimed at splitting long sentences, or substitutes difficult words using a predefined dictionary. This paper presents a datadriven model based on quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. We describe how such a grammar can be induced from Wikipedia and propose an integer linear programming model for selecting the most appropriate simplification from the space of possible rewrites generated by the grammar. 
We show experimentally that our method creates simplifications that significantly reduce the reading difficulty of the input, while maintaining grammaticality and preserving its meaning.", "title": "" }, { "docid": "94a35547a45c06a90f5f50246968b77e", "text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space. From the view of statistics, we consider a pixel's value as a three-dimensional stochastic variable and an image as a set of samples, so the correlations between the three components can be measured by covariance. Our method imports covariance between the three components of pixel values while calculating the mean along each of the three axes. Then we decompose the covariance matrix using the SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift the pixel data of the target image to fit the data points' cluster of the source image in the current color space and get a resultant image which takes on the source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method.", "title": "" }, { "docid": "47fb3483c8f4a5c0284fec3d3a309c09", "text": "The Knowledge Base Population (KBP) track at the Text Analysis Conference 2010 marks the second year of this important information extraction evaluation. This paper describes the design and implementation of LCC's systems which participated in the tasks of Entity Linking, Slot Filling, and the new task of Surprise Slot Filling. For the entity linking task, our top score was achieved through a robust context modeling approach which incorporates topical evidence. For slot filling, we used the output of the entity linking system together with a combination of different types of relation extractors. For surprise slot filling, our customizable extraction system was extremely useful due to the time sensitive nature of the task.", "title": "" }, { "docid": "ea33654bb04b06bae122fbded4b8df49", "text": "The volume, veracity, variability, and velocity of data produced from the ever increasing network of sensors connected to the Internet pose challenges for power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuromemristive circuits for edge computing.", "title": "" }, { "docid": "8e1b10ebb48b86ce151ab44dc0473829", "text": "Cuckoo Search (CS) is a new metaheuristic algorithm. It is being used for solving optimization problems. It was developed in 2009 by Xin-She Yang and Suash Deb. The uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Cuckoo Hashing to Modified CS have also been discussed in this paper.
CS is also validated using some test functions. After that CS performance is compared with those of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. At last, the effect of the experimental results are discussed and proposed for future research. Index terms ─ Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.", "title": "" }, { "docid": "bc85e28da375e2a38e06f0332a18aef0", "text": "Background: Statistical reviews of the theories of reasoned action (TRA) and planned behavior (TPB) applied to exercise are limited by methodological issues including insufficient sample size and data to examine some moderator associations. Methods: We conducted a meta-analytic review of 111 TRA/TPB and exercise studies and examined the influences of five moderator variables. Results: We found that: a) exercise was most strongly associated with intention and perceived behavioral control; b) intention was most strongly associated with attitude; and c) intention predicted exercise behavior, and attitude and perceived behavioral control predicted intention. Also, the time interval between intention to behavior; scale correspondence; subject age; operationalization of subjective norm, intention, and perceived behavioral control; and publication status moderated the size of the effect. Conclusions: The TRA/TPB effectively explained exercise intention and behavior and moderators of this relationship. Researchers and practitioners are more equipped to design effective interventions by understanding the TRA/TPB constructs.", "title": "" }, { "docid": "499a37563d171054ad0b0d6b8f7007bf", "text": "For cold-start recommendation, it is important to rapidly profile new users and generate a good initial set of recommendations through an interview process --- users should be queried adaptively in a sequential fashion, and multiple items should be offered for opinion solicitation at each trial. In this work, we propose a novel algorithm that learns to conduct the interview process guided by a decision tree with multiple questions at each split. The splits, represented as sparse weight vectors, are learned through an L_1-constrained optimization framework. The users are directed to child nodes according to the inner product of their responses and the corresponding weight vector. More importantly, to account for the variety of responses coming to a node, a linear regressor is learned within each node using all the previously obtained answers as input to predict item ratings. A user study, preliminary but first in its kind in cold-start recommendation, is conducted to explore the efficient number and format of questions being asked in a recommendation survey to minimize user cognitive efforts. Quantitative experimental validations also show that the proposed algorithm outperforms state-of-the-art approaches in terms of both the prediction accuracy and user cognitive efforts.", "title": "" }, { "docid": "aee91ee5d4cbf51d9ce1344be4e5448c", "text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. 
We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of the respective posterior and inference distributions with opposite directions, extending the two learning phases of the classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables transferring techniques across research lines in a principled way. For example, we apply the importance weighting method from the VAE literature for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show the generality and effectiveness of the transferred techniques.", "title": "" }, { "docid": "5e503aaee94e2dc58f9311959d5a142e", "text": "The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections. INTRODUCTION: This paper outlines a method for the application of the fast Fourier transform algorithm to the estimation of power spectra, which involves sectioning the record, taking modified periodograms of these sections, and averaging these modified periodograms. In many instances this method involves fewer computations than other methods. Moreover, it involves the transformation of sequences which are shorter than the whole record, which is an advantage when computations are to be performed on a machine with limited core storage. Finally, it directly yields a potential resolution in the time dimension which is useful for testing and measuring nonstationarity. As will be pointed out, it is closely related to the method of complex demodulation described elsewhere. Let X(j), j = 0, ..., N-1, be a sample from a stationary, second-order stochastic sequence. Assume for simplicity that E(X) = 0. Let X(j) have spectral density P(f), |f| ≤ 1/2. We take segments, possibly overlapping, of length L with the starting points of these segments D units apart. Let X1(j), j = 0, ..., L-1, be the first such segment. Then X1(j) = X(j), and finally XK(j) = X(j + (K-1)D), j = 0, ..., L-1. We suppose we have K such segments, X1(j), ..., XK(j), and that they cover the entire record, i.e., that (K-1)D + L = N. This segmenting is illustrated in Fig. 1. The method of estimation is as follows. For each segment of length L we calculate a modified periodogram. That is, we select a data window W(j), j = 0, ..., L-1, and form the sequences X1(j)W(j), ..., XK(j)W(j). We then take the finite Fourier transforms A1(n), ..., AK(n) of these sequences. Here Ak(n) = (1/L) sum_{j=0}^{L-1} Xk(j) W(j) e^{-2πijn/L} and i = sqrt(-1). Finally, we obtain the K modified periodograms Ik(fn) = (L/U) |Ak(n)|^2, k = 1, 2, ..., K, where fn = n/L, n = 0, ..., L/2, and U = (1/L) sum_{j=0}^{L-1} W^2(j). The spectral estimate is the average of these periodograms.", "title": "" }, { "docid": "7f6de1ca650840d1a4fe5dcd8d97541a", "text": "While child and adolescent physicians are familiar with the treatment of attention-deficit/hyperactivity disorder (ADHD), many adult physicians have had little experience with the disorder. It is difficult to develop clinical skills in the management of residual adult manifestations of developmental disorders without clinical experience with their presentation in childhood. Adult patients are increasingly seeking treatment for the symptoms of ADHD, and physicians need practice guidelines. 
Adult ADHD often presents differently from childhood ADHD. Because adult ADHD can be comorbid with other disorders and has symptoms similar to those of other disorders, it is important to understand differential diagnoses. Physicians should work with patients to provide feedback about their symptoms, to educate them about ADHD, and to set treatment goals. Treatment for ADHD in adults should include a medication trial, restructuring of the patient's environment to make it more compatible with the symptoms of ADHD, and ongoing supportive management to address any residual impairment and to facilitate functional and developmental improvements.", "title": "" }, { "docid": "c718a2f9eb395e3b4a27ddf3208c4233", "text": "Our objective is to efficiently and accurately estimate the upper body pose of humans in gesture videos. To this end, we build on the recent successful applications of deep convolutional neural networks (ConvNets). Our novelties are: (i) our method is the first to our knowledge to use ConvNets for estimating human pose in videos; (ii) a new network that exploits temporal information from multiple frames, leading to better performance; (iii) showing that pre-segmenting the foreground of the video improves performance; and (iv) demonstrating that even without foreground segmentations, the network learns to abstract away from the background and can estimate the pose even in the presence of a complex, varying background. We evaluate our method on the BBC TV Signing dataset and show that our pose predictions are significantly better, and an order of magnitude faster to compute, than the state of the art [3].", "title": "" }, { "docid": "6b5bde39af1260effa0587d8c6afa418", "text": "This survey highlights the major issues concerning privacy and security in online social networks. Firstly, we discuss research that aims to protect user data from the various attack vantage points including other users, advertisers, third party application developers, and the online social network provider itself. Next we cover social network inference of user attributes, locating hubs, and link prediction. Because online social networks are so saturated with sensitive information, network inference plays a major privacy role. As a response to the issues brought forth by client-server architectures, distributed social networks are discussed. We then cover the challenges that providers face in maintaining the proper operation of an online social network including minimizing spam messages, and reducing the number of sybil accounts. Finally, we present research in anonymizing social network data. This area is of particular interest in order to continue research in this field both in academia and in industry.", "title": "" }, { "docid": "f5f56d680fbecb94a08d9b8e5925228f", "text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. 
(2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.", "title": "" }, { "docid": "fee78b996d88584499f342f7da89addf", "text": "It has become standard for search engines to augment result lists with document summaries. Each document summary consists of a title, abstract, and a URL. In this work, we focus on the task of selecting relevant sentences for inclusion in the abstract. In particular, we investigate how machine learning-based approaches can effectively be applied to the problem. We analyze and evaluate several learning to rank approaches, such as ranking support vector machines (SVMs), support vector regression (SVR), and gradient boosted decision trees (GBDTs). Our work is the first to evaluate SVR and GBDTs for the sentence selection task. Using standard TREC test collections, we rigorously evaluate various aspects of the sentence selection problem. Our results show that the effectiveness of the machine learning approaches varies across collections with different characteristics. Furthermore, the results show that GBDTs provide a robust and powerful framework for the sentence selection task and significantly outperform SVR and ranking SVMs on several data sets.", "title": "" }, { "docid": "ea5697d417fe154be77d941c19d8a86e", "text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.", "title": "" }, { "docid": "2ccae5b48fc5ac10f948b79fc4fb6ff3", "text": "Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. 
The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.", "title": "" }, { "docid": "d464711e6e07b61896ba6efe2bbfa5e4", "text": "This paper presents a simple model for body-shadowing in off-body and body-to-body channels. The model is based on a body shadowing pattern associated with the on-body antenna, represented by a cosine function whose amplitude parameter is calculated from measurements. This parameter, i.e the maximum body-shadowing loss, is found to be linearly dependent on distance. The model was evaluated against a set of off-body channel measurements at 2.45 GHz in an indoor office environment, showing a good fit. The coefficient of determination obtained for the linear model of the maximum body-shadowing loss is greater than 0.6 in all considered scenarios, being higher than 0.8 for the ones with a static user.", "title": "" }, { "docid": "610922e925ccb52308dcc68ca2e7bc6b", "text": "In this brief, we introduce an architecture for accelerating convolution stages in convolutional neural networks (CNNs) implemented in embedded vision systems. The purpose of the architecture is to exploit the inherent parallelism in CNNs to reduce the required bandwidth, resource usage, and power consumption of highly computationally complex convolution operations as required by real-time embedded applications. We also implement the proposed architecture using fixed-point arithmetic on a ZC706 evaluation board that features a Xilinx Zynq-7000 system on-chip, where the embedded ARM processor with high clocking speed is used as the main controller to increase the flexibility and speed. The proposed architecture runs under a frequency of 150 MHz, which leads to 19.2 Giga multiply accumulation operations per second while consuming less than 10 W in power. This is done using only 391 DSP48 modules, which shows significant utilization improvement compared to the state-of-the-art architectures.", "title": "" } ]
scidocsrr
ed910b1868e9eb961d6864df9a9a738c
Deep Attention Recurrent Q-Network
[ { "docid": "2f20bca0134eb1bd9d65c4791f94ddcc", "text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "title": "" } ]
[ { "docid": "67066c7ea843279d67d9230d79d03867", "text": "Convolutional neural networks (CNNs) have been successfully applied in artificial intelligent systems to perform sensory processing, sequence learning, and image processing. In contrast to conventional computing-centric applications, CNNs are known to be both computationally and memory intensive. The computational and memory resources of CNN applications are mixed together in the network weights. This incurs a significant amount of data movement, especially for high-dimensional convolutions. The emerging Processing-in-Memory (PIM) alleviates this memory bottleneck by integrating both processing elements and memory into a 3D-stacked architecture. Although this architecture can offer fast near-data processing to reduce data movement, memory is still a limiting factor of the entire system. We observe that an unsolved key challenge is how to efficiently allocate convolutions to 3D-stacked PIM to combine the advantages of both neural and computational processing. This paper presents MemoNet, a memory-efficient data allocation strategy for convolutional neural networks on 3D PIM architecture. MemoNet offers fine-grained parallelism that can fully exploit the computational power of PIM architecture. The objective is to capture the characteristics of neural network applications and perfectly match the underlining hardware resources provided by PIM, resulting in a hardware-independent design to transparently allocate data. We formulate the target problem as a dynamic programming model and present an optimal solution. To demonstrate the viability of the proposed MemoNet, we conduct a set of experiments using a variety of realistic convolutional neural network applications. The extensive evaluations show that, MemoNet can significantly improve the performance and the cache utilization compared to representative schemes.", "title": "" }, { "docid": "a32411be8c0fabc872808fd37c6ae41b", "text": "Sentence classification, serving as the foundation of the subsequent text-based processing, continues attracting researchers attentions. Recently, with the great success of deep learning, convolutional neural network (CNN), a kind of common architecture of deep learning, has been widely used to this filed and achieved excellent performance. However, most CNN-based studies focus on using complex architectures to extract more effective category information, requiring more time in training models. With the aim to get better performance with less time cost on classification, this paper proposes two simple and effective methods by fully combining information both extracted from statistics and CNN. The first method is S-SFCNN, which combines statistical features and CNN-based probabilistic features of classification to build feature vectors, and then the vectors are used to train the logistic regression classifiers. And the second method is C-SFCNN, which combines CNN-based features and statistics-based probabilistic features of classification to build feature vectors. In the two methods, the Naive Bayes log-count ratios are selected as the text statistical features and the single-layer and single channel CNN is used as our CNN architecture. The testing results executed on 7 tasks show that our methods can achieve better performance than many other complex CNN models with less time cost. 
In addition, we summarized the main factors influencing the performance of our methods through experiments.", "title": "" }, { "docid": "582cae6ea4776c7e74923cfe70bab0ad", "text": "An increasing number of people are using dating websites to search for their life partners. This leads to the curiosity of how attractive a specific person is to the opposite gender on an average level. We propose a novel algorithm to evaluate people's objective attractiveness based on their interactions with other users on the dating websites and implement machine learning algorithms to predict their objective attractiveness ratings from their profiles. We validate our method on a large dataset gained from a Japanese dating website and yield convincing results. Our prediction based on users' profiles, which includes image and text contents, is over 80% correlated with the real values of the calculated objective attractiveness for the female and over 50% correlated with the real values of the calculated objective attractiveness for the male.", "title": "" }, { "docid": "54a6a5a6dfb38861a94f779d001bacb4", "text": "The information security community has come to realize that the weakest link in a cybersecurity chain is human behavior. To develop effective cybersecurity training programs for employees in the workplace, it is necessary to identify factors that contribute to employees' cybersecurity behaviors and then build a theoretical model to understand how these factors affect employees' self-reported security behavior in the workplace. Supported by a grant from the National Science Foundation (NSF), we developed a model for studying employees' self-reported cybersecurity behaviors, and conducted a survey study to investigate the cybersecurity behavior and beliefs of employees. Five-hundred-seventy-nine employees from various U.S. organizations and companies completed an online survey with 87 items carefully designed by six experts in cybersecurity, information technology, psychology, and decision science. The results from statistical analysis of the cybersecurity behavior survey questionnaire will be presented in this TREO Talk. Some of the key findings include: • Prior Experience was correlated with self-reported cyber security behavior. However, it was not identified as a unique predictor in our regression analysis. This suggests that the prior training may indirectly affect cybersecurity behavior through other variables. • Peer Behavior was not a unique predictor of self-reported cybersecurity behavior. Perceptions of peer behavior may reflect people's own self-efficacy with cybersecurity and their perceptions of the benefits from cybersecurity behaviors. • The regression model revealed four unique predictors of self-reported cybersecurity behavior: Computer Skill, Perceived Benefits, Perceived Barriers, and Security Self-efficacy. These variables should be assessed to identify employees who are at risk of cyber attacks and could be the target of interventions. • There are statistically significant gender-wise differences in terms of computer skills, prior experience, cues-to-action, security self-efficacy and self-reported cybersecurity behaviors. 
Since women’s self-efficacy is significantly lower than men, women’s self-efficacy may be a target for intervention.", "title": "" }, { "docid": "3208f5f01469ba028cf8f356613bf502", "text": "A review on applications of metal-based inkjet inks for printed electronics with a particular focus on inks containing metal nanoparticles, complexes and metallo-organic compounds. The review describes the preparation of such inks and obtaining conductive patterns by using various sintering methods: thermal, photonic, microwave, plasma, electrical, and chemically triggered. Various applications of metal-based inkjet inks (metallization of solar cell, RFID antennas, OLEDs, thin film transistors, electroluminescence devices) are reviewed.", "title": "" }, { "docid": "015326feea60387bc2a8cdc9ea6a7f81", "text": "Phosphorylation of the transcription factor CREB is thought to be important in processes underlying long-term memory. It is unclear whether CREB phosphorylation can carry information about the sign of changes in synaptic strength, whether CREB pathways are equally activated in neurons receiving or providing synaptic input, or how synapse-to-nucleus communication is mediated. We found that Ca(2+)-dependent nuclear CREB phosphorylation was rapidly evoked by synaptic stimuli including, but not limited to, those that induced potentiation and depression of synaptic strength. In striking contrast, high frequency action potential firing alone failed to trigger CREB phosphorylation. Activation of a submembranous Ca2+ sensor, just beneath sites of Ca2+ entry, appears critical for triggering nuclear CREB phosphorylation via calmodulin and a Ca2+/calmodulin-dependent protein kinase.", "title": "" }, { "docid": "8472f7d28618ce30dcf79f8788eeadc0", "text": "Speech recognition of inflectional and morphologically rich languages like Czech is currently quite a challenging task, because simple n-gram techniques are unable to capture important regularities in the data. Several possible solutions were proposed, namely class based models, factored models, decision trees and neural networks. This paper describes improvements obtained in recognition of spoken Czech lectures using language models based on neural networks. Relative reductions in word error rate are more than 15% over baseline obtained with adapted 4-gram backoff language model using modified Kneser-Ney smoothing.", "title": "" }, { "docid": "854b2bfdef719879a437f2d87519d8e8", "text": "The morality of transformational leadership has been sharply questioned, particularly by libertarians, “grass roots” theorists, and organizational development consultants. This paper argues that to be truly transformational, leadership must be grounded in moral foundations. The four components of authentic transformational leadership (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration) are contrasted with their counterfeits in dissembling pseudo-transformational leadership on the basis of (1) the moral character of the leaders and their concerns for self and others; (2) the ethical values embedded in the leaders’ vision, articulation, and program, which followers can embrace or reject; and (3) the morality of the processes of social ethical choices and action in which the leaders and followers engage and collectively pursue. The literature on transformational leadership is linked to the long-standing literature on virtue and moral character, as exemplified by Socratic and Confucian typologies. 
It is related as well to the major themes of the modern Western ethical agenda: liberty, utility, and distributive justice. Deception, sophistry, and pretense are examined alongside issues of transcendence, agency, trust, striving for congruence in values, cooperative action, power, persuasion, and corporate governance to establish the strategic and moral foundations of authentic transformational leadership.", "title": "" }, { "docid": "5de12a65c39626348c0e163a1a5b25bf", "text": "Network Security is playing a vital role in all types of networks. Nowadays the network is implemented in all places like offices, schools, banks etc. and almost all the individuals are taking part in social network media. Even though many types of network security systems are in use, the vulnerable activities are taking place now and then. This paper presents a survey about various types of network attacks mainly web attacks, and different Intrusion Detection Systems (IDS) which are in use. This may pave a path to design a new type of IDS which may protect the network system from various types of network attacks.", "title": "" }, { "docid": "77c72fe890aa1479fc6cd5d6737bcde3", "text": "Since smartphones have stored diverse sensitive privacy information, including credit card and so on, a great deal of malware are desired to tamper them. As one of the most prevalent platforms, Android contains sensitive resources that can only be accessed via corresponding APIs, and the APIs can be invoked only when user has authorized permissions in the Android permission model. However, a novel threat called privilege escalation attack may bypass this watchdog. It's presented as that an application with less permissions can access sensitive resources through public interfaces of a more privileged application, which is especially useful for malware to hide sensitive functions by dispersing them into multiple programs. We explore privilege-escalation malware evolution techniques on samples from Android Malware Genome Project. And they have showed great effectiveness against a set of powerful antivirus tools provided by VirusTotal. The detection ratios present different and distinguished reduction, compared to an average 61% detection ratio before transformation. In order to conquer this threat model, we have developed a tool called DroidAlarm to conduct a full-spectrum analysis for identifying potential capability leaks and present concrete capability leak paths by static analysis on Android applications. And we can still alarm all these cases by exposing capability leak paths in them.", "title": "" }, { "docid": "36a0b3223b83927f4dfe358086f2a660", "text": "We train a set of state of the art neural networks, the Maxout networks (Goodfellow et al., 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct storing formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the storage on the final error of the training. We find that very low precision storage is sufficient not just for running trained networks but also for training them. 
For example, Maxout networks state-of-the-art results are nearly maintained with 10 bits for storing activations and gradients, and 12 bits for storing parameters.", "title": "" }, { "docid": "4f509a4fdc6bbffa45c214bc9267ea79", "text": "Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing value imputation, and DGMs with memory can achieve state-of-the-art quantitative results.", "title": "" }, { "docid": "1dbff7292f9578337781616d4a1bb96a", "text": "This paper proposes a novel approach and a new benchmark for video summarization. Thereby we focus on user videos, which are raw videos containing a set of interesting events. Our method starts by segmenting the video by using a novel “superframe” segmentation, tailored to raw videos. Then, we estimate visual interestingness per superframe using a set of low-, mid- and high-level features. Based on this scoring, we select an optimal subset of superframes to create an informative and interesting summary. The introduced benchmark comes with multiple human created summaries, which were acquired in a controlled psychological experiment. This data paves the way to evaluate summarization methods objectively and to get new insights in video summarization. When evaluating our method, we find that it generates high-quality results, comparable to manual, human-created summaries.", "title": "" }, { "docid": "9c67049b5f934b47346592b73bc57dbe", "text": "In this paper, the problem of switching stabilization for a class of switched nonlinear systems is studied by using average dwell time (ADT) switching, where the subsystems are possibly all unstable. First, a new concept of ADT is given, which is different from the traditional definition of ADT. Based on the new proposed switching signals, a sufficient condition of stabilization for switched nonlinear systems with unstable subsystems is derived. Then, the T-S fuzzy modeling approach is applied to represent the underlying nonlinear system to make the obtained condition easily verified. A novel multiple quadratic Lyapunov function approach is also proposed, by which some conditions are provided in terms of a set of linear matrix inequalities to guarantee the derived T-S fuzzy system to be asymptotically stable. 
Finally, a numerical example is given to demonstrate the effectiveness of our developed results.", "title": "" }, { "docid": "7a7e08f672be36af5b52a62c01457a96", "text": "The convenience of cell-phone cameras has made them one of the most common ways by which people document their lives, whether it is everyday pleasures or celebrations. With thousands of images, it might prove to be a daunting task to organize them by hand. When applying automated algorithms to help us, we would like to have both images that are dear to us but are also of good quality. In this paper we explore the performance of the MobileNet CNN architecture, and the different design (inputs size, and layer depth) choices, in their ability in solving various aesthetic inference task: binary classification, regression, image cropping. We show that the baseline MobileNet architecture achieves near state-of-the-art results for binary classification on the AVA dataset while being more than 10 times smaller and compute efficient. We further show that these models, when trained for fine-grained aesthetics inference, achieve better cropping performance than other aestheticsbased croppers.", "title": "" }, { "docid": "8e19c3513be332705f4e2bf5a8aa4429", "text": "The introduction of crowdsourcing offers numerous business opportunities. In recent years, manifold forms of crowdsourcing have emerged on the market -- also in logistics. Thereby, the ubiquitous availability and sensor-supported assistance functions of mobile devices support crowdsourcing applications, which promotes contextual interactions between users at the right place at the right time. This paper presents the results of an in-depth-analysis on crowdsourcing in logistics in the course of ongoing research in the field of location-based crowdsourcing (LBCS). This paper analyzes LBCS for both, 'classic' logistics as well as 'information' logistics. Real-world examples of crowdsourcing applications are used to underpin the two evaluated types of logistics using crowdsourcing. Potential advantages and challenges of logistics with the crowd ('crowd-logistics') are discussed. Accordingly, this paper aims to provide the necessary basis for a novel interdisciplinary research field.", "title": "" }, { "docid": "4b8823bffcc77968b7ac087579ab84c9", "text": "Numerous complains have been made by Android users who severely suffer from the sluggish response when interacting with their devices. However, very few studies have been conducted to understand the user-perceived latency or mitigate the UI-lagging problem. In this paper, we conduct the first systematic measurement study to quantify the user-perceived latency using typical interaction-intensive Android apps in running with and without background workloads. We reveal the insufficiency of Android system in ensuring the performance of foreground apps and therefore design a new system to address the insufficiency accordingly. We develop a lightweight tracker to accurately identify all delay-critical threads that contribute to the slow response of user interactions. We then build a resource manager that can efficiently schedule various system resources including CPU, I/O, and GPU, for optimizing the performance of these threads. We implement the proposed system on commercial smartphones and conduct comprehensive experiments to evaluate our implementation. 
Evaluation results show that our system is able to significantly reduce the user-perceived latency of foreground apps in running with aggressive background workloads, up to 10x, while incurring negligible system overhead of less than 3.1 percent CPU and 7 MB memory.", "title": "" }, { "docid": "61f079cb59505d9bf1de914330dd852e", "text": "Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9 percent. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.", "title": "" }, { "docid": "c26a3ffc5c94ef76358ffb7179879e19", "text": "Keyword extraction problem is one of the most significant tasks in information retrieval. High-quality keyword extraction sufficiently influences the progress in the following subtasks of information retrieval: classification and clustering, data mining, knowledge extraction and representation, etc. 
The research environment has specified a layout for keyphrase extraction. However, some of the possible decisions remain uninvolved in the paradigm. In the paper the authors observe the scope of interdisciplinary methods applicable to automatic stop list feeding. The chosen method belongs to the class of experiential models. The research procedure based on this method allows to improve the quality of keyphrase extraction on the stage of candidate keyphrase building. Several ways to automatic feeding of the stop lists are proposed in the paper as well. One of them is based on provisions of lexical statistics and the results of its application to the discussed task point out the non-gaussian nature of text corpora. The second way based on usage of the Inspec train collection to the feeding of stop lists improves the quality considerably.", "title": "" }, { "docid": "c0ce856c2e1a49aa75bfefbdbbffe455", "text": "In order to get real time image processing for mobile robot vision, we propose to use a discrete time cellular neural network implementation by a convolutional structure on Altora FPGA using VHDL language. We obtain at least 9 times faster processing than other emulations for the same problem.", "title": "" } ]
scidocsrr
cedfb0244b1ea9b24f594603745167e5
Dynamic Facet Ordering for Faceted Product Search Engines
[ { "docid": "0dbad8ca53615294bc25f7a2d8d41d99", "text": "Faceted search is becoming a popular method to allow users to interactively search and navigate complex information spaces. A faceted search system presents users with key-value metadata that is used for query refinement. While popular in e-commerce and digital libraries, not much research has been conducted on which metadata to present to a user in order to improve the search experience. Nor are there repeatable benchmarks for evaluating a faceted search engine. This paper proposes the use of collaborative filtering and personalization to customize the search interface to each user's behavior. This paper also proposes a utility based framework to evaluate the faceted interface. In order to demonstrate these ideas and better understand personalized faceted search, several faceted search algorithms are proposed and evaluated using the novel evaluation methodology.", "title": "" } ]
[ { "docid": "782396981f9d3fffb74d7e03048cdb6b", "text": "A high-voltage high-speed gate driver to enable synchronous rectifiers with zero-voltage-switching (ZVS) operation is presented in this paper. A capacitive-coupled level-shifter (CCLS) is developed to achieve negligible propagation delay and static current consumption. With only 1 off-chip capacitor, the proposed gate driver possesses strong driving capability and requires no external floating supply for the high-side driving. A dynamic timing control is also proposed not only to enable ZVS operation in the converter for minimizing the capacitive switching loss, but also to eliminate the converter short-circuit power loss. Implemented in a 0.5μm HV CMOS process, the proposed CCLS of the gate driver can shift up a 5V signal to the 100V DC rail with sub-nanosecond delay, improving the FoM by at least 29 times compared with that of state-of-the-art counterparts. The dynamic dead-time control properly enables ZVS operation in a synchronous buck converter under different input voltages (30V to 100V). The power losses of the high-voltage buck converter are thus greatly reduced under different load currents, achieving a maximum power efficiency improvement of 11.5%.", "title": "" }, { "docid": "cceb05e100fe8c9f9dab9f6525d435db", "text": "Conventional feedback control methods can solve various types of robot control problems very efficiently by capturing the structure with explicit models, such as rigid body equations of motion. However, many control problems in modern manufacturing deal with contacts and friction, which are difficult to capture with first-order physical modeling. Hence, applying control design methodologies to these kinds of problems often results in brittle and inaccurate controllers, which have to be manually tuned for deployment. Reinforcement learning (RL) methods have been demonstrated to be capable of learning continuous robot controllers from interactions with the environment, even for problems that include friction and contacts. In this paper, we study how we can solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and the residual which is solved with RL. The final control policy is a superposition of both control signals. We demonstrate our approach by training an agent to successfully perform a real-world block assembly task involving contacts and unstable objects.", "title": "" }, { "docid": "323abed1a623e49db50bed383ab26a92", "text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. 
Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.", "title": "" }, { "docid": "fca35510714dcf6f2a7a835291db382f", "text": "This paper considers the state of art real-time detection network single-shot multi-box detector (SSD) for multi-targets detection. It is built on top of a base network VGG16 that ends with some convolution layers. Its base network VGG16, designed for 1000 categories in Imagenet dataset, is obviously over-parametered, when used for 21 categories classification in VOC dataset. In this paper, we visualize the base network VGG16 in SSD network by deconvolution method. We analyze the discriminative feature learned by last layer conv5_3 of VGG16 network due to its semantic property. Redundancy intra-channel can be seen in the form of deconvolution image. Accordingly, we propose a pruning method to obtain a compressed network with high accuracy. Experiments illustrate the efficiency of our method by comparing different fine-tune methods. A reduced SSD network is obtained with even higher mAP than the original one by 2 percent. When only 4% of the original kernels in conv5_3 is remained, mAP is still as high as that of the original network.", "title": "" }, { "docid": "0ab4f0cf03c0a2d72b4e9ed079181a67", "text": "In this paper, we present a method for estimating articulated human poses in videos. We cast this as an optimization problem defined on body parts with spatio-temporal links between them. The resulting formulation is unfortunately intractable and previous approaches only provide approximate solutions. Although such methods perform well on certain body parts, e.g., head, their performance on lower arms, i.e., elbows and wrists, remains poor. We present a new approximate scheme with two steps dedicated to pose estimation. First, our approach takes into account temporal links with subsequent frames for the less-certain parts, namely elbows and wrists. Second, our method decomposes poses into limbs, generates limb sequences across time, and recomposes poses by mixing these body part sequences. We introduce a new dataset \"Poses in the Wild\", which is more challenging than the existing ones, with sequences containing background clutter, occlusions, and severe camera motion. We experimentally compare our method with recent approaches on this new dataset as well as on two other benchmark datasets, and show significant improvement.", "title": "" }, { "docid": "065e6db1710715ce5637203f1749e6f6", "text": "Software fault isolation (SFI) is an effective mechanism to confine untrusted modules inside isolated domains to protect their host applications. Since its debut, researchers have proposed different SFI systems for many purposes such as safe execution of untrusted native browser plugins. However, most of these systems focus on the x86 architecture. Inrecent years, ARM has become the dominant architecture for mobile devices and gains in popularity in data centers.Hence there is a compellingneed for an efficient SFI system for the ARM architecture. Unfortunately, existing systems either have prohibitively high performance overhead or place various limitations on the memory layout and instructions of untrusted modules.\n In this paper, we propose ARMlock, a hardware-based fault isolation for ARM. It uniquely leverages the memory domain support in ARM processors to create multiple sandboxes. 
Memory accesses by the untrusted module (including read, write, and execution) are strictly confined by the hardware, and instructions running inside the sandbox execute at the same speed as those outside it. ARMlock imposes virtually no structural constraints on untrusted modules. For example, they can use self-modifying code, receive exceptions, and make system calls. Moreover, system calls can be interposed by ARMlock to enforce the policies set by the host. We have implemented a prototype of ARMlock for Linux that supports the popular ARMv6 and ARMv7 sub-architecture. Our security assessment and performance measurement show that ARMlock is practical, effective, and efficient.", "title": "" }, { "docid": "b31f5af2510461479d653be1ddadaa22", "text": "Integrating smart temperature sensors into digital platforms facilitates information to be processed and transmitted, and open up new applications. Furthermore, temperature sensors are crucial components in computing platforms to manage power-efficiency trade-offs reliably under a thermal budget. This paper presents a holistic perspective about smart temperature sensor design from system- to device-level including manufacturing concerns. Through smart sensor design evolutions, we identify some scaling paths and circuit techniques to surmount analog/mixed-signal design challenges in 32-nm and beyond. We close with opportunities to design smarter temperature sensors.", "title": "" }, { "docid": "476e612f4124fc5e9f391e2fa4a49a3b", "text": "Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today's DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance (tracking data through transformations) in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds, orders of magnitude faster than alternative solutions, while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time.", "title": "" }, { "docid": "df9ed642b388f7eac9df492384c81efa", "text": "The predominantly anaerobic microbiota of the distal ileum and colon contain an extraordinarily complex variety of metabolically active bacteria and fungi that intimately interact with the host's epithelial cells and mucosal immune system. Crohn's disease, ulcerative colitis, and pouchitis are the result of continuous microbial antigenic stimulation of pathogenic immune responses as a consequence of host genetic defects in mucosal barrier function, innate bacterial killing, or immunoregulation. Altered microbial composition and function in inflammatory bowel diseases result in increased immune stimulation, epithelial dysfunction, or enhanced mucosal permeability. Although traditional pathogens probably are not responsible for these disorders, increased virulence of commensal bacterial species, particularly Escherichia coli, enhance their mucosal attachment, invasion, and intracellular persistence, thereby stimulating pathogenic immune responses. 
Host genetic polymorphisms most likely interact with functional bacterial changes to stimulate aggressive immune responses that lead to chronic tissue injury. Identification of these host and microbial alterations in individual patients should lead to selective targeted interventions that correct underlying abnormalities and induce sustained and predictable therapeutic responses.", "title": "" }, { "docid": "41cfe93db7c4635e106a1d620ea31036", "text": "Neuroblastoma (NBL) and medulloblastoma (MBL) are tumors of the neuroectoderm that occur in children. NBL and MBL express Trk family tyrosine kinase receptors, which regulate growth, differentiation, and cell death. CEP-751 (KT-6587), an indolocarbazole derivative, is an inhibitor of Trk family tyrosine kinases at nanomolar concentrations. This study was designed to determine the effect of CEP-751 on the growth of NBL and MBL cell lines as xenografts. In vivo studies were conducted on four NBL cell lines (IMR-5, CHP-134, NBL-S, and SY5Y) and three MBL cell lines (D283, D341, and DAOY) using two treatment schedules: (a) treatment was started after the tumors were measurable (therapeutic study); or (b) 4-6 days after inoculation, before tumors were palpable (prevention study). CEP-751 was given at 21 mg/kg/dose administered twice a day, 7 days a week; the carrier vehicle was used as a control. In therapeutic studies, a significant difference in tumor size was seen between treated and control animals with IMR-5 on day 8 (P = 0.01), NBL-S on day 17 (P = 0.016), and CHP-134 on day 15 (P = 0.034). CEP-751 also had a significant growth-inhibitory effect on the MBL line D283 (on day 39, P = 0.031). Inhibition of tumor growth of D341 did not reach statistical significance, and no inhibition was apparent with DAOY. In prevention studies, CEP-751 showed a modest growth-inhibitory effect on IMR5 (P = 0.062) and CHP-134 (P = 0.049). Furthermore, inhibition of growth was greater in the SY5Y cell line transfected with TrkB compared with the untransfected parent cell line expressing no detectable TrkB. Terminal deoxynucleotidyl transferase-mediated nick end labeling studies showed CEP-751 induced apoptosis in the treated CHP-134 tumors, whereas no evidence of apoptosis was seen in the control tumors. Finally, there was no apparent toxicity identified in any of the treated mice. These results suggest that CEP-751 may be a useful therapeutic agent for NBL or MBL.", "title": "" }, { "docid": "0c3387ec7ed161d931bc08151e722d10", "text": "New updated! The latest book from a very famous author finally comes out. Book of the tower of hanoi myths and maths, as an amazing reference becomes what you need to get. What's for is this book? Are you still thinking for what the book is? Well, this is what you probably will get. You should have made proper choices for your better life. Book, as a source that may involve the facts, opinion, literature, religion, and many others are the great friends to join with.", "title": "" }, { "docid": "52dbfe369d1875c402220692ef985bec", "text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. 
Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101,846,236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.", "title": "" }, { "docid": "6f3a5219346e4c6c8dd094e391f93e2f", "text": "We consider 27 population and community terms used frequently by parasitologists when describing the ecology of parasites. We provide suggestions for various terms in an attempt to foster consistent use and to make terms used in parasite ecology easier to interpret for those who study free-living organisms. We suggest strongly that authors, whether they agree or disagree with us, provide complete and unambiguous definitions for all parameters of their studies.", "title": "" }, { "docid": "5bece01bed7c5a9a2433d95379882a37", "text": "The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the ε±μ± parameter. A phenomenological analysis of the permeability μ and permittivity ε in dispersive media serves to introduce the corresponding magnetic- and electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faraday-rotation isolator circulator at 35 GHz (λ ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (λ = 1.55 μm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, ε rather than μ provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.", "title": "" }, { "docid": "26ad79619be484ec239daf5b735ae5a4", "text": "The placenta is a complex organ, playing multiple roles during fetal development. Very little is known about the association between placental morphological abnormalities and fetal physiology. In this work, we present an open sourced, computationally tractable deep learning pipeline to analyse placenta histology at the level of the cell. 
By utilising two deep convolutional neural network architectures and transfer learning, we can robustly localise and classify placental cells within five classes with an accuracy of 89%. Furthermore, we learn deep embeddings encoding phenotypic knowledge that is capable of both stratifying five distinct cell populations and learn intraclass phenotypic variance. We envisage that the automation of this pipeline to population scale studies of placenta histology has the potential to improve our understanding of basic cellular placental biology and its variations, particularly its role in predicting adverse birth outcomes.", "title": "" }, { "docid": "ed7826f37cf45f56ba6e7abf98c509e7", "text": "The progressive ability of a six-strains L. monocytogenes cocktail to form biofilm on stainless steel (SS), under fish-processing simulated conditions, was investigated, together with the biocide tolerance of the developed sessile communities. To do this, the pathogenic bacteria were left to form biofilms on SS coupons incubated at 15°C, for up to 240h, in periodically renewable model fish juice substrate, prepared by aquatic extraction of sea bream flesh, under both mono-species and mixed-culture conditions. In the latter case, L. monocytogenes cells were left to produce biofilms together with either a five-strains cocktail of four Pseudomonas species (fragi, savastanoi, putida and fluorescens), or whole fish indigenous microflora. The biofilm populations of L. monocytogenes, Pseudomonas spp., Enterobacteriaceae, H2S producing and aerobic plate count (APC) bacteria, both before and after disinfection, were enumerated by selective agar plating, following their removal from surfaces through bead vortexing. Scanning electron microscopy was also applied to monitor biofilm formation dynamics and anti-biofilm biocidal actions. Results revealed the clear dominance of Pseudomonas spp. bacteria in all the mixed-culture sessile communities throughout the whole incubation period, with the in parallel sole presence of L. monocytogenes cells to further increase (ca. 10-fold) their sessile growth. With respect to L. monocytogenes and under mono-species conditions, its maximum biofilm population (ca. 6logCFU/cm2) was reached at 192h of incubation, whereas when solely Pseudomonas spp. cells were also present, its biofilm formation was either slightly hindered or favored, depending on the incubation day. However, when all the fish indigenous microflora was present, biofilm formation by the pathogen was greatly hampered and never exceeded 3logCFU/cm2, while under the same conditions, APC biofilm counts had already surpassed 7logCFU/cm2 by the end of the first 96h of incubation. All here tested disinfection treatments, composed of two common food industry biocides gradually applied for 15 to 30min, were insufficient against L. monocytogenes mono-species biofilm communities, with the resistance of the latter to significantly increase from the 3rd to 7th day of incubation. However, all these treatments resulted in no detectable L. monocytogenes cells upon their application against the mixed-culture sessile communities also containing the fish indigenous microflora, something probably associated with the low attached population level of these pathogenic cells before disinfection (<102CFU/cm2) under such mixed-culture conditions. Taken together, all these results expand our knowledge on both the population dynamics and resistance of L. 
monocytogenes biofilm cells under conditions resembling those encountered within the seafood industry and should be considered upon designing and applying effective anti-biofilm strategies.", "title": "" }, { "docid": "89e36aaa4c4d3ba5ec0326c6a568ebba", "text": "We demonstrate a MEMS-based display system with a very wide projection angle of up to 120deg. The system utilizes a gimbal-less two-axis micromirror scanner for high-speed laser beam-steering in both axes. The optical scan angle of the micromirrors is up to 16deg on each axis. A custom-designed fisheye lens is utilized to magnify scan angles. The system can display a variety of vector graphics as well as multiframe animations at arbitrary refresh rates, up to the overall bandwidth limit of the MEMS device. It is also possible to operate the scanners in point-to-point scanning, resonant and/or rastering modes. The system is highly adaptable for projection on a variety of surfaces including projection on specially coated transparent surfaces (Fig. 3.) The size of the displayed area, refresh rate, display mode (vector graphic or image raster,) and many other parameters are all adjustable by the user. The small size of the MEMS devices and lens as well as the ultra-low power consumption of the MEMS devices, in the milliwatt range, makes the overall system highly portable and miniaturizable.", "title": "" }, { "docid": "13451c2f433b9d32563012458bb4856c", "text": "Purpose – The paper’s aim is to explore the factors that affect the online game addiction and the role that online game addiction plays in the relationship between online satisfaction and loyalty. Design/methodology/approach – A web survey of online game players was conducted, with 1,186 valid responses collected. Structure equation modeling – specifically partial least squares – was used to assess the relationships in the proposed research framework. Findings – The results indicate that perceived playfulness and descriptive norms influence online game addiction. Furthermore, descriptive norms indirectly affect online game addiction through perceived playfulness. Addiction also directly contributes to loyalty and attenuates the relationship between satisfaction and loyalty. This finding partially explains why people remain loyal to an online game despite being dissatisfied. Practical implications – Online gaming vendors should strive to create amusing game content and to maintain their online game communities in order to enhance players’ perceptions of playfulness and the effects of social influences. Also, because satisfaction is the most significant indicator of loyalty, vendors can enhance loyalty by providing better services, such as fraud prevention and the detection of cheating behaviors. Originality/value – The value of this study is that it reveals the moderating influences of addiction on the satisfaction-loyalty relationship and factors that contribute to the online game addiction. Moreover, while many past studies focused on addiction’s negative effects and on groups considered particularly vulnerable to Internet addiction, this paper extends previous work by investigating the relationship of addiction to other marketing variables and by using a more general population, mostly young adults, as research subjects.", "title": "" }, { "docid": "4b57b59f475a643b281a1ee5e49c87bd", "text": "In this paper we present a Model Predictive Control (MPC) approach for combined braking and steering systems in autonomous vehicles. 
We start from the result presented in (Borrelli et al. (2005)) and (Falcone et al. (2007a)), where a Model Predictive Controller (MPC) for autonomous steering systems has been presented. As in (Borrelli et al. (2005)) and (Falcone et al. (2007a)) we formulate an MPC control problem in order to stabilize a vehicle along a desired path. In the present paper, the control objective is to best follow a given path by controlling the front steering angle and the brakes at the four wheels independently, while fulfilling various physical and design constraints.", "title": "" } ]
scidocsrr
9c1bcd73810f6c8113a878bbd84c2670
Building strong brands in a modern marketing communications environment
[ { "docid": "5a525ccce94c64cd8b2d8cf9125a7802", "text": "and others at both organizations for their support and valuable input. Special thanks to Grey Advertising's Ben Arno who suggested the term brand resonance. Additional thanks to workshop participants at Duke University and Dartmouth College. MSI was established in 1961 as a not-for profit institute with the goal of bringing together business leaders and academics to create knowledge that will improve business performance. The primary mission was to provide intellectual leadership in marketing and its allied fields. Over the years, MSI's global network of scholars from leading graduate schools of management and thought leaders from sponsoring corporations has expanded to encompass multiple business functions and disciplines. Issues of key importance to business performance are identified by the Board of Trustees, which represents MSI corporations and the academic community. MSI supports studies by academics on these issues and disseminates the results through conferences and workshops, as well as through its publications series. This report, prepared with the support of MSI, is being sent to you for your information and review. It is not to be reproduced or published, in any form or by any means, electronic or mechanical, without written permission from the Institute and the author. Building a strong brand has been shown to provide numerous financial rewards to firms, and has become a top priority for many organizations. In this report, author Keller outlines the Customer-Based Brand Equity (CBBE) model to assist management in their brand-building efforts. According to the model, building a strong brand involves four steps: (1) establishing the proper brand identity, that is, establishing breadth and depth of brand awareness, (2) creating the appropriate brand meaning through strong, favorable, and unique brand associations, (3) eliciting positive, accessible brand responses, and (4) forging brand relationships with customers that are characterized by intense, active loyalty. Achieving these four steps, in turn, involves establishing six brand-building blocks—brand salience, brand performance, brand imagery, brand judgments, brand feelings, and brand resonance. The most valuable brand-building block, brand resonance, occurs when all the other brand-building blocks are established. With true brand resonance, customers express a high degree of loyalty to the brand such that they actively seek means to interact with the brand and share their experiences with others. Firms that are able to achieve brand resonance should reap a host of benefits, for example, greater price premiums and more efficient and effective marketing programs. The CBBE model provides a yardstick by …", "title": "" } ]
[ { "docid": "85e43d5afefc791725a05c8e554653bf", "text": "An analytical model of an ultrawideband range gating radar is developed. The model is used for the system design of a radar for breath activity monitoring having sub-millimeter movement resolution and fulfilling the requirements of the Federal Communications Commission in terms of effective isotropic radiated power. The system study has allowed to define the requirements of the various radar subsystems that have been designed and realized by means of a low cost hybrid technology. The radar has been assembled and some performance factors, such as range and movement resolution, and the receiver conversion factor have been experimentally evaluated and compared with the model predictions. Finally, the radar has been tested for remote breath activity monitoring, showing recorded respiratory signals in very good agreement with those obtained by means of a conventional technique employing a piezoelectric belt.", "title": "" }, { "docid": "0c726f5313f7302081eca58530a0ed8f", "text": "Software is a complex entity composed in various modules with varied range of defect occurrence possibility. Efficient and timely prediction of defect occurrence in software allows software project managers to effectively utilize people, cost, time for better quality assurance. The presence of defects in a software leads to a poor quality software and also responsible for the failure of a software project. Sometime it is not possible to identify the defects and fixing them at the time of development and it is required to handle such defects any time whenever they are noticed by the team members. So it is important to predict defect-prone software modules prior to deployment of software project in order to plan better maintenance strategy. Early knowledge of defect prone software module can also help to make efficient process improvement plan within justified period of time and cost. This can further lead to better software release as well as high customer satisfaction subsequently. Accurate measurement and prediction of defect is a crucial issue in any software because it is an indirect measurement and is based on several metrics. Therefore, instead of considering all the metrics, it would be more appropriate to find out a suitable set of metrics which are relevant and significant for prediction of defects in any software modules. This paper proposes a feature selection based Linear Twin Support Vector Machine (LSTSVM) model to predict defect prone software modules. F-score, a feature selection technique, is used to determine the significant metrics set which are prominently affecting the defect prediction in a software modules. The efficiency of predictive model could be enhanced with reduced metrics set obtained after feature selection and further used to identify defective modules in a given set of inputs. This paper evaluates the performance of proposed model and compares it against other existing machine learning models. The experiment has been performed on four PROMISE software engineering repository datasets. The experimental results indicate the effectiveness of the proposed feature selection based LSTSVM predictive model on the basis standard performance evaluation parameters.", "title": "" }, { "docid": "1d3379e5e70d1fb7fa050c42805fe865", "text": "While many recent hand pose estimation methods critically rely on a training set of labelled frames, the creation of such a dataset is a challenging task that has been overlooked so far. 
As a result, existing datasets are limited to a few sequences and individuals, with limited accuracy, and this prevents these methods from delivering their full potential. We propose a semi-automated method for efficiently and accurately labeling each frame of a hand depth video with the corresponding 3D locations of the joints: The user is asked to provide only an estimate of the 2D reprojections of the visible joints in some reference frames, which are automatically selected to minimize the labeling work by efficiently optimizing a sub-modular loss function. We then exploit spatial, temporal, and appearance constraints to retrieve the full 3D poses of the hand over the complete sequence. We show that this data can be used to train a recent state-of-the-art hand pose estimation method, leading to increased accuracy.", "title": "" }, { "docid": "973da8a50b1250688fceb94611a4f0f7", "text": "Experts in sport benefit from some cognitive mechanisms and strategies which enables them to reduce response times and increase response accuracy.Reaction time is mediated by different factors including type of sport that athlete is participating in and expertise status. The present study aimed to investigate the relationship between CRTs and expertise level in collegiate athletes, as well as evaluating the role of sport and gender differences.44 male and female athletesrecruited from team and individual sports at elite and non-elite levels. The Lafayette multi-choice reaction time was used to collect data.All subjectsperformed a choice reaction time task that required response to visual and auditory stimuli. Results demonstrated a significant overall choice reaction time advantage for maleathletes, as well as faster responses to stimuli in elite participants.Athletes of team sportsdid not showmore accurate performance on the choice reaction time tasks than athletes of individual sports. These findings suggest that there is a relation between choice reaction time and expertise in athletes and this relationship can be mediated by gender differences. Overall, athletes with intrinsic perceptualmotor advantages such as faster reaction times are potentially more equipped for participation in high levels of sport.", "title": "" }, { "docid": "9bc182298ad6158dbb5de4da15353312", "text": "We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or pairs of data. We derive a training algorithm for Spectral Inference Networks that addresses the bias in the gradients due to finite batch size and allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets as well as the Arcade Learning Environment. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators, can discover interpretable representations from video and find meaningful subgoals in reinforcement learning environments.", "title": "" }, { "docid": "1c6078d68891b6600727a82841812666", "text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. 
This can serve as a proactive approach for network management and planning tasks. The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.", "title": "" }, { "docid": "ff619ce19b787d32aa78a6ac295d1f1d", "text": "Mullerian duct anomalies (MDAs) are rare, affecting approximately 1% of all women and about 3% of women with poor reproductive outcomes. These congenital anomalies usually result from one of the following categories of abnormalities of the mullerian ducts: failure of formation (no development or underdevelopment) or failure of fusion of the mullerian ducts. The American Fertility Society (AFS) classification of uterine anomalies is widely accepted and includes seven distinct categories. MR imaging has consolidated its role as the imaging modality of choice in the evaluation of MDA. MRI is capable of demonstrating the anatomy of the female genital tract remarkably well and is able to provide detailed images of the intra-uterine zonal anatomy, delineate the external fundal contour of the uterus, and comprehensively image the entire female pelvis in multiple imaging planes in a single examination. The purpose of this pictorial essay is to show the value of MRI in the diagnosis of MDA and to review the key imaging features of anomalies of formation and fusion, emphasizing the relevance of accurate diagnosis before therapeutic intervention.", "title": "" }, { "docid": "867041312ec43a2b13937e9b82d68dc5", "text": "This paper presents a method of implementing impedance control (with inertia, damping, and stiffness terms) on a dual-arm system by using the relative Jacobian technique. The proposed method significantly simplifies the control implementation because the dual arm is treated as a single manipulator, whose end-effector motion is defined by the relative motion between the two end effectors. As a result, task description becomes simpler and more intuitive when specifying the desired impedance and the desired trajectories. This is the basis for the relative impedance control. In addition, the use of time-delay estimation enhances ease of implementation of our proposed method to a physical system, which would have been otherwise a very tedious and complicated process.", "title": "" }, { "docid": "7f1625c0d1ed39245c77db9cd3ca2bd7", "text": "We address the computational problem of novel human pose synthesis. Given an image of a person and a desired pose, we produce a depiction of that person in that pose, retaining the appearance of both the person and background. 
We present a modular generative neural network that synthesizes unseen poses using training pairs of images and poses taken from human action videos. Our network separates a scene into different body part and background layers, moves body parts to new locations and refines their appearances, and composites the new foreground with a hole-filled background. These subtasks, implemented with separate modules, are trained jointly using only a single target image as a supervised label. We use an adversarial discriminator to force our network to synthesize realistic details conditioned on pose. We demonstrate image synthesis results on three action classes: golf, yoga/workouts and tennis, and show that our method produces accurate results within action classes as well as across action classes. Given a sequence of desired poses, we also produce coherent videos of actions.", "title": "" }, { "docid": "27582287aeb1abccda7c7582d75de676", "text": "Affect Control Theory is a mathematical representation of the interactions between two persons, in which it is posited that people behave in a way so as to minimize the amount of deflection between their cultural emotional sentiments and the transient emotional sentiments that are created by each situation. Affect Control Theory presents a maximum likelihood solution in which optimal behaviours or identities can be predicted based on past interactions. Here, we formulate a probabilistic and decision theoretic model of the same underlying principles, and show this to be a generalisation of the basic theory. The model is more expressive than the original theory, as it can maintain multiple hypotheses about behaviours and identities simultaneously as a probability distribution. This allows the model to generate affectively believable interactions with people by learning about their identity and predicting their behaviours. We demonstrate this generalisation with a set of simulations. We then show how our model can be used as an emotional \"plug-in\" for systems that interact with humans. We demonstrate human-interactive capability by building a simple intelligent tutoring application and pilot-testing it in an experiment with 20 participants.", "title": "" }, { "docid": "80e8541113d629020a7057ca1f87b6e0", "text": "More recently, remote sensing image classification has been moving from pixel-level interpretation to scene-level semantic understanding, which aims to label each scene image with a specific semantic class. While significant efforts have been made in developing various methods for remote sensing image scene classification, most of them rely on handcrafted features. In this letter, we propose a novel feature representation method for scene classification, named bag of convolutional features (BoCF). Different from the traditional bag of visual words-based methods in which the visual words are usually obtained by using handcrafted feature descriptors, the proposed BoCF generates visual words from deep convolutional features using off-the-shelf convolutional neural networks. 
Extensive evaluations on a publicly available remote sensing image scene classification benchmark and comparison with the state-of-the-art methods demonstrate the effectiveness of the proposed BoCF method for remote sensing image scene classification.", "title": "" }, { "docid": "3d0f9cede1630367d28f06fe42b964a8", "text": "In-database analytics is of great practical importance as it avoids the costly repeated loop data scientists have to deal with on a daily basis: select features, export the data, convert data format, train models using an external tool, reimport the parameters. It is also a fertile ground of theoretically fundamental and challenging problems at the intersection of relational and statistical data models. This paper introduces a unified framework for training and evaluating a class of statistical learning models inside a relational database. This class includes ridge linear regression, polynomial regression, factorization machines, and principal component analysis. We show that, by synergizing key tools from relational database theory such as schema information, query structure, recent advances in query evaluation algorithms, and from linear algebra such as various tensor and matrix operations, one can formulate in-database learning problems and design efficient algorithms to solve them. The algorithms and models proposed in the paper have already been implemented and deployed in retail-planning and forecasting applications, with significant performance benefits over out-of-database solutions that require the costly data-export loop.", "title": "" }, { "docid": "216d4c4dc479588fb91a27e35b4cb403", "text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.", "title": "" }, { "docid": "8a0afddaf9909aa343915b0481fd9988", "text": "INTRODUCTION\nThe majority of osteoporotic, spinal cord compressive, vertebral fractures occurs at the thoracolumbar junction level. When responsible for neurological impairment, these rare lesions require a decompression procedure. We present the results of a new option to treat these lesions: an open balloon kyphoplasty associated with a short-segment posterior internal fixation.\n\n\nMATERIALS AND METHODS\nTwelve patients, aged a mean 72.3 years, were included in this prospective series; all of them presented osteoporotic burst fractures located between T11 and L2 associated with neurological impairment. 
The surgical procedure first consisted of a laminectomy, for decompression, followed by an open balloon kyphoplasty. A short-segment posterior internal fixation was subsequently put into place when the local kyphosis was considered severe. A CAT scan study evaluated local vertebral body's height restoration using two pre- and postoperative radiological indices.\n\n\nRESULTS\nAll of the patients in the series were followed up for a mean 14 months. Local kyphosis improved a mean 10.8 (p<0.001). Vertebral body height was also substantially restored, with a mean gain of 26% according to the anterior height/adjacent height ratio and 28% according to the Beck Index (p<0.001). Two cases of cement leakage were recorded, with no adverse clinical side effect. Complete neurological recovery was observed in 10 patients; two retained a minimal neurological deficit but kept a walking capacity.\n\n\nDISCUSSION\nThe results presented in this study confirm the data reported in the literature in terms of local kyphosis correction and vertebral body height restoration. The combination of this technique with laminectomy plus osteosynthesis allowed us to effectively treat burst fractures of the thoracolumbar junction and led to stable results 1 year after surgery. This can be advantageous in a population often carrying multiple co-morbidities. With a single operation, we can achieve neurological decompression and spinal column stability in a minimally invasive way; this avoids more substantial surgery in these fragile patients.\n\n\nLEVEL OF EVIDENCE\nLevel IV. Therapeutic prospective study.", "title": "" }, { "docid": "e9d8a2e0691067f6181ca3c62ca7a86c", "text": "K-means is a popular algorithm in document clustering, which is fast and efficient. The disadvantages of K-means are that it requires one to set the number of clusters first and select the initial clustering centers randomly. Latent Dirichlet Allocation (LDA) is a mature probabilistic topic model, which aids in document dimensionality reduction, semantic mining and information retrieval. We present a document clustering method based on LDA and K-means (LDA_K-means). In order to improve document clustering effect with K-means, we discover the initial clustering centers by finding the typical latent topics extracted by LDA. The effectiveness of LDA_K-means is evaluated on the 20 Newsgroups data sets. We show that LDA_K-means can significantly improve the clustering effect in contrast to clustering based on random initialization of K-means and LDA (LDA_KMR).", "title": "" }, { "docid": "1514ce079eba01f4a78ab13c49cc2fa7", "text": "The task of event trigger labeling is typically addressed in the standard supervised setting: triggers for each target event type are annotated as training data, based on annotation guidelines. We propose an alternative approach, which takes the example trigger terms mentioned in the guidelines as seeds, and then applies an eventindependent similarity-based classifier for trigger labeling. This way we can skip manual annotation for new event types, while requiring only minimal annotated training data for few example events at system setup. 
Our method is evaluated on the ACE-2005 dataset, achieving 5.7% F1 improvement over a state-of-the-art supervised system which uses the full training data.", "title": "" }, { "docid": "6df80f85e102b94c1b29b8e0dca6cab4", "text": "With the shortage of the energy and ever increasing of the oil price, research on the renewable and green energy sources, especially the solar arrays and the fuel cells, becomes more and more important. How to achieve high step-up and high efficiency DC/DC converters is the major consideration in the renewable grid-connected power applications due to the low voltage of PV arrays and fuel cells. The topology study with high step-up conversion is concentrated and most topologies recently proposed in these applications are covered and classified. The advantages and disadvantages of these converters are discussed and the major challenges of high step-up converters in renewable energy applications are summarized. This paper would like to make a clear picture on the general law and framework for the next generation non-isolated high step-up DC/DC converters.", "title": "" }, { "docid": "b277765cf0ced8162b6f05cc8f91fb71", "text": "Questions and their corresponding answers within a community based question answering (CQA) site are frequently presented as top search results forWeb search queries and viewed by millions of searchers daily. The number of answers for CQA questions ranges from a handful to dozens, and a searcher would be typically interested in the different suggestions presented in various answers for a question. Yet, especially when many answers are provided, the viewer may not want to sift through all answers but to read only the top ones. Prior work on answer ranking in CQA considered the qualitative notion of each answer separately, mainly whether it should be marked as best answer. We propose to promote CQA answers not only by their relevance to the question but also by the diversification and novelty qualities they hold compared to other answers. Specifically, we aim at ranking answers by the amount of new aspects they introduce with respect to higher ranked answers (novelty), on top of their relevance estimation. This approach is common in Web search and information retrieval, yet it was not addressed within the CQA settings before, which is quite different from classic document retrieval. We propose a novel answer ranking algorithm that borrows ideas from aspect ranking and multi-document summarization, but adapts them to our scenario. Answers are ranked in a greedy manner, taking into account their relevance to the question as well as their novelty compared to higher ranked answers and their coverage of important aspects. An experiment over a collection of Health questions, using a manually annotated gold-standard dataset, shows that considering novelty for answer ranking improves the quality of the ranked answer list.", "title": "" }, { "docid": "cc8a4744f05d5f46feacaff27b91a86c", "text": "In the recent past, several sampling-based algorithms have been proposed to compute trajectories that are collision-free and dynamically-feasible. However, the outputs of such algorithms are notoriously jagged. In this paper, by focusing on robots with car-like dynamics, we present a fast and simple heuristic algorithm, named Convex Elastic Smoothing (CES) algorithm, for trajectory smoothing and speed optimization. The CES algorithm is inspired by earlier work on elastic band planning and iteratively performs shape and speed optimization. 
The key feature of the algorithm is that both optimization problems can be solved via convex programming, making CES particularly fast. A range of numerical experiments show that the CES algorithm returns high-quality solutions in a matter of a few hundreds of milliseconds and hence appears amenable to a real-time implementation.", "title": "" }, { "docid": "b53e5d6054b684990e9c5c1e5d2b6b7d", "text": "Automatic Dependent Surveillance-Broadcast (ADS-B) is one of the key technologies for future “e-Enabled” aircrafts. ADS-B uses avionics in the e-Enabled aircrafts to broadcast essential flight data such as call sign, altitude, heading, and other extra positioning information. On the one hand, ADS-B brings significant benefits to the aviation industry, but, on the other hand, it could pose security concerns as channels between ground controllers and aircrafts for the ADS-B communication are not secured, and ADS-B messages could be captured by random individuals who own ADS-B receivers. In certain situations, ADS-B messages contain sensitive information, particularly when communications occur among mission-critical civil airplanes. These messages need to be protected from any interruption and eavesdropping. The challenge here is to construct an encryption scheme that is fast enough for very frequent encryption and that is flexible enough for effective key management. In this paper, we propose a Staged Identity-Based Encryption (SIBE) scheme, which modifies Boneh and Franklin's original IBE scheme to address those challenges, that is, to construct an efficient and functional encryption scheme for ADS-B system. Based on the proposed SIBE scheme, we provide a confidentiality framework for future e-Enabled aircraft with ADS-B capability.", "title": "" } ]
scidocsrr
390f817ebe88bff3be540c4282ffbc25
Automatic Facial Expression Recognition Using Gabor Filter and Expression Analysis
[ { "docid": "7ab87738e0dc081d26a8cf223b957833", "text": "We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, linear discriminant analysis. We also explored feature selection techniques, including the use of AdaBoost for feature selection prior to classification by SVM or LDA. Best results were obtained by selecting a subset of Gabor filters using AdaBoost followed by classification with support vector machines. The system operates in real-time, and obtained 93% correct generalization to novel subjects for a 7-way forced choice on the Cohn-Kanade expression dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics. We applied the system to to fully automated recognition of facial actions (FACS). The present system classifies 17 action units, whether they occur singly or in combination with other actions, with a mean accuracy of 94.8%. We present preliminary results for applying this system to spontaneous facial expressions.", "title": "" } ]
[ { "docid": "8aa92d178ff383742c1f3cc12d2d8539", "text": "Hypertext documents, such as web pages and academic papers, are of great importance in delivering information in our daily life. Although being effective on plain documents, conventional text embedding methods suffer from information loss if directly adapted to hyper-documents. In this paper, we propose a general embedding approach for hyper-documents, namely, hyperdoc2vec, along with four criteria characterizing necessary information that hyper-document embedding models should preserve. Systematic comparisons are conducted between hyperdoc2vec and several competitors on two tasks, i.e., paper classification and citation recommendation, in the academic paper domain. Analyses and experiments both validate the superiority of hyperdoc2vec to other models w.r.t. the four criteria.", "title": "" }, { "docid": "9d7a441731e9d0c62dd452ccb3d19f7b", "text": " In many countries, especially in under developed and developing countries proper health care service is a major concern. The health centers are far and even the medical personnel are deficient when compared to the requirement of the people. For this reason, health services for people who are unhealthy and need health monitoring on regular basis is like impossible. This makes the health monitoring of healthy people left far more behind. In order for citizens not to be deprived of the primary care it is always desirable to implement some system to solve this issue. The application of Internet of Things (IoT) is wide and has been implemented in various areas like security, intelligent transport system, smart cities, smart factories and health. This paper focuses on the application of IoT in health care system and proposes a novel architecture of making use of an IoT concept under fog computing. The proposed architecture can be used to acknowledge the underlying problem of deficient clinic-centric health system and change it to smart patientcentric health system.", "title": "" }, { "docid": "1ef1e20f24fa75b40bcc88a40a544c5b", "text": "Monitoring is the act of collecting information concerning the characteristics and status of resources of interest. Monitoring grid resources is a lively research area given the challenges and manifold applications. The aim of this paper is to advance the understanding of grid monitoring by introducing the involved concepts, requirements, phases, and related standardisation activities, including Global Grid Forum’s Grid Monitoring Architecture. Based on a refinement of the latter, the paper proposes a taxonomy of grid monitoring systems, which is employed to classify a wide range of projects and frameworks. The value of the offered taxonomy lies in that it captures a given system’s scope, scalability, generality and flexibility. The paper concludes with, among others, a discussion of the considered systems, as well as directions for future research. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2936f8e1f9a6dcf2ba4fdbaee73684e2", "text": "Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. 
Like recent research, we view this as an important recommendation problem -- for a given user, UT which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.", "title": "" }, { "docid": "36a6c72e049ce551fcf302e19eb5063b", "text": "We propose a complete probabilistic discriminative framework for performing sentencelevel discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin.", "title": "" }, { "docid": "4f1111b33789e25ed896ad366f0d98de", "text": "As an ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words but the vector corresponding to the words are only meaningful relative to each other. Neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related, along predefined concepts. Therefore, we impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen as Roget’s Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the Thesaurus and extends to other related words as well. We quantify the extent of interpretability and assignment of meaning from our experimental results. We also demonstrate the preservation of semantic coherence of the resulting vector space by using word-analogy and word-similarity tests. These tests show that the interpretability-imparted word embeddings that are obtained by the proposed framework do not sacrifice performances in common benchmark tests.", "title": "" }, { "docid": "ff91ed2072c93eeae5f254fb3de0d780", "text": "Machine learning requires access to all the data used for training. Recently, Google Research proposed Federated Learning as an alternative, where the training data is distributed over a federation of clients that each only access their own training data; the partially trained model is updated in a distributed fashion to maintain a situation where the data from all participating clients remains unknown. In this research we construct different distributions of the DMOZ dataset over the clients in the network and compare the resulting performance of Federated Averaging when learning a classifier. 
We find that the difference in spread of topics for each client has a strong correlation with the performance of the Federated Averaging algorithm.", "title": "" }, { "docid": "d5fbbd249842b40f3a81f1229213c528", "text": "In recent years, spatial applications have become more and more important in both scientific research and industry. Spatial query processing is the fundamental functioning component to support spatial applications. However, the state-of-the-art techniques of spatial query processing are facing significant challenges as the data expand and user accesses increase. In this paper we propose and implement a novel scheme (named VegaGiStore) to provide efficient spatial query processing over big spatial data and numerous concurrent user queries. Firstly, a geography-aware approach is proposed to organize spatial data in terms of geographic proximity, and this approach can achieve high aggregate I/O throughput. Secondly, in order to improve data retrieval efficiency, we design a two-tier distributed spatial index for efficient pruning of the search space. Thirdly, we propose an \"indexing + MapReduce'' data processing architecture to improve the computation capability of spatial query. Performance evaluations of the real-deployed VegaGiStore system confirm its effectiveness.", "title": "" }, { "docid": "670556463e3204a98b1e407ea0619a1f", "text": "1 Ekaterina Prasolova-Forland, IDI, NTNU, Sem Salandsv 7-9, N-7491 Trondheim, Norway [email protected] Abstract  This paper discusses awareness support in educational context, focusing on the support offered by collaborative virtual environments. Awareness plays an important role in everyday educational activities, especially in engineering courses where projects and group work is an integral part of the curriculum. In this paper we will provide a general overview of awareness in computer supported cooperative work and then focus on the awareness mechanisms offered by CVEs. We will also discuss the role and importance of these mechanisms in educational context and make some comparisons between awareness support in CVEs and in more traditional tools.", "title": "" }, { "docid": "f205f1760e33faebf2ded8065ff3c717", "text": "An audience effect arises when a person's behaviour changes because they believe someone else is watching them. Though these effects have been known about for over 110 years, the cognitive mechanisms of the audience effect and how it might vary across different populations and cultures remains unclear. In this review, we examine the hypothesis that the audience effect draws on implicit mentalising abilities. Behavioural and neuroimaging data from a number of tasks are consistent with this hypothesis. We further review data suggest that how people respond to audiences may vary over development, personality factors, cultural background and clinical diagnosis including autism and anxiety disorder. Overall, understanding and exploring the audience effect may contribute to our models of social interaction, including reputation management and mentalising.", "title": "" }, { "docid": "98689a2f03193a2fb5cc5195ef735483", "text": "Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement start to investigate the darknet markets to study the cybercriminal networks and predict future incidents. 
However, vendors in these markets often create multiple accounts (\\em i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove the possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series deep neural networks to model the photography styles. We apply transfer learning to the model training, which allows us to accurately fingerprint vendors with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into the coordinated Sybil activities such as price manipulation, buyer scam, and product stocking and reselling.", "title": "" }, { "docid": "9326b7c1bd16e7db931131f77aaad687", "text": "We argue in this article that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalized grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution, and inference.", "title": "" }, { "docid": "3b9af99b33c15188a8ec50c7decd3b28", "text": "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. 
Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting.", "title": "" }, { "docid": "df97ff54b80a096670c7771de1f49b6d", "text": "In recent times, Bitcoin has gained special attention both from industry and academia. The underlying technology that enables Bitcoin (or more generally crypto-currency) is called blockchain. At the core of the blockchain technology is a data structure that keeps record of the transactions in the network. The special feature that distinguishes it from existing technology is its immutability of the stored records. To achieve immutability, it uses consensus and cryptographic mechanisms. As the data is stored in distributed nodes this technology is also termed as \"Distributed Ledger Technology (DLT)\". As many researchers and practitioners are joining the hype of blockchain, some of them are raising the question about the fundamental difference between blockchain and traditional database and its real value or potential. In this paper, we present a critical analysis of both technologies based on a survey of the research literature where blockchain solutions are applied to various scenarios. Based on this analysis, we further develop a decision tree diagram that will help both practitioners and researchers to choose the appropriate technology for their use cases. Using our proposed decision tree we evaluate a sample of the existing works to see to what extent the blockchain solutions have been used appropriately in the relevant problem domains.", "title": "" }, { "docid": "06518637c2b44779da3479854fdbb84d", "text": "OBJECTIVE\nThe relative short-term efficacy and long-term benefits of pharmacologic versus psychotherapeutic interventions have not been studied for posttraumatic stress disorder (PTSD). This study compared the efficacy of a selective serotonin reup-take inhibitor (SSRI), fluoxetine, with a psychotherapeutic treatment, eye movement desensitization and reprocessing (EMDR), and pill placebo and measured maintenance of treatment gains at 6-month follow-up.\n\n\nMETHOD\nEighty-eight PTSD subjects diagnosed according to DSM-IV criteria were randomly assigned to EMDR, fluoxetine, or pill placebo. They received 8 weeks of treatment and were assessed by blind raters posttreatment and at 6-month follow-up. The primary outcome measure was the Clinician-Administered PTSD Scale, DSM-IV version, and the secondary outcome measure was the Beck Depression Inventory-II. The study ran from July 2000 through July 2003.\n\n\nRESULTS\nThe psychotherapy intervention was more successful than pharmacotherapy in achieving sustained reductions in PTSD and depression symptoms, but this benefit accrued primarily for adult-onset trauma survivors. At 6-month follow-up, 75.0% of adult-onset versus 33.3% of child-onset trauma subjects receiving EMDR achieved asymptomatic end-state functioning compared with none in the fluoxetine group. For most childhood-onset trauma patients, neither treatment produced complete symptom remission.\n\n\nCONCLUSIONS\nThis study supports the efficacy of brief EMDR treatment to produce substantial and sustained reduction of PTSD and depression in most victims of adult-onset trauma. 
It suggests a role for SSRIs as a reliable first-line intervention to achieve moderate symptom relief for adult victims of childhood-onset trauma. Future research should assess the impact of lengthier intervention, combination treatments, and treatment sequencing on the resolution of PTSD in adults with childhood-onset trauma.", "title": "" }, { "docid": "f2239ebff484962c302b00faf24374e4", "text": "In this paper, a methodology for the automated detection and classification of transient events in electroencephalographic (EEG) recordings is presented. It is based on association rule mining and classifies transient events into four categories: epileptic spikes, muscle activity, eye blinking activity, and sharp alpha activity. The methodology involves four stages: 1) transient event detection; 2) clustering of transient events and feature extraction; 3) feature discretization and feature subset selection; and 4) association rule mining and classification of transient events. The methodology is evaluated using 25 EEG recordings, and the best obtained accuracy was 87.38%. The proposed approach combines high accuracy with the ability to provide interpretation for the decisions made, since it is based on a set of association rules", "title": "" }, { "docid": "cd6fce2e64ba8933339dd59491b9ef1d", "text": "The first micrometer-sized graphene flakes extracted from graphite demonstrated outstanding electrical, mechanical and chemical properties, but they were too small for practical applications. However, the recent advances in graphene synthesis and transfer techniques have enabled various macroscopic applications such as transparent electrodes for touch screens and light-emitting diodes (LEDs) and thin-film transistors for flexible electronics in particular. With such exciting potential, a great deal of effort has been put towards producing larger size graphene in the hopes of industrializing graphene production. Little less than a decade after the first discovery, graphene now can be synthesized up to 30 inches in its diagonal size using chemical vapour deposition methods. In making this possible, it was not only the advances in the synthesis techniques but also the transfer methods that deliver graphene onto target substrates without significant mechanical damage. In this article, the recent advancements in transferring graphene to arbitrary substrates will be extensively reviewed. The methods are categorized into mechanical exfoliation, polymer-assisted transfer, continuous transfer by roll-to-roll process, and transfer-free techniques including direct synthesis on insulating substrates.", "title": "" }, { "docid": "02e961880a7925eb9d41c372498cb8d0", "text": "Since debt is typically riskier in recessions, transfers from equity holders to debt holders associated with each investment also tend to concentrate in recessions. Such systematic risk exposure of debt overhang has important implications for the investment and financing decisions of firms and on the ex ante costs of debt overhang. Using a calibrated dynamic capital structure/real option model, we show that the costs of debt overhang become significantly higher in the presence of macroeconomic risk. We also provide several new predictions that relate the cyclicality of a firm’s assets in place and growth options to its investment and capital structure decisions. 
We are grateful to Santiago Bazdresch, Bob Goldstein, David Mauer (WFA discussant), Erwan Morellec, Stew Myers, Chris Parsons, Michael Roberts, Antoinette Schoar, Neng Wang, Ivo Welch, and seminar participants at MIT, Federal Reserve Bank of Boston, Boston University, Dartmouth, University of Lausanne, University of Minnesota, the Third Risk Management Conference at Mont Tremblant, the Minnesota Corporate Finance Conference, and the WFA for their comments. MIT Sloan School of Management and NBER. Email: [email protected]. Tel. 617-324-3896. MIT Sloan School of Management. Email: [email protected]. Tel. 617-253-7218.", "title": "" }, { "docid": "40beda0d1e99f4cc5a15a3f7f6438ede", "text": "One of the major challenges with electric shipboard power systems (SPS) is preserving the survivability of the system under fault situations. Some minor faults in SPS can result in catastrophic consequences. Therefore, it is essential to investigate available fault management techniques for SPS applications that can enhance SPS robustness and reliability. Many recent studies in this area take different approaches to address fault tolerance in SPSs. This paper provides an overview of the concepts and methodologies that are utilized to deal with faults in the electric SPS. First, a taxonomy of the types of faults and their sources in SPS is presented; then, the methods that are used to detect, identify, isolate, and manage faults are reviewed. Furthermore, common techniques for designing a fault management system in SPS are analyzed and compared. This paper also highlights several possible future research directions.", "title": "" }, { "docid": "1d5a91029960f267b49831bee80e348f", "text": "Deep neural networks (DNNs) have become the dominant technique for acoustic-phonetic modeling due to their markedly improved performance over other models. Despite this, little is understood about the computation they implement in creating phonemic categories from highly variable acoustic signals. In this paper, we analyzed a DNN trained for phoneme recognition and characterized its representational properties, both at the single node and population level in each layer. At the single node level, we found strong selectivity to distinct phonetic features in all layers. Node selectivity to specific manners and places of articulation appeared from the first hidden layer and became more explicit in deeper layers. Furthermore, we found that nodes with similar phonetic feature selectivity were differentially activated to different exemplars of these features. Thus, each node becomes tuned to a particular acoustic manifestation of the same feature, providing an effective representational basis for the formation of invariant phonemic categories. This study reveals that phonetic features organize the activations in different layers of a DNN, a result that mirrors the recent findings of feature encoding in the human auditory system. These insights may provide better understanding of the limitations of current models, leading to new strategies to improve their performance.", "title": "" } ]
scidocsrr
354579b2298c9d6677cd502a74e92e6e
Hybrid Partitioned SRAM-Based Ternary Content Addressable Memory
[ { "docid": "39ab78b58f6ace0fc29f18a1c4ed8ebc", "text": "We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.", "title": "" } ]
[ { "docid": "55304b1a38d49cd65658964c3aea5df5", "text": "In this paper, we take the view that any formalization of commitments has to come together with a formalization of time, events/actions and change. We enrich a suitable formalism for reasoning about time, event/action and change in order to represent and reason about commitments. We employ a three-valued based temporal first-order non-monotonic logic (TFONL) that allows an explicit representation of time and events/action. TFONL subsumes the action languages presented in the literature and takes into consideration the frame, qualification and ramification problems, and incorporates to a domain description the set of rules governing change. It can handle protocols for the different types of dialogues such as information seeking, inquiry and negotiation. We incorporate commitments into TFONL to obtain Com-TFONL. Com-TFONL allows an agent to reason about its commitments and about other agents’ behaviour during a dialogue. Thus, agents can employ social commitments to act on, argue with and reason about during interactions with other agents. Agents may use their reasoning and argumentative capabilities in order to determine the appropriate communicative acts during conversations. Furthermore, Com-TFONL allows for an integration of commitments and arguments which helps in capturing the public aspects of a conversation and the reasoning aspects required in coherent conversations.", "title": "" }, { "docid": "58a47d7fab243f265621be47f0bc5b58", "text": "A 1.8-kV 100-ps rise-time pulsed-power generator operating at a repetition frequency of 50 kHz is presented. The generator consists of three compression stages. In the first stage, a power MOSFET produces high voltage by breaking an inductor current. In the second stage, a 3-kV drift-step-recovery diode cuts the reverse current rapidly to create a 1-ns rise-time pulse. In the last stage, a silicon-avalanche shaper is used as a fast 100-ps closing switch. Experimental investigation showed that, by optimizing the generator operating point, the shot-to-shot jitter can be reduced to less than 13 ps. The theoretical model of the pulse-forming circuit is presented.", "title": "" }, { "docid": "39430478909e5818b242e0b28db419f0", "text": "BACKGROUND\nA modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined.\n\n\nAIM\nThe purpose of the current study was to evaluate the concurrent and predictive validity of the mBBS for individuals with IVD.\n\n\nMETHOD\nFifty-four individuals with IVD and Gross Motor Functioning Classification System (GMFCS) Levels I and II participated in this study. The mBBS, the Centre of Gravity (COG), the Comfortable Walking Speed (CWS), and the Barthel Index (BI) were assessed during one session in order to determine the concurrent validity. The percentage of explained variance was determined by analyzing the squared multiple correlation between the mBBS and the BI, COG, CWS, GMFCS, and age, gender, level of intellectual disability, presence of epilepsy, level of visual impairment, and presence of hearing impairment. Furthermore, an overview of the degree of dependence between the mBBS, BI, CWS, and COG was obtained by graphic modelling. 
Predictive validity of mBBS was determined with respect to the number of falling incidents during 26 weeks and evaluated with Zero-inflated regression models using the explanatory variables of mBBS, BI, COG, CWS, and GMFCS.\n\n\nRESULTS\nThe results demonstrated that two significant explanatory variables, the GMFCS Level and the BI, and one non-significant variable, the CWS, explained approximately 60% of the mBBS variance. Graphical modelling revealed that BI was the most important explanatory variable for mBBS moreso than COG and CWS. Zero-inflated regression on the frequency of falling incidents demonstrated that the mBBS was not predictive, however, COG and CWS were.\n\n\nCONCLUSIONS\nThe results indicated that the concurrent validity as well as the predictive validity of mBBS were low for persons with IVD.", "title": "" }, { "docid": "2615f2f66adeaf1718d7afa5be3b32b1", "text": "In this paper, an advanced design of an Autonomous Underwater Vehicle (AUV) is presented. The design is driven only by four water pumps. The different power combinations of the four motors provides the force and moment for propulsion and maneuvering. No control surfaces are needed in this design, which make the manufacturing cost of such a vehicle minimal and more reliable. Based on the propulsion method of the vehicle, a nonlinear AUV dynamic model is studied. This nonlinear model is linearized at the operation point. A control strategy of the AUV is proposed including attitude control and auto-pilot design. Simulation results for the attitude control loop are presented to validate this approach.", "title": "" }, { "docid": "ba13195d39b28d5205b33452bfebd6e7", "text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications. The antenna consists of two open L-shaped slot (LS) antenna elements and a narrow slot on the ground plane. The antenna elements are placed perpendicularly to each other to obtain high isolation, and the narrow slot is added to reduce the mutual coupling of antenna elements in the low frequency band (3-4.5 GHz). The proposed MIMO antenna has a compact size of 32 ×32 mm2, and the antenna prototype is fabricated and measured. The measured results show that the proposed antenna design achieves an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than 15 dB, and a low envelope correlation coefficient of better than 0.02 across the frequency band, which are suitable for portable UWB applications.", "title": "" }, { "docid": "37a8ea1b792466c6e39709879e7a7b41", "text": "The lightning impulse withstand voltage for an oil-immersed power transformer is determined by the value of the lightning surge overvoltage generated at the transformer terminal. This overvoltage value has been conventionally obtained through lightning surge analysis using the electromagnetic transients program (EMTP), where the transformer is often simulated by a single lumped capacitance. However, since high frequency surge overvoltages ranging from several kHz to several MHz are generated in an actual system, a transformer circuit model capable of simulating the range up to this high frequency must be developed for further accurate analysis. In this paper, a high frequency circuit model for an oil-immersed transformer was developed and its validity was verified through comparison with the measurement results on the model winding actually produced. 
Consequently, it emerged that a high frequency model with three serially connected LC parallel circuits could adequately simulate the impedance characteristics of the winding up to a high frequency range of several MHz. Following lightning surge analysis for a 500 kV substation using this high frequency model, the peak value of the waveform was evaluated as lower than that simulated by conventional lumped capacitance even though the front rising was steeper. This phenomenon can be explained by the charging process of the capacitance circuit inside the transformer. Furthermore, the waveform analyzed by each model was converted into an equivalent standard lightning impulse waveform and the respective peak values were compared. As a result, the peak value obtained by the lumped capacitance simulation was evaluated as relatively higher under the present analysis conditions.", "title": "" }, { "docid": "dadcecd178721cf1ea2b6bf51bc9d246", "text": "Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect of substantial applications, notably in human–computer interaction. Progress in the area relies heavily on the development of appropriate databases. This paper addresses four main issues that need to be considered in developing databases of emotional speech: scope, naturalness, context and descriptors. The state of the art is reviewed. A good deal has been done to address the key issues, but there is still a long way to go. The paper shows how the challenge of developing appropriate databases is being addressed in three major recent projects––the Reading–Leeds project, the Belfast project and the CREST–ESP project. From these and other studies the paper draws together the tools and methods that have been developed, addresses the problems that arise and indicates the future directions for the development of emotional speech databases. 2002 Published by Elsevier Science B.V.", "title": "" }, { "docid": "c809ef0984855e377bf241ed8a7aa7eb", "text": "Priapism of the clitoris is a rare entity. A case of painful priapism is reported in a patient who had previously suffered a radical cystectomy for bladder carcinoma pT3-GIII, followed by local recurrence in the pelvis. From a symptomatic point of view she showed a good response to conservative treatment (analgesics and anxiolytics), as she refused surgical treatment. She survived 6 months from the recurrence, and died with lung metastases. The priapism did not recur. The physiopathological mechanisms involved in the process are discussed and the literature reviewed.", "title": "" }, { "docid": "fce58bfa94acf2b26a50f816353e6bf2", "text": "The perspective directions in evaluating network security are simulating possible malefactor’s actions, building the representation of these actions as attack graphs (trees, nets), the subsequent checking of various properties of these graphs, and determining security metrics which can explain possible ways to increase security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor’s actions, construction of attack graphs and computation of different security metrics. The approach is intended for using both at design and exploitation stages of computer networks. &#10;
The implemented software system is described, and the examples of experiments for analysis of network security level are considered.", "title": "" }, { "docid": "d4da4c9bc129a15a8f7b7094216bc4b2", "text": "This paper presents a physical description of two specific aspects in drain-extended MOS transistors, i.e., quasi-saturation and impact-ionization effects. The 2-D device simulator Medici provides the physical insights, and both the unique features are originally attributed to the Kirk effect. The transistor dc model is derived from regional analysis of carrier transport in the intrinsic MOS and the drift region. The substrate-current equations, considering extra impact-ionization factors in the drift region, are also rigorously derived. The proposed model is primarily validated by MATLAB program and exhibits excellent scalability for various transistor dimensions, drift-region doping concentration, and voltage-handling capability.", "title": "" }, { "docid": "39b072a5adb75eb43561017d53ab6f44", "text": "The Internet of Things (IoT) is converting the agriculture industry and solving the immense problems or the major challenges faced by the farmers todays in the field. India is one of the 13th countries in the world having scarcity of water resources. Due to ever increasing of world population, we are facing difficulties in the shortage of water resources, limited availability of land, difficult to manage the costs while meeting the demands of increasing consumption needs of a global population that is expected to grow by 70% by the year 2050. The influence of population growth on agriculture leads to a miserable impact on the farmers livelihood. To overcome the problems we design a low cost system for monitoring the agriculture farm which continuously measure the level of soil moisture of the plants and alert the farmers if the moisture content of particular plants is low via sms or an email. This system uses an esp8266 microcontroller and a moisture sensor using Losant platform. Losant is a simple and most powerful IoT cloud platform for the development of coming generation. It offers the real time data visualization of sensors data which can be operate from any part of the world irrespective of the position of field.", "title": "" }, { "docid": "0efa756a15219d8383ca296860f7433a", "text": "Chronic inflammation plays a multifaceted role in carcinogenesis. Mounting evidence from preclinical and clinical studies suggests that persistent inflammation functions as a driving force in the journey to cancer. The possible mechanisms by which inflammation can contribute to carcinogenesis include induction of genomic instability, alterations in epigenetic events and subsequent inappropriate gene expression, enhanced proliferation of initiated cells, resistance to apoptosis, aggressive tumor neovascularization, invasion through tumor-associated basement membrane and metastasis, etc. Inflammation-induced reactive oxygen and nitrogen species cause damage to important cellular components (e.g., DNA, proteins and lipids), which can directly or indirectly contribute to malignant cell transformation. Overexpression, elevated secretion, or abnormal activation of proinflammatory mediators, such as cytokines, chemokines, cyclooxygenase-2, prostaglandins, inducible nitric oxide synthase, and nitric oxide, and a distinct network of intracellular signaling molecules including upstream kinases and transcription factors facilitate tumor promotion and progression. 
While inflammation promotes development of cancer, components of the tumor microenvironment, such as tumor cells, stromal cells in surrounding tissue and infiltrated inflammatory/immune cells generate an intratumoral inflammatory state by aberrant expression or activation of some proinflammatory molecules. Many of proinflammatory mediators, especially cytokines, chemokines and prostaglandins, turn on the angiogenic switches mainly controlled by vascular endothelial growth factor, thereby inducing inflammatory angiogenesis and tumor cell-stroma communication. This will end up with tumor angiogenesis, metastasis and invasion. Moreover, cellular microRNAs are emerging as a potential link between inflammation and cancer. The present article highlights the role of various proinflammatory mediators in carcinogenesis and their promise as potential targets for chemoprevention of inflammation-associated carcinogenesis.", "title": "" }, { "docid": "a20b874ab019da6a8c8f430cd9bc11b4", "text": "It is traditional wisdom that one should start from the goals when generating a plan in order to focus the plan generation process on potentially relevant actions. The graphplan system, however, which is the most eecient planning system nowadays, builds a \\planning graph\" in a forward-chaining manner. Although this strategy seems to work well, it may possibly lead to problems if the planning task description contains irrelevant information. Although some irrelevant information can be ltered out by graphplan, most cases of irrelevance are not noticed. In this paper, we analyze the eeects arising from \\irrelevant\" information to planning task descriptions for diierent types of planners. Based on that, we propose a family of heuristics that select relevant information by minimizing the number of initial facts that are used when approximating a plan by backchaining from the goals ignoring any connicts. These heuristics, although not solution-preserving, turn out to be very useful for guiding the planning process, as shown by applying the heuristics to a large number of examples from the literature.", "title": "" }, { "docid": "5aacd3ac3c6120311d7daa2de3cef2ba", "text": "Situated in the western Sierra Nevada foothills of California, CA-MRP-402 exhibits 103 rock art panels. By combining archaeological field research and excavation, this paper explores the ancient activities that took place at MRP-402. These efforts reveal that ancient Native Americans intentionally altered the landscape to create an astronomical observation area and generate consistent equinoctial solar and shadow alignments.", "title": "" }, { "docid": "8a1adea9a1f4beeb704691d76b2e4f53", "text": "As we observe a trend towards the recentralisation of the Internet, this paper raises the question of guaranteeing an everlasting decentralisation. We introduce the properties of strong and soft uncentralisability in order to describe systems in which all authorities can be untrusted at any time without affecting the system. We link the soft uncentralisability to another property called perfect forkability. Using that knowledge, we introduce a new cryptographic primitive called uncentralisable ledger and study its properties. We use those properties to analyse what an uncentralisable ledger may offer to classic electronic voting systems and how it opens up the realm of possibilities for completely new voting mechanisms. We review a list of selected projects that implement voting systems using blockchain technology. 
We then conclude that the true revolutionary feature enabled by uncentralisable ledgers is a self-sovereign and distributed identity provider.", "title": "" }, { "docid": "a576a6bf249616d186657a48c2aec071", "text": "Penumbras, or soft shadows, are an important means to enhance the realistic ap pearance of computer generated images. We present a fast method based on Minkowski operators to reduce t he run ime for penumbra calculation with stochastic ray tracing. Detailed run time analysis on some examples shows that the new method is significantly faster than the conventional approach. Moreover, it adapts to the environment so that small penumbras are calculated faster than larger ones. The algorithm needs at most twice as much memory as the underlying ray tracing algorithm.", "title": "" }, { "docid": "6440be547f86da7e08b79eac6b4311fe", "text": "OBJECTIVE\nTo assess the bioequivalence of an ezetimibe/simvastatin (EZE/SIMVA) combination tablet compared to the coadministration of ezetimibe and simvastatin as separate tablets (EZE + SIMVA).\n\n\nMETHODS\nIn this open-label, randomized, 2-part, 2-period crossover study, 96 healthy subjects were randomly assigned to participate in each part of the study (Part I or II), with each part consisting of 2 single-dose treatment periods separated by a 14-day washout. Part I consisted of Treatments A (EZE 10 mg + SIMVA 10 mg) and B (EZE/SIMVA 10/10 mg/mg) and Part II consisted of Treatments C (EZE 10 mg + SIMVA 80 mg) and D (EZE/SIMVA 10/80 mg/mg). Blood samples were collected up to 96 hours post-dose for determination of ezetimibe, total ezetimibe (ezetimibe + ezetimibe glucuronide), simvastatin and simvastatin acid (the most prevalent active metabolite of simvastatin) concentrations. Ezetimibe and simvastatin acid AUC(0-last) were predefined as primary endpoints and ezetimibe and simvastatin acid Cmax were secondary endpoints. Bioequivalence was achieved if 90% confidence intervals (CI) for the geometric mean ratios (GMR) (single tablet/coadministration) of AUC(0-last) and Cmax fell within prespecified bounds of (0.80, 1.25).\n\n\nRESULTS\nThe GMRs of the AUC(0-last) and Cmax for ezetimibe and simvastatin acid fell within the bioequivalence limits (0.80, 1.25). EZE/ SIMVA and EZE + SIMVA were generally well tolerated.\n\n\nCONCLUSIONS\nThe lowest and highest dosage strengths of EZE/SIMVA tablet were bioequivalent to the individual drug components administered together. Given the exact weight multiples of the EZE/SIMVA tablet and linear pharmacokinetics of simvastatin across the marketed dose range, bioequivalence of the intermediate tablet strengths (EZE/SIMVA 10/20 mg/mg and EZE/SIMVA 10/40 mg/mg) was inferred, although these dosages were not tested directly. These results indicate that the safety and efficacy profile of EZE + SIMVA coadministration therapy can be applied to treatment with the EZE/SIMVA tablet across the clinical dose range.", "title": "" }, { "docid": "9d2b3aaf57e31a2c0aa517d642f39506", "text": "3.1. URINARY TRACT INFECTION Urinary tract infection is one of the important causes of morbidity and mortality in Indian population, affecting all age groups across the life span. Anatomically, urinary tract is divided into an upper portion composed of kidneys, renal pelvis, and ureters and a lower portion made up of urinary bladder and urethra. UTI is an inflammatory response of the urothelium to bacterial invasion that is usually associated with bacteriuria and pyuria. 
UTI may involve only the lower urinary tract or both the upper and lower tract [19].", "title": "" }, { "docid": "1926166029995392a9ccb3c64bc10ee7", "text": "OBJECTIVES\nFew low income countries have emergency medical services to provide prehospital medical care and transport to road traffic crash casualties. In Ghana most roadway casualties receive care and transport to the hospital from taxi, bus, or truck drivers. This study reports the methods used to devise a model for prehospital trauma training for commercial drivers in Ghana.\n\n\nMETHODS\nOver 300 commercial drivers attended a first aid and rescue course designed specifically for roadway trauma and geared to a low education level. The training programme has been evaluated twice at one and two year intervals by interviewing both trained and untrained drivers with regard to their experiences with injured persons. In conjunction with a review of prehospital care literature, lessons learnt from the evaluations were used in the revision of the training model.\n\n\nRESULTS\nControl of external haemorrhage was quickly learnt and used appropriately by the drivers. Areas identified needing emphasis in future trainings included consistent use of universal precautions and protection of airways in unconscious persons using the recovery position.\n\n\nCONCLUSION\nIn low income countries, prehospital trauma care for roadway casualties can be improved by training laypersons already involved in prehospital transport and care. Training should be locally devised, evidence based, educationally appropriate, and focus on practical demonstrations.", "title": "" }, { "docid": "3969a0156c558020ca1de3b978c3ab4e", "text": "Silver-Russell syndrome (SRS) and Beckwith-Wiedemann syndrome (BWS) are 2 clinically opposite growth-affecting disorders belonging to the group of congenital imprinting disorders. The expression of both syndromes usually depends on the parental origin of the chromosome in which the imprinted genes reside. SRS is characterized by severe intrauterine and postnatal growth retardation with various additional clinical features such as hemihypertrophy, relative macrocephaly, fifth finger clinodactyly, and triangular facies. BWS is an overgrowth syndrome with many additional clinical features such as macroglossia, organomegaly, and an increased risk of childhood tumors. Both SRS and BWS are clinically and genetically heterogeneous, and for clinical diagnosis, different diagnostic scoring systems have been developed. Six diagnostic scoring systems for SRS and 4 for BWS have been previously published. However, neither syndrome has common consensus diagnostic criteria yet. Most cases of SRS and BWS are associated with opposite epigenetic or genetic abnormalities in the 11p15 chromosomal region leading to opposite imbalances in the expression of imprinted genes. SRS is also caused by maternal uniparental disomy 7, which is usually identified in 5-10% of the cases, and is therefore the first imprinting disorder that affects 2 different chromosomes. In this review, we describe in detail the clinical diagnostic criteria and scoring systems as well as molecular causes in both SRS and BWS.", "title": "" } ]
scidocsrr
1d789f197e86684157d68543178be045
Hotel reviews sentiment analysis based on word vector clustering
[ { "docid": "1434ac827bebb684682d527b92721354", "text": "Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times. *A preliminary version of this paper appeared in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS'O3), Toulouse, France, 2003, Vol. 111, 2057-2059. +NASA Goddard Space Flight Center, Architecture and Automation Branch, Greenbelt, MD 20771 and Department of Computer Science, University of Maryland, College Park, Maryland, 20742. Email: [email protected]. $ ~ e ~ a r t m e n t of Computer Science, University df Maryland, College Park, Maryland, 20742. The work of this author was supported by the Science Foundation under grant CCR-0098151. Email: [email protected]. l ~ e ~ a r t m e n t of Computer Science, Bar-Ilan University, Ramat-Gan 52900, Israel, and Center for Automation Research, University of Maryland, College Park, Maryland, 20742. Email: nathanOcs.biu.ac.il. NASA Goddard Space Flight Center, previously Applied Information Sciences Branch, currently Advanced Architectures and Automation Branch, Greenbelt, MD 20771. Email: [email protected]. https://ntrs.nasa.gov/search.jsp?R=20070038185 2017-12-21T21:41:30+00:00Z", "title": "" }, { "docid": "38aa324964214620c55eb4edfecf1bd2", "text": "This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.", "title": "" }, { "docid": "6651777a7843a59ef2365dfc811d7cde", "text": "As the widespread use of computers and the high-speed development of the Internet, E-Commerce has already penetrated as a part of our daily life. For a popular product, there are a large number of reviews. This makes it difficult for a potential customer to make an informed decision on purchasing the product, as well as for the manufacturer of the product to keep track and to manage customer opinions. 
In this paper, we pay attention to online hotel reviews, and propose a supervised machine learning approach using unigram features with two types of information (frequency and TF-IDF) to realize polarity classification of documents. As shown in our experimental results, the information of TF-IDF is more effective than frequency.", "title": "" } ]
[ { "docid": "e3e75689d9425ea04db2de83bbfc9102", "text": "Recently, with the advent of location-based social networking services (LBSNs), travel planning and location-aware information recommendation based on LBSNs have attracted much research attention. In this paper, we study the impact of social relations hidden in LBSNs, i.e., The social influence of friends. We propose a new social influence-based user recommender framework (SIR) to discover the potential value from reliable users (i.e., Close friends and travel experts). Explicitly, our SIR framework is able to infer influential users from an LBSN. We claim to capture the interactions among virtual communities, physical mobility activities and time effects to infer the social influence between user pairs. Furthermore, we intend to model the propagation of influence using diffusion-based mechanism. Moreover, we have designed a dynamic fusion framework to integrate the features mined into a united follow probability score. Finally, our SIR framework provides personalized top-k user recommendations for individuals. To evaluate the recommendation results, we have conducted extensive experiments on real datasets (i.e., The Go Walla dataset). The experimental results show that the performance of our SIR framework is better than the state-of the-art user recommendation mechanisms in terms of accuracy and reliability.", "title": "" }, { "docid": "15884b99bf0f288377bd1fe01423bdfd", "text": "This is an innovative work for the field of web usage mining. The main feature of our work a complete framework and findings in mining Web usage patterns from Web log files of a real Web site that has all the difficult aspects of real-life Web usage mining, including developing user profiles and external data describing an ontology of the Web content. We are presenting a method for discovering and tracking evolving user profiles. Profiles are also enriched with other domain-specific information facets that give a panoramic view of the discovered mass usage modes. An objective validation plan is also used to assess the quality of the mined profiles, in particular their adaptability in the face of evolving user behaviour. Keywords— Web mining, Cookies, Session.", "title": "" }, { "docid": "3f807cb7e753ebd70558a0ce74b416b7", "text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "e599fa394befb387f9148a840bfbe308", "text": "Social media is becoming a major and popular technological platform that allows users to express personal opinions toward the subjects with shared interests, opinion are good for decision making to People would want to know others' opinion before taking a decision, while corporate would like to monitor pulse of people in a social media about their products and services and take appropriate actions. This paper reviewed about world are realizing that e-commerce is not just buying and selling over Internet, rather it is improve the efficiency to compete with other giants in the market. 
Their opinions on specific topic are inevitably dependent on many social effects such as user preference on topics, peer influence, user profile information.", "title": "" }, { "docid": "09c5da2fbf8a160ba27221ff0c5417ac", "text": " The burst fracture of the spine was first described by Holdsworth in 1963 and redefined by Denis in 1983 as being a fracture of the anterior and middle columns of the spine with or without an associated posterior column fracture. This injury has received much attention in the literature as regards its radiological diagnosis and also its clinical managment. The purpose of this article is to review the way that imaging has been used both to diagnose the injury and to guide management. Current concepts of the stability of this fracture are presented and our experience in the use of magnetic resonance imaging in deciding treatment options is discussed.", "title": "" }, { "docid": "cebdedb344f2ba7efb95c2933470e738", "text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks", "title": "" }, { "docid": "177f95dc300186f519bd3ac48081a6e0", "text": "TAI's multi-sensor fusion technology is accelerating the development of accurate MEMS sensor-based inertial navigation in situations where GPS does not operate reliably (GPS-denied environments). TAI has demonstrated that one inertial device per axis is not sufficient to produce low drift errors for long term accuracy needed for GPS-denied applications. TAI's technology uses arrays of off-the-shelf MEMS inertial sensors to create an inertial measurement unit (IMU) suitable for inertial navigation systems (INS) that require only occasional GPS updates. Compared to fiber optics gyros, properly combined MEMS gyro arrays are lower cost, fit into smaller volume, use less power and have equal or better performance. The patents TAI holds address this development for both gyro and accelerometer arrays. Existing inertial measurement units based on such array combinations, the backbone of TAI's inertial navigation system (INS) design, have demonstrated approximately 100 times lower sensor drift error to support very accurate angular rates, very accurate position measurements, and very low angle error for long durations. TAI's newest, fourth generation, product occupies small volume, has low weight, and consumes little power. The complete assembly can be potted in a protective sheath to form a rugged standalone product. An external exoskeleton case protects the electronic assembly for munitions and UAV applications. TAI's IMU/INS will provide the user with accurate real-time navigation information in difficult situations where GPS is not reliable. The key to such accurate performance is to achieve low sensor drift errors. The INS responds to quick movements without introducing delays while sharply reducing sensor drift errors that result in significant navigation errors. 
Discussed in the paper are physical characteristics of the IMU, an overview of the system design, TAI's systematic approach to drift reduction and some early results of applying a sigma point Kalman filter to sustain low gyro drift.", "title": "" }, { "docid": "1d6e23fedc5fa51b5125b984e4741529", "text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.", "title": "" }, { "docid": "2c5cab6e37ad905e0e3576259c4357ff", "text": "Classification and regression as data mining techniques for predicting the diseases outbreak has been permitted in the health institutions which have relative opportunities for conducting the treatment of diseases. But there is a need to develop a strong model for predicting disease outbreak in datasets based in various countries by filling the existing data mining technique gaps where the majority of models are relying on single data mining techniques which their accuracies in prediction are not maximized for achieving expected results and also prediction are still few. This paper presents a survey and analysis for existing techniques on both classification and regression models techniques that have been applied for diseases outbreak prediction in datasets.", "title": "" }, { "docid": "c956c6d99053b44557cfed93f12dc1bc", "text": "We present a device demonstrating a lithographically patterned transmon integrated with a micromachined cavity resonator. Our two-cavity, one-qubit device is a multilayer microwave-integrated quantum circuit (MMIQC), comprising a basic unit capable of performing circuit-QED operations. We describe the qubit-cavity coupling mechanism of a specialized geometry using an electric-field picture and a circuit model, and obtain specific system parameters using simulations. Fabrication of the MMIQC includes lithography, etching, and metallic bonding of silicon wafers. Superconducting wafer bonding is a critical capability that is demonstrated by a micromachined storage-cavity lifetime of 34.3 μs, corresponding to a quality factor of 2 × 10 at single-photon energies. The transmon coherence times are T1 = 6.4 μs, and T2echo = 11.7 μs. 
We measure qubit-cavity dispersive coupling with a rate χqμ/2π = −1.17 MHz, constituting a Jaynes-Cummings system with an interaction strength g/2π = 49 MHz. With these parameters we are able to demonstrate circuit-QED operations in the strong dispersive regime with ease. Finally, we highlight several improvements and anticipated extensions of the technology to complex MMIQCs.", "title": "" }, { "docid": "ff4c2f1467a141894dbe76491bc06d3b", "text": "Railways is the major means of transport in most of the countries. Rails are the backbone of the track structure and should be protected from defects. Surface defects are irregularities in the rails caused due to the shear stresses between the rails and wheels of the trains. This type of defect should be detected to avoid rail fractures. The objective of this paper is to propose an innovative technique to detect the surface defect on rail heads. In order to identify the defects, it is essential to extract the rails from the background and further enhance the image for thresholding. The proposed method uses Binary Image Based Rail Extraction (BIBRE) algorithm to extract the rails from the background. The extracted rails are enhanced to achieve uniform background with the help of direct enhancement method. The direct enhancement method enhances the image by enhancing the brightness difference between objects and their backgrounds. The enhanced rail image uses Gabor filters to identify the defects from the rails. The Gabor filters maximize the energy difference between defect and defect-less surface. Thresholding is done based on the energy of the defects. From the thresholded image the defects are identified and a message box is generated when there is a presence of defects.", "title": "" }, { "docid": "026f146c87f4b2f4a63789b8c08a482a", "text": "This study aims to develop a comprehensive review on the issue of poor school performance for professionals in both health and education areas. It discusses current aspects of education, learning and the main conditions involved in underachievement. It also presents updated data on key aspects of neurobiology, epidemiology, etiology, clinical presentation, comorbidities and diagnosis, early intervention and treatment of the major pathologies comprised. It is a comprehensive, non-systematic literature review on learning, school performance, learning disorders (dyslexia, dyscalculia and dysgraphia), attention deficit / hyperactivity disorder (ADHD) and developmental coordination disorder (DCD). Poor school performance is a frequent problem faced by our children, causing serious emotional, social and economic issues. An updated view of the subject facilitates clinical reasoning, accurate diagnosis and appropriate treatment.", "title": "" }, { "docid": "38d1e06642f12138f8b0a90deeb96979", "text": "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. 
Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities.", "title": "" }, { "docid": "b720df1467aade5dd1ba82602ba14591", "text": "Modern medical devices and equipment have become very complex and sophisticated and are expected to operate under stringent environments. Hospitals must ensure that their critical medical devices are safe, accurate, reliable and operating at the required level of performance. Even though the importance, the application of all inspection, maintenance and optimization models to medical devices is fairly new. In Canada, most, if not all healthcare organizations include all their medical equipment in their maintenance program and just follow manufacturers’ recommendations for preventative maintenance. Then, current maintenance strategies employed in hospitals and healthcare organizations have difficulty in identifying specific risks and applying optimal risk reduction activities. This paper addresses these gaps found in literature for medical equipment inspection and maintenance and reviews various important aspects including current policies applied in hospitals. Finally we suggest future research which will be the starting point to develop tools and policies for better medical devices management in the future.", "title": "" }, { "docid": "b8334d21af0d511b13dcaf27b6916dc5", "text": "Almost all of today’s knowledge is stored in databases and thus can only be accessed with the help of domain specific query languages, strongly limiting the number of people which can access the data. In this work, we demonstrate an end-to-end trainable question answering (QA) system that allows a user to query an external NoSQL database by using natural language. A major challenge of such a system is the non-differentiability of database operations which we overcome by applying policy-based reinforcement learning. We evaluate our approach on Facebook’s bAbI Movie Dialog dataset and achieve a competitive score of 84.2% compared to several benchmark models. We conclude that our approach excels with regard to real-world scenarios where knowledge resides in external databases and intermediate labels are too costly to gather for non-end-to-end trainable QA systems.", "title": "" }, { "docid": "99f93328d19ac240378c5cfe08cf9f9e", "text": "Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 \"latent\" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in details the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels, the selection of features to the training of models. 
Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of Yahoo mail service, who elected to be part of such research studies. Our system achieved precision and recall rates close to 90% and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry, and could support the invention of new large-scale email discovery paradigms that had not been possible before.", "title": "" }, { "docid": "2be043b09e6dd631b5fe6f9eed44e2ec", "text": "This article aims to contribute to a critical research agenda for investigating the democratic implications of citizen journalism and social news. The article calls for a broad conception of 'citizen journalism' which is (1) not an exclusively online phenomenon, (2) not confined to explicitly 'alternative' news sources, and (3) includes 'metajournalism' as well as the practices of journalism itself. A case is made for seeing democratic implications not simply in the horizontal or 'peer-to-peer' public sphere of citizen journalism networks, but also in the possibility of a more 'reflexive' culture of news consumption through citizen participation. The article calls for a research agenda that investigates new forms of gatekeeping and agenda-setting power within social news and citizen journalism networks and, drawing on the example of three sites, highlights the importance of both formal and informal status differentials and of the software 'code' structuring these new modes of news", "title": "" }, { "docid": "6a763e49cdfd41b28922eb536d9404ed", "text": "With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.", "title": "" }, { "docid": "3f9f01e3b3f5ab541cbe78fb210cf744", "text": "The reliable and effective localization system is the basis of Automatic Guided Vehicle (AGV) to complete given tasks automatically in warehouse environment. However, there are no obvious features that can be used for localization of AGV to be extracted in warehouse environment and it does make it difficult to realize the localization of AGV. So in this paper, we concentrate on the problem of optimal landmarks placement in a warehouse so as to improve the reliability of localization. 
Firstly, we take the practical warehouse environment into consideration and transform the problem of landmarks placement into an optimization problem which aims at maximizing the difference degree between each basic unit of localization. Then Genetic Algorithm (GA) is used to solve the optimization problem. Then we match the observed landmarks with the already known ones stored in the map and the Triangulation method is used to estimate the position of AGV after the matching has been done. Finally, experiments in a real warehouse environment validate the effectiveness and reliability of our method.", "title": "" }, { "docid": "8aeead40ab3112b0ef69c77c73885d46", "text": "We provide a new understanding of the fundamental nature of adversarially robust classifiers and how they differ from standard models. In particular, we show that there provably exists a trade-off between the standard accuracy of a model and its robustness to adversarial perturbations. We demonstrate an intriguing phenomenon at the root of this tension: a certain dichotomy between “robust” and “non-robust” features. We show that while robustness comes at a price, it also has some surprising benefits. Robust models turn out to have interpretable gradients and feature representations that align unusually well with salient data characteristics. In fact, they yield striking feature interpolations that have thus far been possible to obtain only using generative models such as GANs.", "title": "" } ]
scidocsrr
147a8f2b62ceea97cf02c011f6d8446f
Scaled Current Tracking Control for Doubly Fed Induction Generator to Ride-Through Serious Grid Faults
[ { "docid": "8066246656f6a9a3060e42efae3b197f", "text": "The paper describes the engineering and design of a doubly fed induction generator (DFIG), using back-to-back PWM voltage-source converters in the rotor circuit. A vector-control scheme for the supply-side PWM converter results in independent control of active and reactive power drawn from the supply, while ensuring sinusoidal supply currents. Vector control of the rotor-connected converter provides for wide speed-range operation; the vector scheme is embedded in control loops which enable optimal speed tracking for maximum energy capture from the wind. An experimental rig, which represents a 1.5 kW variable speed wind-energy generation system is described, and experimental results are given that illustrate the excellent performance characteristics of the system. The paper considers a grid-connected system; a further paper will describe a stand-alone system.", "title": "" } ]
[ { "docid": "f613a2ed6f64c469cf1180d1e8fe9e4a", "text": "We describe an estimation technique which, given a measurement of the depth of a target from a wide-fieldof-view (WFOV) stereo camera pair, produces a minimax risk fixed-size confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain of optimal fixed-size confidenceinterval decision theory. The approach is evaluated in terms of theoretical capture probability and empirical cap ture frequency during actual experiments with a target on an optical bench. The method is compared to several other procedures including the Kalman Filter. The minimax approach is found to dominate all the other methods in performance. In particular, for the minimax approach, a very close agreement is achieved between theoreticalcapture probability andempiricalcapture frequency. This allows performance to be accurately predicted, greatly facilitating the system design, and delineating the tasks that may be performed with a given system.", "title": "" }, { "docid": "4816d3c4ca52f2ba592b29636b4a3c35", "text": "In this paper, we describe a system that applies maximum entropy (ME) models to the task of named entity recognition (NER). Starting with an annotated corpus and a set of features which are easily obtainable for almost any language, we first build a baseline NE recognizer which is then used to extract the named entities and their context information from additional nonannotated data. In turn, these lists are incorporated into the final recognizer to further improve the recognition accuracy.", "title": "" }, { "docid": "1bf735fc91f375bd3c1d5a437aabf6eb", "text": "In any collaborative system, there are both symmetries and asymmetries present in the design of the technology and in the ways that technology is appropriated. Yet media space research tends to focus more on supporting and fostering the symmetries than the asymmetries. Throughout more than 20 years of media space research, the pursuit of increased symmetry, whether achieved through technical or social means, has been a recurrent theme. The research literature on the use of contemporary awareness systems, in contrast, displays little if any of this emphasis on symmetrical use; indeed, this body of research occasionally highlights the perceived value of asymmetry. In this paper, we unpack the different forms of asymmetry present in both media spaces and contemporary awareness systems. We argue that just as asymmetry has been demonstrated to have value in contemporary awareness systems, so might asymmetry have value in media spaces and in other CSCW systems, more generally. To illustrate, we present a media space that emphasizes and embodies multiple forms of asymmetry and does so in response to the needs of a particular work context.", "title": "" }, { "docid": "c7f0a749e38b3b7eba871fca80df9464", "text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. 
This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.", "title": "" }, { "docid": "46c4b4a68e0be453148779529f235e98", "text": "Received Feb 14, 2017 Revised Apr 14, 2017 Accepted Apr 28, 2017 This paper proposes maximum boost control for 7-level z-source cascaded h-bridge inverter and their affiliation between voltage boost gain and modulation index. Z-source network avoids the usage of external dc-dc boost converter and improves output voltage with minimised harmonic content. Z-source network utilises distinctive LC impedance combination with 7-level cascaded inverter and it conquers the conventional voltage source inverter. The maximum boost controller furnishes voltage boost and maintain constant voltage stress across power switches, which provides better output voltage with variation of duty cycles. Single phase 7-level z-source cascaded inverter simulated using matlab/simulink. Keyword:", "title": "" }, { "docid": "6b4a30948ed87cfc9f3a19a984d94994", "text": "In Ethernet-based time-triggered networks, like TTEthernet, a global communication scheme, for which the schedule synthesis is known to be an NP-complete problem, establishes contention-free windows for the exchange of messages with guaranteed low latency and minimal jitter. However, in order to achieve end-to-end determinism at the application level, software tasks running on the end-system nodes need to obey a similar execution scheme with tight dependencies towards the network domain. In this paper we address the simultaneous co-synthesis of network as well as application schedules for preemptive time-triggered tasks communicating in a switched multi-speed time-triggered network. We use Satisfiability Modulo Theories (SMT) to formulate the scheduling constraints and solve the resulting problem using a state-of-the-art SMT solver. Furthermore, we introduce a novel incremental scheduling approach, based on the demand bound test for asynchronous constrained-deadline periodic tasks, which significantly improves scalability for the average case without sacrificing schedulability. We demonstrate the performance of our approach using synthetic network topologies and system configurations.", "title": "" }, { "docid": "5f811c5f95c60c6edc48b1fedab07a2a", "text": "This paper discusses dexterous, within-hand manipulation with differential-type underactuated hands. We discuss the fact that not only can this class of hands, which to date have been considered almost exclusively for adaptive grasping, be utilized for precision manipulation, but also that the reduction of the number of actuators and constraints can make within-hand manipulation easier to implement and control. Next, we introduce an analytical framework for evaluating the dexterous workspace of objects held within the fingertips in a precision grasp. A set of design principles for underactuated fingers are developed that enable fingertip grasping and manipulation. Finally, we apply this framework to analyze the workspace of stable object configurations for an object held within a pinch grasp of a two-fingered underactuated planar hand, demonstrating a large and useful workspace despite only one actuator per finger. 
The in-hand manipulation workspace for the iRobot–Harvard–Yale Hand is experimentally measured and presented.", "title": "" }, { "docid": "5595102130b4c03c7f65f31207951f79", "text": "Being a leading location-based social network (LBSN), Foursquare’s Swarm app allows users to conduct checkins at a specified location and share their real-time locations with friends. This app records a massive set of spatio-temporal information of users around the world. In this paper, we track the evolution of user density of the Swarm app in New York City (NYC) for one entire week. We study the temporal patterns of different venue categories, and investigate how the function of venue categories affects the temporal behavior of visitors. Moreover, by applying time-series analysis, we validate that the temporal patterns can be effectively decomposed into regular parts which represent the regular human behavior and stochastic parts which represent the randomness of human behavior. Finally, we build a model to predict the evolution of the user density, and our results demonstrate an accurate prediction.", "title": "" }, { "docid": "ffbe1b8861515e0801da9cb514e490b7", "text": "A mathematical study is performed to assess how the arterial pressure-volume (P-V) relationship, blood pressure pulse amplitude and shape affect the results of non-invasive oscillometric finger mean blood pressure estimation by the maximum oscillation criterion (MOC). The exponential models for a relaxed finger artery and for a partly contracted artery are studied. A new modification of the error equation is suggested. This equation and the results of simulation demonstrate that the value of pressure estimated by the MOC does not exactly agree with the value of the true mean blood pressure (the latter being defined as pressure corresponding to maximum arterial compliance). The error depends on the arterial pressure pulse amplitude, as well as on the difference between the arterial pressure pulse shape index and the arterial P-V curve shape index. In the case of contracted finger arteries, the MOC can give an overestimation of up to 19 mmHg, the pressure pulse shape index being 0.21 and the pulse amplitude 60 mmHg. In the case of relaxed arteries, the error is less evident.", "title": "" }, { "docid": "1796b8d91de88303571cc6f3f66b580b", "text": "In this paper it is shown that bifilar of a Quadrifilar Helix Antenna (QHA) when designed in side-fed configuration at a given diameter and length of helical arm, effectively becomes equivalent to combination of a loop and a dipole antenna. The vertical and horizontal electric fields caused by these equivalent antennas can be made to vary by changing the turn angle of the bifilar. It is shown how the variation in horizontal and vertical electric field dominance is seen until perfect circular polarization is achieved when two fields are equal at a certain turn angle where area of the loop equals product of pitch of helix and radian length i.e. equivalent dipole length. The antenna is low profile and does not require ground plane and thus can be used in high speed aerodynamic and platform bodies made of composite material where metallic ground is unavailable. 
Additionally not requiring ground plane increases the isolation between the antennas with stable radiation pattern and hence can be used in MIMO systems.", "title": "" }, { "docid": "e34a61754ff8cfac053af5cbedadd9e0", "text": "An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods.", "title": "" }, { "docid": "1573020547c887b8f54948e99b87ca53", "text": "Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in just 800 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.", "title": "" }, { "docid": "38d791ebe063bd58a04afd21e6d8f25a", "text": "The design of a Web search evaluation metric is closely related with how the user's interaction process is modeled. Each behavioral model results in a different metric used to evaluate search performance. In these models and the user behavior assumptions behind them, when a user ends a search session is one of the prime concerns because it is highly related to both benefit and cost estimation. Existing metric design usually adopts some simplified criteria to decide the stopping time point: (1) upper limit for benefit (e.g. RR, AP); (2) upper limit for cost (e.g. Precision@N, DCG@N). However, in many practical search sessions (e.g. exploratory search), the stopping criterion is more complex than the simplified case. Analyzing benefit and cost of actual users' search sessions, we find that the stopping criteria vary with search tasks and are usually combination effects of both benefit and cost factors. Inspired by a popular computer game named Bejeweled, we propose a Bejeweled Player Model (BPM) to simulate users' search interaction processes and evaluate their search performances. In the BPM, a user stops when he/she either has found sufficient useful information or has no more patience to continue. Given this assumption, a new evaluation framework based on upper limits (either fixed or changeable as search proceeds) for both benefit and cost is proposed. We show how to derive a new metric from the framework and demonstrate that it can be adopted to revise traditional metrics like Discounted Cumulative Gain (DCG), Expected Reciprocal Rank (ERR) and Average Precision (AP). 
To show effectiveness of the proposed framework, we compare it with a number of existing metrics in terms of correlation between user satisfaction and the metrics based on a dataset that collects users' explicit satisfaction feedbacks and assessors' relevance judgements. Experiment results show that the framework is better correlated with user satisfaction feedbacks.", "title": "" }, { "docid": "b4cadd9179150203638ff9b045a4145d", "text": "Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by solvent casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application.", "title": "" }, { "docid": "e75df6ff31c9840712cf1a4d7f6582cd", "text": "Endotoxin, a constituent of Gram-negative bacteria, stimulates macrophages to release large quantities of tumor necrosis factor (TNF) and interleukin-1 (IL-1), which can precipitate tissue injury and lethal shock (endotoxemia). Antagonists of TNF and IL-1 have shown limited efficacy in clinical trials, possibly because these cytokines are early mediators in pathogenesis. Here a potential late mediator of lethality is identified and characterized in a mouse model. High mobility group-1 (HMG-1) protein was found to be released by cultured macrophages more than 8 hours after stimulation with endotoxin, TNF, or IL-1. Mice showed increased serum levels of HMG-1 from 8 to 32 hours after endotoxin exposure. Delayed administration of antibodies to HMG-1 attenuated endotoxin lethality in mice, and administration of HMG-1 itself was lethal. Septic patients who succumbed to infection had increased serum HMG-1 levels, suggesting that this protein warrants investigation as a therapeutic target.", "title": "" }, { "docid": "835f004b55534f051a5dc98dc8852e12", "text": "The focus of this paper is on presentation attack detection for the iris biometrics, which measures the pattern within the colored concentric circle of the subjects' eyes, to authenticate an individual to a generic user verification system. Unlike previous deep learning methods that use single convolutional neural network architectures, this paper develops a framework built upon triplet convolutional networks that takes as input two real iris patches and a fake patch or two fake patches and a genuine patch. The aim is to increase the number of training samples and to generate a representation that separates the real from the fake iris patches. The smaller architecture provides a way to do early stopping based on the liveness of single patches rather than the whole image. The matching is performed by computing the distance with respect to a reference set of real and fake examples. 
The proposed approach allows for real-time processing using a smaller network and provides equal or better than state-of-the-art performance on three benchmark datasets of photo-based and contact lens presentation attacks.", "title": "" }, { "docid": "cfaf2c04cd06103489ac60d00a70cd2c", "text": "BACKGROUND\nΔ(9)-Tetrahydrocannabinol (THC), 11-nor-9-carboxy-THC (THCCOOH), and cannabinol (CBN) were measured in breath following controlled cannabis smoking to characterize the time course and windows of detection of breath cannabinoids.\n\n\nMETHODS\nExhaled breath was collected from chronic (≥4 times per week) and occasional (<twice per week) smokers before and after smoking a 6.8% THC cigarette. Sample analysis included methanol extraction from breath pads, solid-phase extraction, and liquid chromatography-tandem mass spectrometry quantification.\n\n\nRESULTS\nTHC was the major cannabinoid in breath; no sample contained THCCOOH and only 1 contained CBN. Among chronic smokers (n = 13), all breath samples were positive for THC at 0.89 h, 76.9% at 1.38 h, and 53.8% at 2.38 h, and only 1 sample was positive at 4.2 h after smoking. Among occasional smokers (n = 11), 90.9% of breath samples were THC-positive at 0.95 h and 63.6% at 1.49 h. One occasional smoker had no detectable THC. Analyte recovery from breath pads by methanolic extraction was 84.2%-97.4%. Limits of quantification were 50 pg/pad for THC and CBN and 100 pg/pad for THCCOOH. Solid-phase extraction efficiency was 46.6%-52.1% (THC) and 76.3%-83.8% (THCCOOH, CBN). Matrix effects were -34.6% to 12.3%. Cannabinoids fortified onto breath pads were stable (≤18.2% concentration change) for 8 h at room temperature and -20°C storage for 6 months.\n\n\nCONCLUSIONS\nBreath may offer an alternative matrix for identifying recent driving under the influence of cannabis, but currently sensitivity is limited to a short detection window (0.5-2 h).", "title": "" }, { "docid": "cded40190ef8cc022adeb97c2e77ce36", "text": "Question classification is very important for question answering. This paper present our research work on question classification through machine learning approach. In order to train the learning model, we designed a rich set of features that are predictive of question categories. An important component of question answering systems is question classification. The task of question classification is to predict the entity type of the answer of a natural language question. Question classification is typically done using machine learning techniques. Different lexical, syntactical and semantic features can be extracted from a question. In this work we combined lexical, syntactic and semantic features which improve the accuracy of classification. Furthermore, we adopted three different classifiers: Nearest Neighbors (NN), Naïve Bayes (NB), and Support Vector Machines (SVM) using two kinds of features: bag-of-words and bag-of n grams. Furthermore, we discovered that when we take SVM classifier and combine the semantic, syntactic, lexical feature we found that it will improve the accuracy of classification. We tested our proposed approaches on the well-known UIUC dataset and succeeded to achieve a new record on the accuracy of classification on this dataset.", "title": "" }, { "docid": "a45be66a54403701a8271c3063dd24d8", "text": "This paper highlights the role of humans in the next generation of driver assistance and intelligent vehicles. 
Understanding, modeling, and predicting human agents are discussed in three domains where humans and highly automated or self-driving vehicles interact: 1) inside the vehicle cabin, 2) around the vehicle, and 3) inside surrounding vehicles. Efforts within each domain, integrative frameworks across domains, and scientific tools required for future developments are discussed to provide a human-centered perspective on research in intelligent vehicles.", "title": "" } ]
scidocsrr
657325690b0c7222e3fd594d52d6521c
Lessons and Insights from Creating a Synthetic Optical Flow Benchmark
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" } ]
[ { "docid": "62c71a412a8b715e2fda64cd8b6a2a66", "text": "We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster—a subset of vertices whose internal connections are significantly richer than its external connections—near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time of our algorithm also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and webgraphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly linear time algorithm for solving linear systems in symmetric, diagonally dominant matrices. The linear system solver also leads to a nearly linear time algorithm for approximating the secondsmallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.", "title": "" }, { "docid": "ad58798807256cff2eff9d3befaf290a", "text": "Centrality indices are an essential concept in network analysis. For those based on shortest-path distances the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices. ∗Research supported in part by DFG under grant Br 2158/2-3", "title": "" }, { "docid": "ec0bc85d241f71f5511b54f107987e5a", "text": "We present a new deep learning approach to pose-guided resynthesis of human photographs. At the heart of the new approach is the estimation of the complete body surface texture based on a single photograph. Since the input photograph always observes only a part of the surface, we suggest a new inpainting method that completes the texture of the human body. Rather than working directly with colors of texture elements, the inpainting network estimates an appropriate source location in the input image for each element of the body surface. This correspondence field between the input image and the texture is then further warped into the target image coordinate frame based on the desired pose, effectively establishing the correspondence between the source and the target view even when the pose change is drastic. The final convolutional network then uses the established correspondence and all other available information to synthesize the output image using a fully-convolutional architecture with deformable convolutions. We show stateof-the-art result for pose-guided image synthesis. 
Additionally, we demonstrate the performance of our system for garment transfer and pose-guided face resynthesis.", "title": "" }, { "docid": "dcd21065898c9dd108617a3db8dad6a1", "text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these systems. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by providing warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature-based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. The ego vehicle's heading, speed and acceleration are captured from the IMU and fed into the DBN. The network parameters were learned from data via the expectation maximization (EM) algorithm. The DBN is designed to provide two types of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.", "title": "" }, { "docid": "a19d9517866e3f482a35dd0fb26d4405", "text": "Recent rapid advances in ICTs, specifically in Internet and mobile technologies, have highlighted the rising importance of the Business Model (BM) in Information Systems (IS). Despite agreement on its importance to an organization's success, the concept is still fuzzy and vague, and there is no consensus regarding its definition. Furthermore, understanding the BM domain by identifying its meaning, fundamental pillars, and its relevance to other business concepts is by no means complete. In this paper we aim to provide further clarification by first presenting a classification of definitions found in the IS literature; second, proposing guidelines on which to develop a more comprehensive definition in order to reach consensus; and third, identifying the four main business model concepts and values and their interaction, and thus place the business model within the world of digital business. Based on this discussion, we propose a new definition for the business model that we argue is more appropriate to this new world.", "title": "" }, { "docid": "b9d6744630ed392e5807a56cb2dfaeab", "text": "In recent years, the cost of delivering health care in developed and developing countries has been rising exponentially. Governments around the world are searching for alternative mechanisms to reduce costs while increasing the capacity of social programmes with significant investments in infrastructure. A number of jurisdictions have begun to utilise public-private partnerships (PPPs) as a means of achieving these objectives. The use of PPPs in the Canadian health system is a relatively new phenomenon. Generally, the success of PPP projects is evaluated on the basis of the qualitative outcomes of the project, most commonly in a value-for-money analysis. 
In this article, we explore whether quantitative elements are sufficient to measure PPPs in politically sensitive areas of public policy, such as health care. We propose that the best way to evaluate the outcomes of PPPs in public health system projects requires both quantitative and qualitative criteria. We use a framework developed from neo-institutional economics that contextualises outcomes through a balance of quantitative and qualitative assessment criteria. We apply this evaluation framework to a specific Canadian case study in order to determine key success factors for future PPP health infrastructure projects. The analysis concludes that, given the complex and politically sensitive nature of health care, particular attention must be paid to communications and public relations and to design and post-construction planning in order to deliver a successful PPP. 2 PPP relationships differ in a fundamental way from conventional procurement contracting. In conventional procurement, risks are assumed to be contained in a contract focused on a short-term infrastructure deliverable, such as construction of a road, airport, water and sewer facility, or hospital. In PPPs, developing risk-sharing mechanisms is key to enhancing the returns to both the public and private sector. PPPs are based upon a stewardship model in which the private sector takes a more aggressive role in aspects of the project from which it had previously been excluded in the conventional procurement approach, such as design, financing, operations and maintenance. The hypothesis is that when the private sector assumes greater responsibility in the project, there will be incentives to ensure a steady stream of revenue for the private sector over the life of the project. …", "title": "" }, { "docid": "8c78e7c93153284deb46464082e04a69", "text": "This paper presents the design and construction of a microstrip Yagi array antenna operating at 5.3 GHz, to be used with an avalanche sensor in avalanche measurement. The advantage of the antenna is it can achieve a high gain of 15.2 dB with bandwidth of 8% in compact size. The gain enhancement is achieved by using a compact microstrip Yagi antenna as the array element; separating the feed network from the main radiating elements; and increasing the antenna height by installing the feed layer at the back of the patch layer, sharing the same ground plane. In order to ensure the power is transferred smoothly from the main input port to the radiating elements, the corporate feed is also design and tested. The fabricated antenna shows an agreeable performance with the simulated version.", "title": "" }, { "docid": "c5021fd377f1d7ebd8f87fb114ed07d9", "text": "In this essay a new theory of stress and linguistic rhythm will be elaborated, based on the proposals of Liberman (1975).' It will be argued that certain features of prosodic systems like that of English, in particular the phenomenon of \"stress subordination\", are not to be referred primarily to the properties of individual segments (or syllables), but rather reflect a hierarchical rhythmic structuring that organizes the syllables, words, and syntactic phrases of a sentence. The character of this structuring, properly understood, will give fresh insight into phenomena that have been apprehended in terms of the phonological cycle, the stress-subordination convention, the theory of disjunctive ordering, and the use of crucial variables in phonological rules. 
Our theory will employ two basic ideas about the representation of traditional prosodic concepts: first, we represent the notion relative prominence in terms of a relation defined on constituent structure; and second, we represent certain aspects of the notion linguistic rhythm in terms of the alignment of linguistic material with a \"metrical grid\". The perceived \"stressing\" of an utterance, we think, reflects the combined influence of a constituent-structure pattern and its grid alignment. This pattern-grid combination is reminiscent of the traditional picture of verse scansion, so that the theory as a whole deserves the name \"metrical\". We will also use the expression \"'metrical theory\" as a convenient term for that portion of the theory which deals with the assignment of relative prominence in terms of a relation defined on constituent structure. Section 1 will apply the metrical theory of stress-pattern assignment to the system of English phrasal stress, arguing this theory's value in rationalizing otherwise arbitrary characteristics of stress features and stress rules. Section 2 will extend this treatment to the domain of English word stress, adopting a somewhat traditional view of the assignment of the feature [+stress], but explaining the generation of word-level * We would like to thank", "title": "" }, { "docid": "4ff2e867a47fa27a95e5c190136dd73a", "text": "Lack of trust is one of the most frequently cited reasons for consumers not purchasing from Internet vendors. During the last four years a number of empirical studies have investigated the role of trust in the specific context of e-commerce, focusing on different aspects of this multi-dimensional construct. However, empirical research in this area is beset by conflicting conceptualizations of the trust construct, inadequate understanding of the relationships between trust, its antecedents and consequents, and the frequent use of trust scales that are neither theoretically derived nor rigorously validated. The major objective of this paper is to provide an integrative review of the empirical literature on trust in e-commerce in order to allow cumulative analysis of results. The interpretation and comparison of different empirical studies on on-line trust first requires conceptual clarification. A set of trust constructs is proposed that reflects both institutional phenomena (system trust) and personal and interpersonal forms of trust (dispositional trust, trusting beliefs, trusting intentions and trust-related behaviours), thus facilitating a multi-level and multi-dimensional analysis of research problems related to trust in e-commerce. r 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "18dcf52ce2b8c6bf8fb5c4eb839b6795", "text": "The use of information technology (IT) as a competitive weapon has become a popular cliché; but there is still a marked lack of understanding of the issues that determine the influence of information technology on a particular organization and the processes that will allow a smooth coordination of technology and corporate strategy. This article surveys the major efforts to arrive at a relevant framework and attempts to integrate them in a more comprehensive viewpoint. The focus then turns to the major research issues in understanding the impact of information technology on competitive strategy. 
Copyright © 1986 Yannis Bakos and Michael Treacy", "title": "" }, { "docid": "6e3e881cb1bb05101ad0f38e3f21e547", "text": "Mechanical valves used for aortic valve replacement (AVR) continue to be associated with bleeding risks because of anticoagulation therapy, while bioprosthetic valves are at risk of structural valve deterioration requiring reoperation. This risk/benefit ratio of mechanical and bioprosthetic valves has led American and European guidelines on valvular heart disease to be consistent in recommending the use of mechanical prostheses in patients younger than 60 years of age. Despite these recommendations, the use of bioprosthetic valves has significantly increased over the last decades in all age groups. A systematic review of manuscripts applying propensity-matching or multivariable analysis to compare the usage of mechanical vs. bioprosthetic valves found either similar outcomes between the two types of valves or favourable outcomes with mechanical prostheses, particularly in younger patients. The risk/benefit ratio and choice of valves will be impacted by developments in valve designs, anticoagulation therapy, reducing the required international normalized ratio, and transcatheter and minimally invasive procedures. However, there is currently no evidence to support lowering the age threshold for implanting a bioprosthesis. Physicians in the Heart Team and patients should be cautious in pursuing more bioprosthetic valve use until its benefit is clearly proven in middle-aged patients.", "title": "" }, { "docid": "b05d36b98d68c9407e6cb213bcf03709", "text": "With the continuous increase in data velocity and volume nowadays, preserving system and data security is particularly affected. In order to handle the huge amount of data and to discover security incidents in real-time, analyses of log data streams are required. However, most of the log anomaly detection techniques fall short in considering continuous data processing. Thus, this paper aligns an anomaly detection technique for data stream processing. It thereby provides a conceptual basis for future adaption of other techniques and further delivers proof of concept by prototype implementation.", "title": "" }, { "docid": "e59bd7353cdbd4f353e45990a2c24c63", "text": "We describe CACTI-IO, an extension to CACTI [4] that includes power, area and timing models for the IO and PHY of the off-chip memory interface for various server and mobile configurations. CACTI-IO enables design space exploration of the off-chip IO along with the DRAM and cache parameters. We describe the models added and three case studies that use CACTI-IO to study the tradeoffs between memory capacity, bandwidth and power.\n The case studies show that CACTI-IO helps (i) provide IO power numbers that can be fed into a system simulator for accurate power calculations, (ii) optimize off-chip configurations including the bus width, number of ranks, memory data width and off-chip bus frequency, especially for novel buffer-based topologies, and (iii) enable architects to quickly explore new interconnect technologies, including 3-D interconnect. 
We find that buffers on board and 3-D technologies offer an attractive design space involving power, bandwidth and capacity when appropriate interconnect parameters are deployed.", "title": "" }, { "docid": "5293dc28da110096fee7be1da7bf52b2", "text": "The function of brown adipose tissue is to transfer energy from food into heat; physiologically, both the heat produced and the resulting decrease in metabolic efficiency can be of significance. Both the acute activity of the tissue, i.e., the heat production, and the recruitment process in the tissue (that results in a higher thermogenic capacity) are under the control of norepinephrine released from sympathetic nerves. In thermoregulatory thermogenesis, brown adipose tissue is essential for classical nonshivering thermogenesis (this phenomenon does not exist in the absence of functional brown adipose tissue), as well as for the cold acclimation-recruited norepinephrine-induced thermogenesis. Heat production from brown adipose tissue is activated whenever the organism is in need of extra heat, e.g., postnatally, during entry into a febrile state, and during arousal from hibernation, and the rate of thermogenesis is centrally controlled via a pathway initiated in the hypothalamus. Feeding as such also results in activation of brown adipose tissue; a series of diets, apparently all characterized by being low in protein, result in a leptin-dependent recruitment of the tissue; this metaboloregulatory thermogenesis is also under hypothalamic control. When the tissue is active, high amounts of lipids and glucose are combusted in the tissue. The development of brown adipose tissue with its characteristic protein, uncoupling protein-1 (UCP1), was probably determinative for the evolutionary success of mammals, as its thermogenesis enhances neonatal survival and allows for active life even in cold surroundings.", "title": "" }, { "docid": "e10b5a0363897f6e7cbb128a4d2f7cd7", "text": "We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator’s objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.", "title": "" }, { "docid": "4f02e48932129dd77f48f99478c08ab2", "text": "A low-power low-voltage OTA with rail-to-rail output is introduced. The proposed topology is based on the common current mirror OTA topology and provide gain enhancement without extra power consumption. Implemented in a standard 0.25/spl mu/m CMOS technology, the proposed OTA achieves 50 dB DC gain in 0.8 V supply voltage. The GBW is 1.2MHz and the static power consumption is 8/spl mu/W while driving 18pF load. The class AB operation increases the slew rate and still maintains low static biasing current. This topology is suitable for low-power low-voltage switched-capacitor application.", "title": "" }, { "docid": "45342a42547f265da8ae9b0e8f8fde1b", "text": "YAGO is a large knowledge base that is built automatically from Wikipedia, WordNet and GeoNames. 
The project combines information from Wikipedias in 10 different languages, thus giving the knowledge a multilingual dimension. It also attaches spatial and temporal information to many facts, and thus allows the user to query the data over space and time. YAGO focuses on extraction quality and achieves a manually evaluated precision of 95%. In this paper, we explain from a general perspective how YAGO is built from its sources, how its quality is evaluated, how a user can access it, and how other projects utilize it.", "title": "" }, { "docid": "adf3678a3f1fcd5db580a417194239f2", "text": "In training deep neural networks for semantic segmentation, the main limiting factor is the low amount of ground truth annotation data that is available in currently existing datasets. The limited availability of such data is due to the time cost and human effort required to accurately and consistently label real images on a pixel level. Modern sandbox video game engines provide open world environments where traffic and pedestrians behave in a pseudo-realistic manner. This caters well to the collection of a believable road-scene dataset. Utilizing open-source tools and resources found in single-player modding communities, we provide a method for persistent, ground truth, asset annotation of a game world. By collecting a synthetic dataset containing upwards of 1,000,000 images, we demonstrate realtime, on-demand, ground truth data annotation capability of our method. Supplementing this synthetic data to Cityscapes dataset, we show that our data generation method provides qualitative as well as quantitative improvements—for training networks—over previous methods that use video games as surrogate.", "title": "" }, { "docid": "9a8f782acaf09a6a09ceeacfa0fd9fee", "text": "The objective of the current study was to compare the effects of sensory-integration therapy (SIT) and a behavioral intervention on rates of challenging behavior (including self-injurious behavior) in four children diagnosed with Autism Spectrum Disorder. For each of the participants a functional assessment was conducted to identify the variables maintaining challenging behavior. Results of these assessments were used to design function-based behavioral interventions for each participant. Recommendations for the sensory-integration treatment were designed by an Occupational Therapist, trained in the use of sensory-integration theory and techniques. The sensory-integration techniques were not dependent on the results of the functional assessments. The study was conducted within an alternating treatments design, with initial baseline and final best treatment phase. For each participant, results demonstrated that the behavioral intervention was more effective than the sensory integration therapy in the treatment of challenging behavior. In the best treatment phase, the behavioral intervention alone was implemented and further reduction was observed in the rate of challenging behavior.
Analysis of saliva samples revealed relatively low levels of cortisol and very little stress-responsivity across the SIT condition and the behavioral intervention condition, which may be related to the participants' capacity to perceive stress in terms of its social significance.", "title": "" }, { "docid": "936c1c708beea8a40831cf72094636ff", "text": "PURPOSE\nTo evaluate the problems encountered on revising a multiply operated nose and the methods used in correcting such problems.\n\n\nPATIENTS AND METHODS\nThe study included 50 cases presenting for revision rhinoplasty after having had 2 or more previous rhinoplasties. An external rhinoplasty approach was used in all cases. Simultaneous septal surgery was done whenever indicated. All cases were followed for a mean period of 32 months (range, 1.5-8 years). Evaluation of the surgical result depended on clinical examination, comparison of pre- and postoperative photographs, and degree of patients' satisfaction with their aesthetic and functional outcome.\n\n\nRESULTS\nFunctionally, 68% suffered nasal obstruction that was mainly caused by septal deviations and nasal valve problems. Aesthetically, the most common deformities of the upper two thirds of the nose included pollybeak (64%), dorsal irregularities (54%), dorsal saddle (44%), and open roof deformity (42%), whereas the deformities of lower third included depressed tip (68%), tip contour irregularities (60%), and overrotated tip (42%). Nasal grafting was necessary in all cases; usually more than 1 type of graft was used in each case. Postoperatively, 79% of the patients, with preoperative nasal obstruction, reported improved breathing; 84% were satisfied with their aesthetic result; and only 8 cases (16%) requested further revision to correct minor deformities.\n\n\nCONCLUSION\nRevision of a multiply operated nose is a complex and technically demanding task, yet, in a good percentage of cases, aesthetic as well as functional improvement are still possible.", "title": "" } ]
scidocsrr
82c335cb63c733ff7b5a4566b60b40a6
Modeling indoor space
[ { "docid": "d123465734c3e1029827267f46027bdc", "text": "Previous recent research on human wayfinding has focused primarily on mental representations rather than processes of wayfinding. This paper presents a formal model of some aspects of the process of wayfinding, where appropriate elements of human perception and cognition are formally realized using image schemata and affordances. The goal-driven reasoning chain that leads to action begins with incomplete and imprecise knowledge derived from imperfect observations of space. Actions result in further observations, derived knowledge and, recursively, further actions, until the goal is achieved or the wayfinder gives up. This paper gives a formalization of this process, using a modal extension to classical propositional logic to represent incomplete knowledge. Both knowledge and action are represented through a wayfinding graph. A special case of wayfinding in a building, that is finding one’s way through an airport, is used to demonstrate the formal model.", "title": "" }, { "docid": "18e1f1171844fa27905246b9246cc975", "text": "Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topoIogica1. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments. @ 1998 Elsevier Science B.V.", "title": "" } ]
[ { "docid": "c8ba8d59bb92778921eea146181fa2b8", "text": "MOTIVATION\nProtein interaction networks provide an important system-level view of biological processes. One of the fundamental problems in biological network analysis is the global alignment of a pair of networks, which puts the proteins of one network into correspondence with the proteins of another network in a manner that conserves their interactions while respecting other evidence of their homology. By providing a mapping between the networks of different species, alignments can be used to inform hypotheses about the functions of unannotated proteins, the existence of unobserved interactions, the evolutionary divergence between the two species and the evolution of complexes and pathways.\n\n\nRESULTS\nWe introduce GHOST, a global pairwise network aligner that uses a novel spectral signature to measure topological similarity between subnetworks. It combines a seed-and-extend global alignment phase with a local search procedure and exceeds state-of-the-art performance on several network alignment tasks. We show that the spectral signature used by GHOST is highly discriminative, whereas the alignments it produces are also robust to experimental noise. When compared with other recent approaches, we find that GHOST is able to recover larger and more biologically significant, shared subnetworks between species.\n\n\nAVAILABILITY\nAn efficient and parallelized implementation of GHOST, released under the Apache 2.0 license, is available at http://cbcb.umd.edu/kingsford_group/ghost\n\n\nCONTACT\[email protected].", "title": "" }, { "docid": "f2707d7fcd5d8d9200d4cc8de8ff1042", "text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. 
We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.", "title": "" }, { "docid": "c61f9e85a3a804ceb06b835cd94b37cf", "text": "The recognition of personal emotional state or sentiment conveyed through text is the main task we address in our research. The communication of emotions through text messaging and posts of personal blogs poses the ‘informal style of writing’ challenge for researchers expecting grammatically correct input. Our Affect Analysis Model was designed to handle the informal messages written in an abbreviated or expressive manner. While constructing our rule-based approach to affect recognition from text, we followed the compositionality principle. Our method is capable of processing sentences of different complexity, including simple, compound, complex (with complement and relative clauses), and complex-compound sentences. The evaluation of the Affect Analysis Model algorithm showed promising results regarding its capability to accurately recognize affective information in text from an existing corpus of personal blog posts.", "title": "" }, { "docid": "f4aa06f7782a22eeb5f30d0ad27eaff9", "text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.", "title": "" }, { "docid": "7ab15804bd53aa8288aafc5374a12499", "text": "We have used a modified technique in five patients to correct winging of the scapula caused by injury to the brachial plexus or the long thoracic nerve during transaxillary resection of the first rib. The procedure stabilises the scapulothoracic articulation by using strips of autogenous fascia lata wrapped around the 4th, 6th and 7th ribs at least two, and preferably three, times. The mean age of the patients at the time of operation was 38 years (26 to 47) and the mean follow-up six years and four months (three years and three months to 11 years). Satisfactory stability was achieved in all patients with considerable improvement in shoulder function. There were no complications.", "title": "" }, { "docid": "d4452dbdfb23d367d477607b5f8f42af", "text": "Micro-expressions are brief involuntary facial expressions that reveal genuine emotions and, thus, help detect lies. Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. This paper is an extended version of our International Conference on Pattern Recognition paper, in which we propose a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. In this paper, we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimension array. 
The first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves a higher accuracy than does that of RGB. In addition, we define a set of regions of interest (ROIs) based on the facial action coding system and calculated the dynamic texture histograms for each ROI. Experiments are conducted on two micro-expression databases, CASME and CASME 2, and the results show that the performances for TICS, CIELab, and CIELuv are better than those for RGB or gray.", "title": "" }, { "docid": "6fdd0c7d239417234cfc4706a82b5a0f", "text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks [1], e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone [2] and Duo Lingo [3]. The approach is grounded in control theory and capitalizes on recent work by [4], [5] that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on [4], [5] in several ways: (1) We develop a novel student model in which the teacher's actions can partially eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted analytically rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through deeper learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.", "title": "" }, { "docid": "6f219ca6ff184ffc4dce78b95093c219", "text": "In this paper, a novel fabric defect detection scheme based on HOG and SVM is proposed. Firstly, each block-based feature of the image is encoded using the histograms of orientated gradients (HOG), which are insensitive to various lightings and noises. Then, a powerful feature selection algorithm, AdaBoost, is performed to automatically select a small set of discriminative HOG features in order to achieve robust detection results. In the end, support vector machine (SVM) is used to classify the fabric defects. Experimental results demonstrate the efficiency of our proposed algorithm.", "title": "" }, { "docid": "98256343ba583748729e7b8ea3ff2244", "text": "Taipei Metro adopted the diode-grounded scheme for stray current collection in construction of its cross rail network. During operation of the network, a high rail-to-earth potential (V/sub rail/) has been observed at the east end of the Blue Line (i.e., stations BL13-BL16).
To find effective countermeasures, a series of field tests in a step-by-step development nature was conducted from 1999-2000, which led to the decision of disconnecting the impedance bond at G11 of the tie line so that the negative return current of the Blue Line cannot flow to the rails of the Red-Green Line, and vice versa (detailed in Sections III-A and V-C). This decision was implemented through contract-out work in 2003. Since then, the V/sub rail/ has been lowered by almost half before disconnection. To gain the insight characteristic before the contract-out, numerical simulations were also conducted by simulating the multi-train and multisection features of the cross transportation network. The simulation results (for V/sub rail/ and stray current, or I/sub stray/) were consistent with the field-test results. This paper presents the design of these field tests and their test results in comparison with the simulation results, based on which the countermeasures for reducing V/sub rail/ and the present status after V/sub rail/ reduction at Taipei Metro are presented.", "title": "" }, { "docid": "e63ad73be1999b26a9498513dcfec4a8", "text": "Novice qualitative researchers are often unsure regarding the analysis of their data and, where grounded theory is chosen, they may be uncertain regarding the differences that now exist between the approaches of Glaser and Strauss, who together first described the method. These two approaches are compared in relation to roots and divergences, role of induction, deduction and verification, ways in which data are coded and the format of generated theory. Personal experience of developing as a ground theorist is used to illustrate some of the key differences. A conclusion is drawn that, rather than debate relative merits of the two approaches, suggests that novice researchers need to select the method that best suits their cognitive style and develop analytic skills through doing research.", "title": "" }, { "docid": "d6b46b598f4fcbee933c1d0aff29c96c", "text": "Neural network based sequence-to-sequence models in an encoder-decoder framework have been successfully applied to solve Question Answering (QA) problems, predicting answers from statements and questions. However, almost all previous models have failed to consider detailed context information and unknown states under which systems do not have enough information to answer given questions. These scenarios with incomplete or ambiguous information are very common in the setting of Interactive Question Answering (IQA). To address this challenge, we develop a novel model, employing context-dependent word-level attention for more accurate statement representations and question-guided sentence-level attention for better context modeling. We also generate unique IQA datasets to test our model, which will be made publicly available. Employing these attention mechanisms, our model accurately understands when it can output an answer or when it requires generating a supplementary question for additional input depending on different contexts. When available, user's feedback is encoded and directly applied to update sentence-level attention to infer an answer. Extensive experiments on QA and IQA datasets quantitatively demonstrate the effectiveness of our model with significant improvement over state-of-the-art conventional QA models.", "title": "" }, { "docid": "9666ac68ee1aeb8ce18ccd2615cdabb2", "text": "As the bring your own device (BYOD) to work trend grows, so do the network security risks. 
This fast-growing trend has huge benefits for both employees and employers. With malware, spyware and other malicious downloads, tricking their way onto personal devices, organizations need to consider their information security policies. Malicious programs can download onto a personal device without a user even knowing. This can have disastrous results for both an organization and the personal device. When this happens, it risks BYODs making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect with huge financial and legal implications, and loss of productivity for organizations. This is a difficult challenge. Organizations need to consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that challenge organizations and the use of BYODs. After analysis of large volumes of research, the previous studies addressed single issues. We integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Primary results of the study are positive with the framework reducing access control issues. Keywords—Bring your own device; access control; policy; security", "title": "" }, { "docid": "6fbf1dff8df2c97f44e236a9c7ffac2a", "text": "The generation of multimode orbital angular momentum (OAM) carrying beams has attracted more and more attention. A broadband dual-polarized dual-OAM-mode uniform circular array is proposed in this letter. The proposed antenna array, which consists of a broadband dual-polarized bow-tie dipole array and a broadband phase-shifting feeding network, can be used to generate OAM mode −1 and OAM mode 1 beams from 2.1 to 2.7 GHz (a bandwidth of 25%) for each of two polarizations. Four orthogonal channels can be provided by the proposed antenna array. A 2.5-m broadband OAM link is built. The measured crosstalk between the mode matched channels and the mode mismatched channels is less than −12 dB at 2.1, 2.4, and 2.7 GHz. Four different data streams are transmitted simultaneously by the proposed array with a bit error rate less than 4.2×10-3 at 2.1, 2.4, and 2.7 GHz.", "title": "" }, { "docid": "746c1feda23b8d685e9908001d8df0ab", "text": "Breast cancer is one of the leading causes of cancer death among women worldwide. The proposed approach comprises three steps as follows. Firstly, the image is preprocessed to remove speckle noise while preserving important features of the image. Three methods are investigated, i.e., Frost Filter, Detail Preserving Anisotropic Diffusion, and Probabilistic Patch-Based Filter. Secondly, Normalized Cut or Quick Shift is used to provide an initial segmentation map for breast lesions. Thirdly, a postprocessing step is proposed to select the correct region from a set of candidate regions. This approach is implemented on a dataset containing 20 B-mode ultrasound images, acquired from UDIAT Diagnostic Center of Sabadell, Spain. The overall system performance is determined against the ground truth images. 
The best system performance is achieved through the following combinations: Frost Filter with Quick Shift, Detail Preserving Anisotropic Diffusion with Normalized Cut and Probabilistic Patch-Based with Normalized Cut.", "title": "" }, { "docid": "e1eecbed3cc24ec6998890b6154afc7e", "text": "Neural networks are becoming central in several areas of computer vision and image processing and different architectures have been proposed to solve specific problems. The impact of the loss layer of neural networks, however, has not received much attention in the context of image processing: the default and virtually only choice is $\\ell _2$. In this paper, we bring attention to alternative choices for image restoration. In particular, we show the importance of perceptually-motivated losses when the resulting image is to be evaluated by a human observer. We compare the performance of several losses, and propose a novel, differentiable error function. We show that the quality of the results improves significantly with better loss functions, even when the network architecture is left unchanged.", "title": "" }, { "docid": "5c58eb86ec2fb61a4c26446a41a9037a", "text": "The filter bank methods have been a popular non-parametric way of computing the complex amplitude spectrum. So far, the length of the filters in these filter banks has been set to some constant value independently of the data. In this paper, we take the first step towards considering the filter length as an unknown parameter. Specifically, we derive a very simple and approximate way of determining the optimal filter length in a data-adaptive way. Based on this analysis, we also derive a model averaged version of the forward and the forward-backward amplitude spectral Capon estimators. Through simulations, we show that these estimators significantly improve the estimation accuracy compared to the traditional Capon estimators.", "title": "" }, { "docid": "4d9adaac8dc69f902056d531f7570da7", "text": "A new CMOS buffer without short-circuit power consumption is proposed. The gatedriving signal of the output pull-up (pull-down) transistor is fed back to the output pull-down (pull-up) transistor to get tri-state output momentarily, eliminating the short-circuit power consumption. The HSPICE simulation results verified the operation of the proposed buffer and showed the power-delay product is about 15% smaller than conventional tapered CMOS buffer.", "title": "" }, { "docid": "b71b9a6990866c89ab7bc65338f61a9d", "text": "This paper compares advantages and disadvantages of several commonly used current sensing methods such as dedicated sense resistor sensing, MOSFET Rds(on) current sensing, and inductor DC resistance (DCR) current sensing. Among these current sense methods, inductor DCR current sense that shows more advantages over other current sensing methods is chosen for analysis. The time constants mismatch issue between the time constant made by the current sensing RC network and the one formed with output inductor and its DC resistance is addressed in this paper. And an unified small signal modeling of a buck converter using inductor DCR current sensing with matched and mismatched time constants is presented, and the modeling has been verified experimentally.", "title": "" }, { "docid": "ffcd5db955741de747fe7323595f4291", "text": "We propose an approach to cross-lingual named entity recognition model transfer without the use of parallel corpora. 
In addition to global de-lexicalized features, we introduce multilingual gazetteers that are generated using graph propagation, and cross-lingual word representation mappings without the use of parallel data. We target the e-commerce domain, which is challenging due to its unstructured and noisy nature. The experiments have shown that our approaches beat the strong MT baseline, where the English model is transferred to two languages: Spanish and Chinese.", "title": "" }, { "docid": "d882657765647d9e84b8ad729a079833", "text": "Multiple treebanks annotated under heterogeneous standards give rise to the research question of best utilizing multiple resources for improving statistical models. Prior research has focused on discrete models, leveraging stacking and multi-view learning to address the problem. In this paper, we empirically investigate heterogeneous annotations using neural network models, building a neural network counterpart to discrete stacking and multiview learning, respectively, finding that neural models have their unique advantages thanks to the freedom from manual feature engineering. Neural model achieves not only better accuracy improvements, but also an order of magnitude faster speed compared to its discrete baseline, adding little time cost compared to a neural model trained on a single treebank.", "title": "" } ]
scidocsrr
f3b6384ba243589c11a67aedbce697b3
Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue
[ { "docid": "d15e7e655e7afc86e30e977516de7720", "text": "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection/localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.", "title": "" }, { "docid": "d4fb664caa02b81909bc51291d3fafd7", "text": "This paper offers the first variational approach to the problem of dense 3D reconstruction of non-rigid surfaces from a monocular video sequence. We formulate non-rigid structure from motion (nrsfm) as a global variational energy minimization problem to estimate dense low-rank smooth 3D shapes for every frame along with the camera motion matrices, given dense 2D correspondences. Unlike traditional factorization based approaches to nrsfm, which model the low-rank non-rigid shape using a fixed number of basis shapes and corresponding coefficients, we minimize the rank of the matrix of time-varying shapes directly via trace norm minimization. In conjunction with this low-rank constraint, we use an edge preserving total-variation regularization term to obtain spatially smooth shapes for every frame. Thanks to proximal splitting techniques the optimization problem can be decomposed into many point-wise sub-problems and simple linear systems which can be easily solved on GPU hardware. We show results on real sequences of different objects (face, torso, beating heart) where, despite challenges in tracking, illumination changes and occlusions, our method reconstructs highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.", "title": "" }, { "docid": "9dbf1ae31558c80aff4edf94c446b69e", "text": "This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. 
First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.", "title": "" } ]
[ { "docid": "26699915946647c1c582c1a0ab63b963", "text": "In computer vision problems such as pair matching, only binary information ‘same’ or ‘different’ label for pairs of images is given during training. This is in contrast to classification problems, where the category labels of training images are provided. We propose a unified discriminative dictionary learning approach for both pair matching and multiclass classification tasks. More specifically, we introduce a new discriminative term called ‘pairwise sparse code error’ for the discriminativeness in sparse representation of pairs of signals, and then combine it with the classification error for discriminativeness in classifier construction to form a unified objective function. The solution to the new objective function is achieved by employing the efficient feature-sign search algorithm. The learned dictionary encourages feature points from a similar pair (or the same class) to have similar sparse codes. We validate the effectiveness of our approach through a series of experiments on face verification and recognition problems.", "title": "" }, { "docid": "c3c58760970768b9a839184f9e0c5b29", "text": "The anatomic structures in the female that prevent incontinence and genital organ prolapse on increases in abdominal pressure during daily activities include sphincteric and supportive systems. In the urethra, the action of the vesical neck and urethral sphincteric mechanisms maintains urethral closure pressure above bladder pressure. Decreases in the number of striated muscle fibers of the sphincter occur with age and parity. A supportive hammock under the urethra and vesical neck provides a firm backstop against which the urethra is compressed during increases in abdominal pressure to maintain urethral closure pressures above the rapidly increasing bladder pressure. This supporting layer consists of the anterior vaginal wall and the connective tissue that attaches it to the pelvic bones through the pubovaginal portion of the levator ani muscle, and the uterosacral and cardinal ligaments comprising the tendinous arch of the pelvic fascia. At rest the levator ani maintains closure of the urogenital hiatus. They are additionally recruited to maintain hiatal closure in the face of inertial loads related to visceral accelerations as well as abdominal pressurization in daily activities involving recruitment of the abdominal wall musculature and diaphragm. Vaginal birth is associated with an increased risk of levator ani defects, as well as genital organ prolapse and urinary incontinence. Computer models indicate that vaginal birth places the levator ani under tissue stretch ratios of up to 3.3 and the pudendal nerve under strains of up to 33%, respectively. Research is needed to better identify the pathomechanics of these conditions.", "title": "" }, { "docid": "f7d36b012ac92e7a0e3ff26a3b596178", "text": "The purpose of the present text is to present the theory and techniques behind the Gray Level Coocurrence Matrix (GLCM) method, and the stateof-the-art of the field, as applied to two dimensional images. It does not present a survey of practical results. 1 Gray Level Coocurrence Matrices In statistical texture analysis, texture features are computed from the statistical distribution of observed combinations of intensities at specified positions relative to each other in the image. According to the number of intensity points (pixels) in each combination, statistics are classified into first-order, second-order and higher-order statistics. 
The Gray Level Coocurrence Matrix (GLCM) method is a way of extracting second order statistical texture features. The approach has been used in a number of applications, e.g. [5],[6],[14],[5],[7],[12],[2],[8],[10],[1]. A GLCM is a matrix where the number of rows and columns is equal to the number of gray levels, G, in the image. The matrix element P(i, j | ∆x, ∆y) is the relative frequency with which two pixels, separated by a pixel distance (∆x, ∆y), occur within a given neighborhood, one with intensity i and the other with intensity j. One may also say that the matrix element P(i, j | d, θ) contains the second order statistical probability values for changes between gray levels i and j at a particular displacement distance d and at a particular angle (θ). Given an M × N neighborhood of an input image containing G gray levels from 0 to G − 1, let f(m, n) be the intensity at sample m, line n of the neighborhood. Then P(i, j | ∆x, ∆y) = W · Q(i, j | ∆x, ∆y) (1), where W = 1/((M − ∆x)(N − ∆y)) and Q(i, j | ∆x, ∆y) = ∑^{N−∆y} …", "title": "" }, { "docid": "ca4beef505d8a93f399a4b5371816205", "text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related low back injuries and illnesses was carried out as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review evaluated research on a broad range of occupational therapy-related intervention procedures and approaches. Findings from the review indicate that the evidence is insufficient to support or refute the effectiveness of exercise therapy and other conservative treatments for subacute and chronic low back injuries. The research reviewed strongly suggests that for interventions to be effective, occupational therapy practitioners should use a holistic, client-centered approach. The research supports the need for occupational therapy practitioners to consider multiple strategies for addressing clients' needs. Specifically, interventions for individuals with low back injuries and illnesses should incorporate a biopsychosocial, client-centered approach that includes actively involving the client in the rehabilitation process at the beginning of the intervention process and addressing the client's psychosocial needs in addition to his or her physical impairments. The implications for occupational therapy practice, research, and education are also discussed.", "title": "" }, { "docid": "a4aa085507cc018af3735b5a848446da", "text": "Domain Name System (DNS) is ubiquitous in any network. DNS tunnelling is a technique to transfer data, convey messages or conduct TCP activities over DNS protocol that is typically not blocked or watched by security enforcement such as firewalls. As a technique, it can be utilized in many malicious ways which can compromise the security of a network by the activities of data exfiltration, cyber-espionage, and command and control. On the other side, it can also be used by legitimate users. The traditional methods may not be able to distinguish between legitimate and malicious uses even if they can detect the DNS tunnelling activities. We propose a behaviour analysis based method that can not only detect the DNS tunnelling, but also classify the activities in order to catch and block the malicious tunnelling traffic.
The proposed method can achieve the scale of real-time detection on fast and large DNS data with the use of big data technologies in offline training and online detection systems.", "title": "" }, { "docid": "9d0a383122a7aa73053cededb64b418d", "text": "With the explosive growth of Internet of Things devices and massive data produced at the edge of the network, the traditional centralized cloud computing model has come to a bottleneck due to the bandwidth limitation and resources constraint. Therefore, edge computing, which enables storing and processing data at the edge of the network, has emerged as a promising technology in recent years. However, the unique features of edge computing, such as content perception, real-time computing, and parallel processing, has also introduced several new challenges in the field of data security and privacy-preserving, which are also the key concerns of the other prevailing computing paradigms, such as cloud computing, mobile cloud computing, and fog computing. Despites its importance, there still lacks a survey on the recent research advance of data security and privacy-preserving in the field of edge computing. In this paper, we present a comprehensive analysis of the data security and privacy threats, protection technologies, and countermeasures inherent in edge computing. Specifically, we first make an overview of edge computing, including forming factors, definition, architecture, and several essential applications. Next, a detailed analysis of data security and privacy requirements, challenges, and mechanisms in edge computing are presented. Then, the cryptography-based technologies for solving data security and privacy issues are summarized. The state-of-the-art data security and privacy solutions in edge-related paradigms are also surveyed. Finally, we propose several open research directions of data security in the field of edge computing.", "title": "" }, { "docid": "8f73870d5e999c0269059c73bb85e05c", "text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.", "title": "" }, { "docid": "cde4d7457b949420ab90bdc894f40eb0", "text": "We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. 
Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a labelaware double transfer learning framework (LaDTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that LaDTL is potential to be seamlessly adapted to a wide range of NER tasks.", "title": "" }, { "docid": "a363b4cec11d5328012a1cd0f13ba747", "text": "Techniques for partitioning objects into optimally homogeneous groups on the basis of empirical measures of similarity among those objects have received increasing attention in several different fields. This paper develops a useful correspondence between any hierarchical system of such clusters, and a particular type of distance measure. The correspondence gives rise to two methods of clustering that are computationally rapid and invariant under monotonic transformations of the data. In an explicitly defined sense, one method forms clusters that are optimally \"connected,\" while the other forms clusters that are optimally \"compact.\"", "title": "" }, { "docid": "d8dd68593fd7bd4bdc868634deb9661a", "text": "We present a low-cost IoT based system able to monitor acoustic, olfactory, visual and thermal comfort levels. The system is provided with different ambient sensors, computing, control and connectivity features. The integration of the device with a smartwatch makes it possible the analysis of the personal comfort parameters.", "title": "" }, { "docid": "ccce159596bf45910117a80ee54090a5", "text": "The parietal lobe plays a major role in sensorimotor integration and action. Recent neuroimaging studies have revealed more than 40 retinotopic areas distributed across five visual streams in the human brain, two of which enter the parietal lobe. A series of retinotopic areas occupy the length of the intraparietal sulcus and continue into the postcentral sulcus. On themedial wall, retinotopy extends across the parieto-occipital sulcus into the precuneus and reaches the cingulate sulcus. Full-body tactile stimulation revealed a multisensory homunculus lying along the postcentral sulcus just posterior to primary somatosensory cortical areas and overlapping with the anteriormost retinotopic maps. These topologically organized higher-level maps lay the foundation for actions in peripersonal space (e.g., reaching and grasping) aswell as navigation through space. A preliminary yet comprehensive multilayer functional atlas was constructed to specify the relative locations of cortical unisensory, multisensory, and action representations. We expect that those areal and functional definitions will be refined by future studies using more sophisticated stimuli and tasks tailored to regions with different specificity. 
The long-term goal is to construct an online surface-based atlas containing layered maps of multiple modalities that can be used as a reference to understand the functions and disorders of the parietal lobe.", "title": "" }, { "docid": "8f73a521d7703fa00bbaf7b68e470c55", "text": "Purpose – The purpose of this paper is to introduce the concept of strategic integration of knowledge management (KM ) and customer relationship management (CRM). The integration is a strategic issue that has strong ramifications in the long-term competitiveness of organizations. It is not limited to CRM; the concept can also be applied to supply chain management (SCM), product development management (PDM), eterprise resource planning (ERP) and retail network management (RNM) that offer different perspectives into knowledge management adoption. Design/methodology/approach – Through literature review and establishing new perspectives with examples, the components of knowledge management, customer relationship management, and strategic planning are amalgamated. Findings – Findings include crucial details in the various components of knowledge management, customer relationship management, and strategic planning, i.e. strategic planning process, value formula, intellectual capital measure, different levels of CRM and their core competencies. Practical implications – Although the strategic integration of knowledge management and customer relationship management is highly conceptual, a case example has been provided where the concept is applied. The same concept could also be applied to other industries that focus on customer service. Originality/value – The concept of strategic integration of knowledge management and customer relationship management is new. There are other areas, yet to be explored in terms of additional integration such as SCM, PDM, ERP, and RNM. The concept of integration would be useful for future research as well as for KM and CRM practitioners.", "title": "" }, { "docid": "236dc9aa7d8c78698cbff770184db32b", "text": "The prevalence of diet-related chronic diseases strongly impacts global health and health services. Currently, it takes training and strong personal involvement to manage or treat these diseases. One way to assist with dietary assessment is through computer vision systems that can recognize foods and their portion sizes from images and output the corresponding nutritional information. When multiple food items may exist, a food segmentation stage should also be applied before recognition. In this study, we propose a method to detect and segment the food of already detected dishes in an image. The method combines region growing/merging techniques with a deep CNN-based food border detection. A semi-automatic version of the method is also presented that improves the result with minimal user input. The proposed methods are trained and tested on non-overlapping subsets of a food image database including 821 images, taken under challenging conditions and annotated manually. The automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 92%, respectively, in roughly 0.5 seconds per image.", "title": "" }, { "docid": "a402ac37db42996e6fccca9d2da056ee", "text": "This article presents an up-to-date review of the several extraction methods commonly used to determine the value of the threshold voltage of MOSFETs. 
It includes the different methods that extract this quantity from the drain current versus gate voltage transfer characteristics measured under linear operation conditions for crystalline and non-crystalline MOSFETs. The various methods presented for the linear region are adapted to the saturation region and tested as a function of drain voltage whenever possible. The implementation of the extraction methods is discussed and tested by applying them to real state-ofthe-art devices in order to compare their performance. The validity of the different methods with respect to the presence of parasitic series resistance is also evaluated using 2-D simulations. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2a5339fdb6b4f8a9a28af908da7b168d", "text": "In this paper we propose a human interface device that converts the mechanism of hand sign language into alphanumerical characters. This device is in the form of a portable right hand glove. We propose this device in concurrence with assistive engineering to help the underprivileged. Our main goal is to identify 26 alphabets and 10 numbers of American Sign Language and display it on the LCD. Once the text is obtained on the LCD, text to speech conversion operation is carried out and a voice output is obtained. Further, the text obtained can also be viewed on a PC or any portable hand held device. People with hearing disability find it difficult to communicate with others using their Universal Sign Language, as a normal person doesn't understand these sign languages. Our main objective is to set an interface between the Deaf/Dumb and normal person to improve the communication capabilities so that they can communicate easily with others. We mount dual axis accelerometers on the glove and propose and efficient methodology to convert these sign languages.", "title": "" }, { "docid": "354041896b7375aeedf1018f3d9bb380", "text": "More than 60 percent of the population in the India, agriculture as the primary sector occupation. In recent years, due increase in labor shortage interest has grown for the development of the autonomous vehicles like robots in the agriculture. An robot called agribot have been designed for agricultural purposes. It is designed to minimize the labor of farmers in addition to increasing the speed and accuracy of the work. It performs the elementary functions involved in farming i.e. spraying of pesticide, sowing of seeds, and so on. Spraying pesticides especially important for the workers in the area of potentially harmful for the safety and health of the workers. This is especially important for the workers in the area of potentially harmful for the safety and health of the workers. The Proposed system aims at designing multipurpose autonomous agricultural robotic vehicle which can be controlled through IoT for seeding and spraying of pesticides. These robots are used to reduce human intervention, ensuring high yield and efficient utilization of resources. KeywordsIoT, Agribot, Sprayer, Pesticides", "title": "" }, { "docid": "b63635129ab0663efa374b83f2b77944", "text": "Cannabis sativa L. is an important herbaceous species originating from Central Asia, which has been used in folk medicine and as a source of textile fiber since the dawn of times. This fast-growing plant has recently seen a resurgence of interest because of its multi-purpose applications: it is indeed a treasure trove of phytochemicals and a rich source of both cellulosic and woody fibers. 
Equally highly interested in this plant are the pharmaceutical and construction sectors, since its metabolites show potent bioactivities on human health and its outer and inner stem tissues can be used to make bioplastics and concrete-like material, respectively. In this review, the rich spectrum of hemp phytochemicals is discussed by putting a special emphasis on molecules of industrial interest, including cannabinoids, terpenes and phenolic compounds, and their biosynthetic routes. Cannabinoids represent the most studied group of compounds, mainly due to their wide range of pharmaceutical effects in humans, including psychotropic activities. The therapeutic and commercial interests of some terpenes and phenolic compounds, and in particular stilbenoids and lignans, are also highlighted in view of the most recent literature data. Biotechnological avenues to enhance the production and bioactivity of hemp secondary metabolites are proposed by discussing the power of plant genetic engineering and tissue culture. In particular two systems are reviewed, i.e., cell suspension and hairy root cultures. Additionally, an entire section is devoted to hemp trichomes, in the light of their importance as phytochemical factories. Ultimately, prospects on the benefits linked to the use of the -omics technologies, such as metabolomics and transcriptomics to speed up the identification and the large-scale production of lead agents from bioengineered Cannabis cell culture, are presented.", "title": "" }, { "docid": "486e3f5614f69f60d8703d8641c73416", "text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.", "title": "" }, { "docid": "924f23fa4a8b2140445755ed0a63676f", "text": "This article examined the relationships and outcomes of behaviors falling at the interface of general and sexual forms of interpersonal mistreatment in the workplace. Data were collected with surveys of two different female populations (Ns = 833 and 1,425) working within a large public-sector organization. 
Findings revealed that general incivility and sexual harassment were related constructs, with gender harassment bridging the two. Moreover, these behaviors tended to co-occur in organizations, and employee well-being declined with the addition of each type of mistreatment to the workplace experience. This behavior type (or behavior combination) effect remained significant even after controlling for behavior frequency. The findings are interpreted from perspectives on sexual aggression, social power, and multiple victimization.", "title": "" }, { "docid": "be426354d0338b2b5a17503d30c9665c", "text": "0141-9331/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.micpro.2011.06.002 ⇑ Corresponding author. E-mail address: [email protected] (J. M In this paper, Texas Instruments TMS320C6713 DSP based real-time speech recognition system using Modified One Against All Support Vector Machine (SVM) classifier is proposed. The major contributions of this paper are: the study and evaluation of the performance of the classifier using three feature extraction techniques and proposal for minimizing the computation time for the classifier. From this study, it is found that the recognition accuracies of 93.33%, 98.67% and 96.67% are achieved for the classifier using Mel Frequency Cepstral Coefficients (MFCC) features, zerocrossing (ZC) and zerocrossing with peak amplitude (ZCPA) features respectively. To reduce the computation time required for the systems, two techniques – one using optimum threshold technique for the SVM classifier and another using linear assembly are proposed. The ZC based system requires the least computation time and the above techniques reduce the execution time by a factor of 6.56 and 5.95 respectively. For the purpose of comparison, the speech recognition system is also implemented using Altera Cyclone II FPGA with Nios II soft processor and custom instructions. Of the two approaches, the DSP approach requires 87.40% less number of clock cycles. Custom design of the recognition system on the FPGA without using the soft-core processor would have resulted in less computational complexity. The proposed classifier is also found to reduce the number of support vectors by a factor of 1.12–3.73 when applied to speaker identification and isolated letter recognition problems. The techniques proposed here can be adapted for various other SVM based pattern recognition systems. 2011 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
0d6b58df08c2956b073151fe580781ed
Low-Rank Modeling and Its Applications in Image Analysis
[ { "docid": "783d7251658f9077e05a7b1b9bd60835", "text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.", "title": "" } ]
[ { "docid": "ee5729a9ec24fbb951076a43d4945e8e", "text": "Enhancing the performance of emotional speaker recognition process has witnessed an increasing interest in the last years. This paper highlights a methodology for speaker recognition under different emotional states based on the multiclass Support Vector Machine (SVM) classifier. We compare two feature extraction methods which are used to represent emotional speech utterances in order to obtain best accuracies. The first method known as traditional Mel-Frequency Cepstral Coefficients (MFCC) and the second one is MFCC combined with Shifted-Delta-Cepstra (MFCC-SDC). Experimentations are conducted on IEMOCAP database using two multiclass SVM approaches: One-Against-One (OAO) and One Against-All (OAA). Obtained results show that MFCC-SDC features outperform the conventional MFCC. Keywords—Emotion; Speaker recognition; Mel Frequency Cepstral Coefficients (MFCC); Shifted-Delta-Cepstral (SDC); SVM", "title": "" }, { "docid": "f5a8d2d7ea71fa5444cc1594dc0cf5ab", "text": "Radar sensors operating in the 76–81 GHz range are considered key for Advanced Driver Assistance Systems (ADAS) like adaptive cruise control (ACC), collision mitigation and avoidance systems (CMS) or lane change assist (LCA). These applications are the next wave in automotive safety systems and have thus generated increased interest in lower-cost solutions especially for the mm-wave front-end (FE) section. Today, most of the radar sensors in this frequency range use GaAs based FEs. These multi-chip GaAs FEs are a main cost driver in current radar sensors due to their low integration level. The step towards monolithic microwave integrated circuits (MMIC) based on a 200 GHz ft silicon-germanium (SiGe) technology integrating all needed RF building blocks (mixers, VCOs, dividers, buffers, PAs) on an single die does not only lead to cost reductions but also benefits the testability of these MMICs. This is especially important in the light of upcoming functional safety standards like ASIL-D and ISO26262.", "title": "" }, { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" }, { "docid": "38a74fff83d3784c892230255943ee23", "text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. 
Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.", "title": "" }, { "docid": "f366e1378b86e7fbed2252754502cf59", "text": "Multilabel learning deals with data associated with multiple labels simultaneously. Like other data mining and machine learning tasks, multilabel learning also suffers from the curse of dimensionality. Dimensionality reduction has been studied for many years, however, multilabel dimensionality reduction remains almost untouched. In this article, we propose a multilabel dimensionality reduction method, MDDM, with two kinds of projection strategies, attempting to project the original data into a lower-dimensional feature space maximizing the dependence between the original feature description and the associated class labels. Based on the Hilbert-Schmidt Independence Criterion, we derive a eigen-decomposition problem which enables the dimensionality reduction process to be efficient. Experiments validate the performance of MDDM.", "title": "" }, { "docid": "37936de50a1d3fa8612a465b6644c282", "text": "Nature uses a limited, conservative set of amino acids to synthesize proteins. The ability to genetically encode an expanded set of building blocks with new chemical and physical properties is transforming the study, manipulation and evolution of proteins, and is enabling diverse applications, including approaches to probe, image and control protein function, and to precisely engineer therapeutics. Underpinning this transformation are strategies to engineer and rewire translation. Emerging strategies aim to reprogram the genetic code so that noncanonical biopolymers can be synthesized and evolved, and to test the limits of our ability to engineer the translational machinery and systematically recode genomes.", "title": "" }, { "docid": "714b5db0d1f146c5dde6e4c01de59be9", "text": "Coilgun electromagnetic launchers have capability for low and high speed applications. Through the development of four guns having projectiles ranging from 10 g to 5 kg and speeds up to 1 km/s, Sandia National Laboratories has succeeded in coilgun design and operations, validating the computational codes and basis for gun system control. 
Coilguns developed at Sandia consist of many coils stacked end-to-end forming a barrel, with each coil energized in sequence to create a traveling magnetic wave that accelerates a projectile. Active tracking of the projectile location during launch provides precise feedback to control when the coils arc triggered to create this wave. However, optimum performance depends also on selection of coil parameters. This paper discusses issues related to coilgun design and control such as tradeoffs in geometry and circuit parameters to achieve the necessary current risetime to establish the energy in the coils. The impact of switch jitter on gun performance is also assessed for high-speed applications.", "title": "" }, { "docid": "81672984e2d94d7a06ffe930136647a3", "text": "Social network sites provide the opportunity for bu ilding and maintaining online social network groups around a specific interest. Despite the increasing use of social networks in higher education, little previous research has studied their impacts on stud en ’s engagement and on their perceived educational outcomes. This research investigates the impact of instructors’ self-disclosure and use of humor via course-based social networks as well as their credi bility, and the moderating impact of time spent in hese course-based social networks, on the students’ enga g ment in course-based social networks. The researc h provides a theoretical viewpoint, supported by empi rical evidence, on the impact of students’ engageme nt in course-based social networks on their perceived educational outcomes. The findings suggest that instructors who create course-based online social n etworks to communicate with their students can increase their engagement, motivation, and satisfac on. We conclude the paper by suggesting the theoretical implications for the study and by provi ding strategies for instructors to adjust their act ivities in order to succeed in improving their students’ engag ement and educational outcomes.", "title": "" }, { "docid": "9889cb9ae08cd177e6fa55c3ae7b8831", "text": "Design and developmental procedure of strip-line based 1.5 MW, 30-96 MHz, ultra-wideband high power 3 dB hybrid coupler has been presented and its applicability in ion cyclotron resonance heating (ICRH) in tokamak is discussed. For the high power handling capability, spacing between conductors and ground need to very high. Hence other structural parameters like strip-width, strip thickness coupling gap, and junction also become large which can be gone upto optimum limit where various constrains like fabrication tolerance, discontinuities, and excitation of higher TE and TM modes become prominent and significantly deteriorates the desired parameters of the coupled lines system. In designed hybrid coupler, two 8.34 dB coupled lines are connected in tandem to get desired coupling of 3 dB and air is used as dielectric. The spacing between ground and conductors are taken as 0.164 m for 1.5 MW power handling capability. 
To have the desired spacing, each of 8.34 dB segments are designed with inner dimension of 3.6 × 1.0 × 40 cm where constraints have been significantly realized, compensated, and applied in designing of 1.5 MW hybrid coupler and presented in paper.", "title": "" }, { "docid": "ce0ba4696c26732ac72b346f72af7456", "text": "OBJECTIVE\nThe purpose of this study was to examine the relationship between two forms of helping behavior among older adults--informal caregiving and formal volunteer activity.\n\n\nMETHODS\nTo evaluate our hypotheses, we employed Tobit regression models to analyze panel data from the first two waves of the Americans' Changing Lives survey.\n\n\nRESULTS\nWe found that older adult caregivers were more likely to be volunteers than noncaregivers. Caregivers who provided a relatively high number of caregiving hours annually reported a greater number of volunteer hours than did noncaregivers. Caregivers who provided care to nonrelatives were more likely than noncaregivers to be a volunteer and to volunteer more hours. Finally, caregivers were more likely than noncaregivers to be asked to volunteer.\n\n\nDISCUSSION\nOur results provide support for the hypothesis that caregivers are embedded in networks that provide them with more opportunities for volunteering. Additional research on the motivations for volunteering and greater attention to the context and hierarchy of caregiving and volunteering are needed.", "title": "" }, { "docid": "e3c0073428eb554c1341b5ba3af3918a", "text": "Technological Pedagogical Content Knowledge (TPACK) has been introduced as a conceptual framework for the knowledge base teachers need to effectively teach with technology. The framework stems from the notion that technology integration in a specific educational context benefits from a careful alignment of content, pedagogy and the potential of technology, and that teachers who want to integrate technology in their teaching practice therefore need to be competent in all three domains. This study is a systematic literature review about TPACK of 55 peer-reviewed journal articles (and one book chapter), published between 2005 and 2011. The purpose of the review was to investigate the theoretical basis and the practical use of TPACK. Findings showed different understandings of TPACK and of technological knowledge. Implications of these different views impacted the way TPACK was measured. Notions about TPACK in subject domains were hardly found in the studies selected for this review. Teacher knowledge (TPACK) and beliefs about pedagogy and technology are intertwined. Both determine whether a teacher decides to teach with technology. Active involvement in (re)design and enactment of technology-enhanced lessons was found as a promising strategy for the development of TPACK in (student-)teachers. Future directions for research are discussed.", "title": "" }, { "docid": "aca8b1efb729bdc45f5363cb663dba74", "text": "Along with the burst of open source projects, software theft (or plagiarism) has become a very serious threat to the healthiness of software industry. Software birthmark, which represents the unique characteristics of a program, can be used for software theft detection. We propose a system call dependence graph based software birthmark called SCDG birthmark, and examine how well it reflects unique behavioral characteristics of a program. 
To our knowledge, our detection system based on SCDG birthmark is the first one that is capable of detecting software component theft where only partial code is stolen. We demonstrate the strength of our birthmark against various evasion techniques, including those based on different compilers and different compiler optimization levels as well as two state-of-the-art obfuscation tools. Unlike the existing work that were evaluated through small or toy software, we also evaluate our birthmark on a set of large software. Our results show that SCDG birthmark is very practical and effective in detecting software theft that even adopts advanced evasion techniques.", "title": "" }, { "docid": "c9c4ed4a7e8e6ef8ca2bcf146001d2e5", "text": "Microblogging services such as Twitter are said to have the potential for increasing political participation. Given the feature of 'retweeting' as a simple yet powerful mechanism for information diffusion, Twitter is an ideal platform for users to spread not only information in general but also political opinions through their networks as Twitter may also be used to publicly agree with, as well as to reinforce, someone's political opinions or thoughts. Besides their content and intended use, Twitter messages ('tweets') also often convey pertinent information about their author's sentiment. In this paper, we seek to examine whether sentiment occurring in politically relevant tweets has an effect on their retweetability (i.e., how often these tweets will be retweeted). Based on a data set of 64,431 political tweets, we find a positive relationship between the quantity of words indicating affective dimensions, including positive and negative emotions associated with certain political parties or politicians, in a tweet and its retweet rate. Furthermore, we investigate how political discussions take place in the Twitter network during periods of political elections with a focus on the most active and most influential users. Finally, we conclude by discussing the implications of our results.", "title": "" }, { "docid": "5df3346cb96403ee932428d159ad342e", "text": "Nearly 40% of mortality in the United States is linked to social and behavioral factors such as smoking, diet and sedentary lifestyle. Autonomous self-regulation of health-related behaviors is thus an important aspect of human behavior to assess. In 1997, the Behavior Change Consortium (BCC) was formed. Within the BCC, seven health behaviors, 18 theoretical models, five intervention settings and 26 mediating variables were studied across diverse populations. One of the measures included across settings and health behaviors was the Treatment Self-Regulation Questionnaire (TSRQ). The purpose of the present study was to examine the validity of the TSRQ across settings and health behaviors (tobacco, diet and exercise). The TSRQ is composed of subscales assessing different forms of motivation: amotivation, external, introjection, identification and integration. Data were obtained from four different sites and a total of 2731 participants completed the TSRQ. Invariance analyses support the validity of the TSRQ across all four sites and all three health behaviors. Overall, the internal consistency of each subscale was acceptable (most alpha values >0.73). 
The present study provides further evidence of the validity of the TSRQ and its usefulness as an assessment tool across various settings and for different health behaviors.", "title": "" }, { "docid": "26d7cf1e760e9e443f33ebd3554315b6", "text": "The arrival of a multinational corporation often looks like a death sentence to local companies in an emerging market. After all, how can they compete in the face of the vast financial and technological resources, the seasoned management, and the powerful brands of, say, a Compaq or a Johnson & Johnson? But local companies often have more options than they might think, say the authors. Those options vary, depending on the strength of globalization pressures in an industry and the nature of a company's competitive assets. In the worst case, when globalization pressures are strong and a company has no competitive assets that it can transfer to other countries, it needs to retreat to a locally oriented link within the value chain. But if globalization pressures are weak, the company may be able to defend its market share by leveraging the advantages it enjoys in its home market. Many companies in emerging markets have assets that can work well in other countries. Those that operate in industries where the pressures to globalize are weak may be able to extend their success to a limited number of other markets that are similar to their home base. And those operating in global markets may be able to contend head-on with multinational rivals. By better understanding the relationship between their company's assets and the industry they operate in, executives from emerging markets can gain a clearer picture of the options they really have when multinationals come to stay.", "title": "" }, { "docid": "adaab9f6e0355af12f4058a350076f87", "text": "Recently, the fusion of hyperspectral and light detection and ranging (LiDAR) data has obtained a great attention in the remote sensing community. In this paper, we propose a new feature fusion framework using deep neural network (DNN). The proposed framework employs a novel 3D convolutional neural network (CNN) to extract the spectral-spatial features of hyperspectral data, a deep 2D CNN to extract the elevation features of LiDAR data, and then a fully connected deep neural network to fuse the extracted features in the previous CNNs. Through the aforementioned three deep networks, one can extract the discriminant and invariant features of hyperspectral and LiDAR data. At last, logistic regression is used to produce the final classification results. The experimental results reveal that the proposed deep fusion model provides competitive results. Furthermore, the proposed deep fusion idea opens a new window for future research.", "title": "" }, { "docid": "b83eb2f78c4b48cf9b1ca07872d6ea1a", "text": "Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated mid-dleboxes to general purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV needs to encompass also the radio access network allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such requirement raises several challenges in terms of performance isolation and resource provisioning. 
In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and we propose a VNF placement heuristic. Moreover, we also present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging on general computing platforms.", "title": "" }, { "docid": "078ba976d84d15da757f3f5e165927d9", "text": "Evolutionary algorithms often have to solve optimization problems in the presence of a wide range of uncertainties. Generally, uncertainties in evolutionary computation can be divided into the following four categories. First, the fitness function is noisy. Second, the design variables and/or the environmental parameters may change after optimization, and the quality of the obtained optimal solution should be robust against environmental changes or deviations from the optimal point. Third, the fitness function is approximated, which means that the fitness function suffers from approximation errors. Fourth, the optimum of the problem to be solved changes over time and, thus, the optimizer should be able to track the optimum continuously. In all these cases, additional measures must be taken so that evolutionary algorithms are still able to work satisfactorily. This paper attempts to provide a comprehensive overview of the related work within a unified framework, which has been scattered in a variety of research areas. Existing approaches to addressing different uncertainties are presented and discussed, and the relationship between the different categories of uncertainties are investigated. Finally, topics for future research are suggested.", "title": "" }, { "docid": "c4094c8b273d6332f36b6f452886de6a", "text": "This paper presents original research on prevalence, user characteristics and effect profile of N,N-dimethyltryptamine (DMT), a potent hallucinogenic which acts primarily through the serotonergic system. Data were obtained from the Global Drug Survey (an anonymous online survey of people, many of whom have used drugs) conducted between November and December 2012 with 22,289 responses. Lifetime prevalence of DMT use was 8.9% (n=1980) and past year prevalence use was 5.0% (n=1123). We explored the effect profile of DMT in 472 participants who identified DMT as the last new drug they had tried for the first time and compared it with ratings provided by other respondents on psilocybin (magic mushrooms), LSD and ketamine. DMT was most often smoked and offered a strong, intense, short-lived psychedelic high with relatively few negative effects or \"come down\". It had a larger proportion of new users compared with the other substances (24%), suggesting its popularity may increase. Overall, DMT seems to have a very desirable effect profile indicating a high abuse liability that maybe offset by a low urge to use more.", "title": "" }, { "docid": "4b5b09ee38c87fdf7031f90530460d81", "text": "With the increasing adoption of Web Services and service-oriented computing paradigm, matchmaking of web services with the request has become a significant task. This warrants the need to establish an effective and reliable Web Service discovery. Here reducing the service discovery time and increasing the quality of discovery are key issues. 
This paper proposes a new semantic Web Service discovery scheme where the similarity between the query and service is decided using the WSDL specification and ontology, and the improved Hungarian algorithm is applied to quickly find the maximum match. The proposed approach utilizes the structure of datatype and operation, and natural language description used for information retrieval. Computer simulation reveals that the proposed scheme substantially increases the quality of service discovery compared to the existing schemes in terms of precision, recall rate, and F-measure. Moreover, the proposed scheme allows consistently smaller discovery time, while the improvement gets more significant as the number of compared parameters increases.", "title": "" } ]
scidocsrr
162ce68b88ea90b547036e7048071c4f
ADAPTIVE PREDICTION TIME FOR SEQUENCE CLASSIFICATION
[ { "docid": "8306c40722bb956253c6e7cf112836d7", "text": "Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach of reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q&A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy.", "title": "" }, { "docid": "75b64f9106b2c334c572bc3180d93aef", "text": "This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions.", "title": "" }, { "docid": "2db49e1c2020875f2453d4b614fd2116", "text": "Text Categorization (TC), also known as Text Classification, is the task of automatically classifying a set of text documents into different categories from a predefined set. If a document belongs to exactly one of the categories, it is a single-label classification task; otherwise, it is a multi-label classification task. TC uses several tools from Information Retrieval (IR) and Machine Learning (ML) and has received much attention in the last years from both researchers in the academia and industry developers. In this paper, we first categorize the documents using KNN based machine learning approach and then return the most relevant documents.", "title": "" } ]
[ { "docid": "6533ee7e13ab293f33f1747adff92fe5", "text": "The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its farreaching application, there is almost no work on applying stochastic approximation to learning problems with general constraints. The reason for this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.", "title": "" }, { "docid": "94013936968a4864167ed4e764398deb", "text": "A prime requirement for autonomous driving is a fast and reliable estimation of the motion state of dynamic objects in the ego-vehicle's surroundings. An instantaneous approach for extended objects based on two Doppler radar sensors has recently been proposed. In this paper, that approach is augmented by prior knowledge of the object's heading angle and rotation center. These properties can be determined reliably by state-of-the-art methods based on sensors such as LIDAR or cameras. The information fusion is performed utilizing an appropriate measurement model, which directly maps the motion state in the Doppler velocity space. This model integrates the geometric properties. It is used to estimate the object's motion state using a linear regression. Additionally, the model allows a straightforward calculation of the corresponding variances. The resulting method shows a promising accuracy increase of up to eight times greater than the original approach.", "title": "" }, { "docid": "5f8b0a15477bf0ee5787269a578988c6", "text": "Suppose your netmail is being erratically censored by Captain Yossarian. Whenever you send a message, he censors each bit of the message with probability 1/2, replacing each censored bit by some reserved character. Well versed in such concepts as redundancy, this is no real problem to you. The question is, can it actually be turned around and used to your advantage? We answer this question strongly in the affirmative. We show that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol, known as oblivious circuit evaluation([Y]). We also show that with such a communication channel, one can have completely noninteractive zero-knowledge proofs of statements in NP. These results do not use any complexity-theoretic assumptions. We can show that they have applications to a variety of models in which oblivious transfer can be done.", "title": "" }, { "docid": "328a3e05fac7d118a99afd6197dac918", "text": "Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. 
E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%.", "title": "" }, { "docid": "f59fd6af9dea570b49c453de02297f4c", "text": "OBJECTIVES\nThe role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0.Multiple studies illustrated the use of information in social media to discover biomedical and health-related knowledge.Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words.These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data where sparsity and noise are norms.This paper aims to address the limitations posed by the traditional bag-of-word based methods and propose to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove to be useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data.Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data.\n\n\nMETHODOLOGY\nSocial media data is characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically.The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise.We first identify the limitations of the traditional methods which train machines with N-gram word features, then propose to overcome such limitations by utilizing the collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data.The parameter analysis for tuning each classifier is also reported.\n\n\nDATA SETS\nThree data sets are used in this research.The first data set comprises of approximately 5000 hand-labeled tweets, and is used for cross validation of the classification models in the small scale experiment, and for training the classifiers in the real-world large scale experiment.The second data set is a random sample of real-world Twitter data in the US.The third data set is a random sample of real-world Facebook Timeline posts.\n\n\nEVALUATIONS\nTwo sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small scale and large scale evaluations.The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models, and to compare with the stage-of-the-art method.The large scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media.\n\n\nFINDINGS\nThe small scale experiment reveals that the proposed method is able to mitigate the limitations in the well established techniques existing in the literature, resulting in performance improvement of 18.61% (F-measure).The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of 
heterogeneity, while the proposed method is able to yield reasonably good performance and outperform the baseline by 46.62% (F-Measure) on average.", "title": "" }, { "docid": "5c26713d33001fc91ce19f551adac492", "text": "Recurrent neural network language models (RNNLMs) have recently become increasingly popular for many applications i ncluding speech recognition. In previous research RNNLMs have normally been trained on well-matched in-domain data. The adaptation of RNNLMs remains an open research area to be explored. In this paper, genre and topic based RNNLM adaptation techniques are investigated for a multi-genre broad cast transcription task. A number of techniques including Proba bilistic Latent Semantic Analysis, Latent Dirichlet Alloc ation and Hierarchical Dirichlet Processes are used to extract sh ow level topic information. These were then used as additional input to the RNNLM during training, which can facilitate unsupervised test time adaptation. Experiments using a state-o f-theart LVCSR system trained on 1000 hours of speech and more than 1 billion words of text showed adaptation could yield pe rplexity reductions of 8% relatively over the baseline RNNLM and small but consistent word error rate reductions.", "title": "" }, { "docid": "9e2dc31edf639e1201c3a3d59f3381af", "text": "The AMBA-AHB Multilayer Bus matrix Self-Motivated Arbitration scheme proposed three methods for data transmiting from master to slave for on chip communication. Multilayer advanced high-performance bus (ML-AHB) busmatrix employs slave-side arbitration. Slave-side arbitration is different from master-side arbitration in terms of request and grant signals since, in the former, the master merely starts a burst transaction and waits for the slave response to proceed to the next transfer. Therefore, in the former, the unit of arbitration can be a transaction or a transfer. However, the ML-AHB busmatrix of ARM offers only transferbased fixed-pri-ority and round-robin arbitration schemes. In this paper, we propose the design and implementation of a flexible arbiter for the ML-AHB busmatrix to support three priority policies fixed priority, round robin, and dynamic priority and three data multiplexing modes transfer, transaction, and desired transfer length. In total, there are nine possible arbitration schemes. The proposed arbiter, which is self-motivated (SM), selects one of the nine possible arbitration schemes based upon the priority-level notifications and the desired transfer length from the masters so that arbitration leads to the maximum performance. Experimental results show that, although the area overhead of the proposed SM arbitration scheme is 9%–25% larger than those of the other arbitration schemes, our arbiter improves the throughput by 14%–62% compared to other schemes.", "title": "" }, { "docid": "58f505558cda55abf70b143d52030a2d", "text": "Given a finite set of points P ⊆ R, we would like to find a small subset S ⊆ P such that the convex hull of S approximately contains P . More formally, every point in P is within distance from the convex hull of S. Such a subset S is called an -hull. Computing an -hull is an important problem in computational geometry, machine learning, and approximation algorithms. In many applications, the set P is too large to fit in memory. We consider the streaming model where the algorithm receives the points of P sequentially and strives to use a minimal amount of memory. 
Existing streaming algorithms for computing an -hull require O( (1−d)/2) space, which is optimal for a worst-case input. However, this ignores the structure of the data. The minimal size of an -hull of P , which we denote by OPT, can be much smaller. A natural question is whether a streaming algorithm can compute an -hull using only O(OPT) space. We begin with lower bounds that show, under a reasonable streaming model, that it is not possible to have a single-pass streaming algorithm that computes an -hull with O(OPT) space. We instead propose three relaxations of the problem for which we can compute -hulls using space near-linear to the optimal size. Our first algorithm for points in R2 that arrive in random-order uses O(logn ·OPT) space. Our second algorithm for points in R2 makes O(log( −1)) passes before outputting the -hull and requires O(OPT) space. Our third algorithm, for points in R for any fixed dimension d, outputs, with high probability, an -hull for all but δ-fraction of directions and requires O(OPT · log OPT) space. 1 This work was supported in part by the National Science Foundation under grant CCF-1525971. Work was done while the author was at Carnegie Mellon University. 2 This material is based upon work supported in part by the National Science Foundation under Grants No. 1447639, 1650041 and 1652257, Cisco faculty award, and by the ONR Award N00014-18-1-2364. 3 Now at DeepMind. 4 This research was supported by the Franco-American Fulbright Commission and supported in part by National Science Foundation under Grant No. 1447639, 1650041 and 1652257. The author thanks INRIA (l’Institut national de recherche en informatique et en automatique) for hosting him during the writing of this paper. 5 This material is based upon work supported in part by National Science Foundation under Grant No. 1447639, 1650041 and 1652257. Work was done while the author was at Johns Hopkins University. EA T C S © Avrim Blum, Vladimir Braverman, Ananya Kumar, Harry Lang, and Lin F. Yang; licensed under Creative Commons License CC-BY 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Editors: Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella; Article No. 21; pp. 21:1–21:13 Leibniz International Proceedings in Informatics Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany 21:2 Approximate Convex Hull of Data Streams 2012 ACM Subject Classification Theory of computation → Computational geometry, Theory of computation → Sketching and sampling, Theory of computation → Streaming models", "title": "" }, { "docid": "3259c90b96b3ebbe885f73c2febe863d", "text": "Human-Following robots are being actively researched for their immense potential to carry out mundane tasks like load carrying and monitoring of target individual through interaction and collaboration. The recent advancements in vision and sensor technologies have helped in creating more user-friendly robots that are able to coexist with humans by leveraging the sensors for human detection, human movement estimation, collision avoidance, and obstacle avoidance. But most of these sensors are suitable only for Line of Sight following of human. In the case of loss of sight of the target, most of them fail to re-acquire their target. 
In this paper, we are proposing a novel method to develop a human following robot using Bluetooth and Inertial Measurement Unit (IMU) on Smartphones which can work under high interference environment and can reacquire the target when lost. The proposed method leverages IMU sensors on the smartphone to estimate the direction of human movement while estimating the distance traveled from the RSSI of the Bluetooth. Thus, the Follow Me robot which estimates the position of target human and direction of heading and effectively track the person was implemented using Smartphone on a differential drive robot.", "title": "" }, { "docid": "ab8599cbe4b906cea6afab663cbe2caf", "text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.", "title": "" }, { "docid": "f24bba45a1905cd4658d52bc7e9ee046", "text": "In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, QualityDiversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient-descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments. Supplementary videos and discussion can be found at frama.link/gep_pg, the code at github.com/flowersteam/geppg.", "title": "" }, { "docid": "5cbd331652b69714bc4ff0eeacc8f85a", "text": "A survey was conducted from May to Oct of 2011 of the parasitoid community of the imported cabbageworm, Pieris rapae (Lepidoptera: Pieridae), in cole crops in part of the eastern United States and southeastern Canada. The findings of our survey indicate that Cotesia rubecula (Hymenoptera: Braconidae) now occurs as far west as North Dakota and has become the dominant parasitoid of P. rapae in the northeastern and north central United States and adjacent parts of southeastern Canada, where it has displaced the previously common parasitoid Cotesia glomerata (Hymenoptera: Braconidae). Cotesia glomerata remains the dominant parasitoid in the mid-Atlantic states, from Virginia to North Carolina and westward to southern Illinois, below latitude N 38° 48’. This pattern suggests that the released populations of C. 
rubecula presently have a lower latitudinal limit south of which they are not adapted.", "title": "" }, { "docid": "1757c61b82376d05a869034b2c3e8455", "text": "DMA-capable interconnects, providing ultra-low latency and high bandwidth, are increasingly being used in the context of distributed storage and data processing systems. However, the deployment of such systems in virtualized data centers is currently inhibited by the lack of a flexible and high-performance virtualization solution for RDMA network interfaces.\n In this work, we present a hybrid virtualization architecture which builds upon the concept of separation of paths for control and data operations available in RDMA. With hybrid virtualization, RDMA control operations are virtualized using hypervisor involvement, while data operations are set up to bypass the hypervisor completely. We describe HyV (Hybrid Virtualization), a virtualization framework for RDMA devices implementing such a hybrid architecture. In the paper, we provide a detailed evaluation of HyV for different RDMA technologies and operations. We further demonstrate the advantages of HyV in the context of a real distributed system by running RAMCloud on a set of HyV-enabled virtual machines deployed across a 6-node RDMA cluster. All of the performance results we obtained illustrate that hybrid virtualization enables bare-metal RDMA performance inside virtual machines while retaining the flexibility typically associated with paravirtualization.", "title": "" }, { "docid": "49f21df66ac901e5f37cff022353ed20", "text": "This paper presents the implementation of the interval type-2 to control the process of production of High-strength low-alloy (HSLA) steel in a secondary metallurgy process in a simply way. The proposal evaluate fuzzy techniques to ensure the accuracy of the model, the most important advantage is that the systems do not need pretreatment of the historical data, it is used as it is. The system is a multiple input single output (MISO) and the main goal of this paper is the proposal of a system that optimizes the resources: computational, time, among others.", "title": "" }, { "docid": "e50320cfddc32a918389fbf8707d599f", "text": "Psilocybin, an indoleamine hallucinogen, produces a psychosis-like syndrome in humans that resembles first episodes of schizophrenia. In healthy human volunteers, the psychotomimetic effects of psilocybin were blocked dose-dependently by the serotonin-2A antagonist ketanserin or the atypical antipsychotic risperidone, but were increased by the dopamine antagonist and typical antipsychotic haloperidol. These data are consistent with animal studies and provide the first evidence in humans that psilocybin-induced psychosis is due to serotonin-2A receptor activation, independently of dopamine stimulation. Thus, serotonin-2A overactivity may be involved in the pathophysiology of schizophrenia and serotonin-2A antagonism may contribute to therapeutic effects of antipsychotics.", "title": "" }, { "docid": "01ea69cfc6b81e431717c6b090df37b0", "text": "Physical trauma to the brain has always been known to affect brain functions and subsequent neurobiological development. Research primarily since the early 1990s has shown that psychological trauma can have detrimental effects on brain function that are not only lasting but that may alter patterns of subsequent neurodevelopment, particularly in children although developmental effects may be seen in adults as well. 
Childhood trauma produces a diverse range of symptoms and defining the brain's response to trauma and the factors that mediate the body's stress response systems is at the forefront of scientific investigation. This paper reviews the current evidence relating psychological trauma to anatomical and functional changes in the brain and discusses the need for accurate diagnosis and treatment to minimize such effects and to recognize their existence in developing treatment programs.", "title": "" }, { "docid": "c66e38f3be7760c8ca0b6ef2dfc5bec2", "text": "Gesture recognition remains a very challenging task in the field of computer vision and human computer interaction (HCI). A decade ago the task seemed to be almost unsolvable with the data provided by a single RGB camera. Due to recent advances in sensing technologies, such as time-of-flight and structured light cameras, there are new data sources available, which make hand gesture recognition more feasible. In this work, we propose a highly precise method to recognize static gestures from a depth data, provided from one of the above mentioned devices. The depth images are used to derive rotation-, translation- and scale-invariant features. A multi-layered random forest (MLRF) is then trained to classify the feature vectors, which yields to the recognition of the hand signs. The training time and memory required by MLRF are much smaller, compared to a simple random forest with equivalent precision. This allows to repeat the training procedure of MLRF without significant effort. To show the advantages of our technique, we evaluate our algorithm on synthetic data, on publicly available dataset, containing 24 signs from American Sign Language(ASL) and on a new dataset, collected using recently appeared Intel Creative Gesture Camera.", "title": "" }, { "docid": "6990c4f7bde94cb0e14245872e670f91", "text": "The UK's recent move to polymer banknotes has seen some of the currently used fingermark enhancement techniques for currency potentially become redundant, due to the surface characteristics of the polymer substrates. Possessing a non-porous surface with some semi-porous properties, alternate processes are required for polymer banknotes. This preliminary investigation explored the recovery of fingermarks from polymer notes via vacuum metal deposition using elemental copper. The study successfully demonstrated that fresh latent fingermarks, from an individual donor, could be clearly developed and imaged in the near infrared. By varying the deposition thickness of the copper, the contrast between the fingermark minutiae and the substrate could be readily optimised. Where the deposition thickness was thin enough to be visually indistinguishable, forensic gelatin lifters could be used to lift the fingermarks. These lifts could then be treated with rubeanic acid to produce a visually distinguishable mark. The technique has shown enough promise that it could be effectively utilised on other semi- and non-porous substrates.", "title": "" }, { "docid": "cd11e079db25441a1a5801c71fcff781", "text": "Quad-robot type (QRT) unmanned aerial vehicles (UAVs) have been developed for quick detection and observation of the circumstances under calamity environment such as indoor fire spots. 
The UAV is equipped with four propellers driven by each electric motor, an embedded controller, an Inertial Navigation System (INS) using three rate gyros and accelerometers, a CCD (Charge Coupled Device) camera with wireless communication transmitter for observation, and an ultrasonic range sensor for height control. Accurate modeling and robust flight control of QRT UAVs are mainly discussed in this work. Rigorous dynamic model of a QRT UAV is obtained both in the reference and body frame coordinate systems. A disturbance observer (DOB) based controller using the derived dynamic models is also proposed for robust hovering control. The control input induced by DOB is helpful to use simple equations of motion satisfying accurately derived dynamics. The developed hovering robot shows stable flying performances under the adoption of DOB and the vision based localization method. Although a model is incorrect, DOB method can design a controller by regarding the inaccurate part of the model J. Kim Department of Mechanical Engineering, Seoul National University of Technology, Seoul, South Korea e-mail: [email protected] M.-S. Kang Department of Mechatronics Engineering, Hanyang University, Ansan, South Korea e-mail: [email protected] S. Park (B) Division of Applied Robot Technology, Korea Institute of Industrial Technology, Ansan, South Korea e-mail: [email protected] 10 J Intell Robot Syst (2010) 57:9–26 and sensor noises as disturbances. The UAV can also avoid obstacles using eight IR (Infrared) and four ultrasonic range sensors. This kind of micro UAV can be widely used in various calamity observation fields without danger of human beings under harmful environment. The experimental results show the performance of the proposed control algorithm.", "title": "" } ]
scidocsrr
6ebaf2722502a9553803a05b66bfa95e
There's No Free Lunch, Even Using Bitcoin: Tracking the Popularity and Profits of Virtual Currency Scams
[ { "docid": "bc8b40babfc2f16144cdb75b749e3a90", "text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.", "title": "" }, { "docid": "8ee24b38d7cf4f63402cd4f2c0beaf79", "text": "At the current stratospheric value of Bitcoin, miners with access to significant computational horsepower are literally printing money. For example, the first operator of a USD $1,500 custom ASIC mining platform claims to have recouped his investment in less than three weeks in early February 2013, and the value of a bitcoin has more than tripled since then. Not surprisingly, cybercriminals have also been drawn to this potentially lucrative endeavor, but instead are leveraging the resources available to them: stolen CPU hours in the form of botnets. We conduct the first comprehensive study of Bitcoin mining malware, and describe the infrastructure and mechanism deployed by several major players. By carefully reconstructing the Bitcoin transaction records, we are able to deduce the amount of money a number of mining botnets have made.", "title": "" } ]
[ { "docid": "091c57447d5a3c97d3ff1afb57ebb4e3", "text": "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.", "title": "" }, { "docid": "7a6ae2e12dbd9f4a0a3355caec648ca7", "text": "Near Field Communication (NFC) is an emerging wireless short-range communication technology that is based on existing standards of the Radio Frequency Identification (RFID) infrastructure. In combination with NFC-capable smartphones it enables intuitive application scenarios for contactless transactions, in particular services for mobile payment and over-theair ticketing. The intention of this paper is to describe basic characteristics and benefits of the underlaying technology, to classify modes of operation and to present various use cases. Both existing NFC applications and possible future scenarios will be analyzed in this context. Furthermore, security concerns, challenges and present conflicts will be discussed eventually.", "title": "" }, { "docid": "2bdfeabf15a4ca096c2fe5ffa95f3b17", "text": "This paper studies how to incorporate the external word correlation knowledge to improve the coherence of topic modeling. Existing topic models assume words are generated independently and lack the mechanism to utilize the rich similarity relationships among words to learn coherent topics. To solve this problem, we build a Markov Random Field (MRF) regularized Latent Dirichlet Allocation (LDA) model, which defines a MRF on the latent topic layer of LDA to encourage words labeled as similar to share the same topic label. Under our model, the topic assignment of each word is not independent, but rather affected by the topic labels of its correlated words. Similar words have better chance to be put into the same topic due to the regularization of MRF, hence the coherence of topics can be boosted. In addition, our model can accommodate the subtlety that whether two words are similar depends on which topic they appear in, which allows word with multiple senses to be put into different topics properly. We derive a variational inference method to infer the posterior probabilities and learn model parameters and present techniques to deal with the hardto-compute partition function in MRF. Experiments on two datasets demonstrate the effectiveness of our model.", "title": "" }, { "docid": "4a9da1575b954990f98e6807deae469e", "text": "Recently, there has been considerable debate concerning key sizes for publ i c key based cry p t o graphic methods. 
Included in the debate have been considerations about equivalent key sizes for different methods and considerations about the minimum required key size for different methods. In this paper we propose a method of analyzing key sizes based upon the value of the data being protected and the cost of breaking keys. I. Introduction. A. WHY IS KEY SIZE IMPORTANT? In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question as one is now left with the task of defining 'infeasible'. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key breaking effort makes a key vulnerable that such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts or does one only need to be concerned about possible private, surreptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper. Number 13, April 2000. Bulletin: News and Advice from RSA Laboratories. I. Introduction; II. Methods of Attack; III. Historical Results and the RSA Challenge; IV. Security Estimates.", "title": "" }, { "docid": "ae6d36ccbf79ae6f62af3a62ef3e3bb2", "text": "This paper presents a new neural network system called the Evolving Tree. This network resembles the Self-Organizing Map, but deviates from it in several aspects, which are desirable in many analysis tasks. First of all the Evolving Tree grows automatically, so the user does not have to decide the network's size before training. Secondly the network has a hierarchical structure, which makes network training and use computationally very efficient. Test results with both synthetic and actual data show that the Evolving Tree works quite well.", "title": "" }, { "docid": "7d5d2f819a5b2561db31645d534836b8", "text": "Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to model the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, clarifying what guarantees can and cannot be associated with such a structure.", "title": "" }, { "docid": "1eba8eccf88ddb44a88bfa4a937f648f", "text": "We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixelwise class labels with a measure of model uncertainty using Bayesian deep learning. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. 
In addition, we show that modelling uncertainty improves segmentation performance by 2-3% across a number of datasets and architectures such as SegNet, FCN, Dilation Network and DenseNet.", "title": "" }, { "docid": "0d747bd516498ae314e3197b7e7ad1e3", "text": "Neurotoxins and fillers continue to remain in high demand, comprising a large part of the growing business of cosmetic minimally invasive procedures. Multiple Food and Drug Administration-approved safe yet different products exist within each category, and the role of each product continues to expand. The authors review the literature to provide an overview of the use of neurotoxins and fillers and their future directions.", "title": "" }, { "docid": "2edcf1a54bded9a77345cbe88cc02533", "text": "Although the uncanny exists, the inherent, unavoidable dip (or valley) may be an illusion. Extremely abstract robots can be uncanny if the aesthetic is off, as can cosmetically atypical humans. Thus, the uncanny occupies a continuum ranging from the abstract to the real, although norms of acceptability may narrow as one approaches human likeness. However, if the aesthetic is right, any level of realism or abstraction can be appealing. If so, then avoiding or creating an uncanny effect just depends on the quality of the aesthetic design, regardless of the level of realism. The author’s preliminary experiments on human reaction to near-realistic androids appear to support this hypothesis.", "title": "" }, { "docid": "56998c03c373dfae07460a7b731ef03e", "text": "52 This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/ by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis", "title": "" }, { "docid": "a084e7dd5485e01d97ccf628bc00d644", "text": "A novel concept called gesture-changeable under-actuated (GCUA) function is proposed to improve the dexterities of traditional under-actuated hands and reduce the control difficulties of dexterous hands. Based on the GCUA function, a new humanoid robot hand, GCUA Hand is designed and manufactured. The GCUA Hand can grasp different objects self-adaptively and change its initial gesture dexterously before contacting objects. The hand has 5 fingers and 15 DOFs, each finger is based on screw-nut transmission, flexible drawstring constraint and belt-pulley under-actuated mechanism to realize GCUA function. The analyses on grasping static forces and grasping stabilities are put forward. The analyses and Experimental results show that the GCUA function is very nice and valid. The hands with the GCUA function can meet the requirements of grasping and operating with lower control and cost, which is the middle road between traditional under-actuated hands and dexterous hands.", "title": "" }, { "docid": "e7b42688ce3936604aefa581802040a4", "text": "Identity management through biometrics offer potential advantages over knowledge and possession based methods. A wide variety of biometric modalities have been tested so far but several factors paralyse the accuracy of mono modal biometric systems. Usually, the analysis of multiple modalities offers better accuracy. An extensive review of biometric technology is presented here. 
Besides the mono modal systems, the article also discusses multi modal biometric systems along with their architecture and information fusion levels. The paper along with the exemplary evidences highlights the potential for biometric technology, market value and prospects. Keywords— Biometrics, Fingerprint, Face, Iris, Retina, Behavioral biometrics, Gait, Voice, Soft biometrics, Multi-modal biometrics.", "title": "" }, { "docid": "69624e1501b897bf1a9f9a5a84132da3", "text": "360° videos and Head-Mounted Displays (HMDs) are geŠing increasingly popular. However, streaming 360° videos to HMDs is challenging. Œis is because only video content in viewers’ Fieldof-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra e‚orts to align the content and sensor data using the timestamps in the raw log €les. Œe resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven cameramovements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings ofMMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: hŠp://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming", "title": "" }, { "docid": "f519d349d928e7006955943043ab0eae", "text": "A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handing, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues. 
Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia.", "title": "" }, { "docid": "099a2ee305b703a765ff3579f0e0c1c3", "text": "To enhance the security of mobile cloud users, a few proposals have been presented recently. However we argue that most of them are not suitable for mobile cloud where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables the mobile users to enjoy a secure outsourced data services at a minimized security management overhead. The core idea of SDSM is that SDSM outsources not only the data but also the security management to the mobile cloud in a trust way. Our analysis shows that the proposed mechanism has many advantages over the existing traditional methods such as lower overhead and convenient update, which could better cater the requirements in mobile cloud computing scenarios.", "title": "" }, { "docid": "0e5a11ef4daeb969702e40ea0c50d7f3", "text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. 
(The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).", "title": "" }, { "docid": "08a6f27e905a732062ae585d8b324200", "text": "The advent of cost-effectiveness and easy-operation depth cameras has facilitated a variety of visual recognition tasks including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In the extensive experiments, the proposed approach achieves superior performance to the state-of-the-art methods on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.", "title": "" }, { "docid": "957a3970611470b611c024ed3b558115", "text": "SHARE is a unique panel database of micro data on health, socio-economic status and social and family networks covering most of the European Union and Israel. To date, SHARE has collected three panel waves (2004, 2006, 2010) of current living circumstances and retrospective life histories (2008, SHARELIFE); 6 additional waves are planned until 2024. The more than 150 000 interviews give a broad picture of life after the age of 50 years, measuring physical and mental health, economic and non-economic activities, income and wealth, transfers of time and money within and outside the family as well as life satisfaction and well-being. The data are available to the scientific community free of charge at www.share-project.org after registration. SHARE is harmonized with the US Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA) and has become a role model for several ageing surveys worldwide. SHARE's scientific power is based on its panel design that grasps the dynamic character of the ageing process, its multidisciplinary approach that delivers the full picture of individual and societal ageing, and its cross-nationally ex-ante harmonized design that permits international comparisons of health, economic and social outcomes in Europe and the USA.", "title": "" }, { "docid": "efe279fbc7307bc6a191ebb397b01823", "text": "Real-time traffic sign detection and recognition has been receiving increasingly more attention in recent years due to the popularity of driver-assistance systems and autonomous vehicles. This paper proposes an accurate and efficient traffic sign detection technique by exploring AdaBoost and support vector regression (SVR) for discriminative detector learning. Different from the reported traffic sign detection techniques, a novel saliency estimation approach is first proposed, where a new saliency model is built based on the traffic sign-specific color, shape, and spatial information. By incorporating the saliency information, enhanced feature pyramids are built to learn an AdaBoost model that detects a set of traffic sign candidates from images. 
A novel iterative codeword selection algorithm is then designed to generate a discriminative codebook for the representation of sign candidates, as detected by the AdaBoost, and an SVR model is learned to identify the real traffic signs from the detected sign candidates. Experiments on three public data sets show that the proposed traffic sign detection technique is robust and obtains superior accuracy and efficiency.", "title": "" }, { "docid": "764ebb7673237d152995a0b6ae34e82a", "text": "Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such as half the LOD, the LOD divided by the square root of 2, or zero. These methods for handling below-detection values results in two distributions, a uniform distribution for those values below the LOD, and the true distribution. As a result, this can produce questionable descriptive statistics depending upon the percentage of values below the LOD. An alternative method uses the characteristics of the distribution of the values above the LOD to estimate the values below the LOD. This can be done with an extrapolation technique or maximum likelihood estimation. An example program using the same data is presented calculating the mean, standard deviation, t-test, and relative difference in the means for various methods and compares the results. The extrapolation and maximum likelihood estimate techniques have smaller error rates than all the standard replacement techniques. Although more computational, these methods produce more reliable descriptive statistics.", "title": "" } ]
scidocsrr
856d1c7e556a5f1423113cb1d1243167
Mining big data using parsimonious factor, machine learning, variable selection and shrinkage methods
[ { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "ffc36fa0dcc81a7f5ba9751eee9094d7", "text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of lCA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung Die Analyse unabhfingiger Komponenten (ICA) eines Vektors beruht auf der Suche nach einer linearen Transformation, die die statistische Abh~ingigkeit zwischen den Komponenten minimiert. Zur Definition geeigneter Such-Kriterien wird die Entwicklung gemeinsamer Information als Funktion von Kumulanten steigender Ordnung genutzt. Es wird ein effizienter Algorithmus vorgeschlagen, der die Berechnung der ICA ffir Datenmatrizen innerhalb einer polynomischen Zeit erlaubt. Das Konzept der ICA kann eigentlich als Erweiterung der 'Principal Component Analysis' (PCA) betrachtet werden, die nur die Unabh~ingigkeit bis zur zweiten Ordnung erzwingen kann und deshalb Richtungen definiert, die orthogonal sind. Potentielle Anwendungen der ICA beinhalten Daten-Analyse und Kompression, Bayes-Detektion, Quellenlokalisierung und blinde Identifikation und Entfaltung.", "title": "" }, { "docid": "c7e584bca061335c8cd085511f4abb3b", "text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.", "title": "" } ]
[ { "docid": "9fac5ac1de2ae70964bdb05643d41a68", "text": "A long-standing goal in the field of artificial intelligence is to develop agents that can perceive and understand the rich visual world around us and who can communicate with us about it in natural language. Significant strides have been made towards this goal over the last few years due to simultaneous advances in computing infrastructure, data gathering and algorithms. The progress has been especially rapid in visual recognition, where computers can now classify images into categories with a performance that rivals that of humans, or even surpasses it in some cases such as classifying breeds of dogs. However, despite much encouraging progress, most of the advances in visual recognition still take place in the context of assigning one or a few discrete labels to an image (e.g. person, boat, keyboard, etc.). In this dissertation we develop models and techniques that allow us to connect the domain of visual data and the domain of natural language utterances, enabling translation between elements of the two domains. In particular, first we introduce a model that embeds both images and sentences into a common multi-modal embedding space. This space then allows us to identify images that depict an arbitrary sentence description and conversely, we can identify sentences that describe any image. Second, we develop an image captioning model that takes an image and directly generates a sentence description without being constrained a finite collection of human-written sentences to choose from. Lastly, we describe a model that can take an image and both localize and describe all if its salient parts. We demonstrate that this model can also be used backwards to take any arbitrary description (e.g. white tennis shoes) and e ciently localize the described concept in a large collection of images. We argue that these models, the techniques they take advantage of internally and the interactions they enable are a stepping stone towards artificial intelligence and that connecting images and natural language o↵ers many practical benefits and immediate valuable applications. From the modeling perspective, instead of designing and staging explicit algorithms to process images and sentences in complex processing pipelines, our contribution lies in the design of hybrid convolutional and recurrent neural network architectures that connect visual data and natural language utterances with a single network. Therefore, the computational processing of images,", "title": "" }, { "docid": "58e17619012ddb58f86dc4bfa79d19d8", "text": "–Malicious programs have been the main actors in complex, sophisticated attacks against nations, governments, diplomatic agencies, private institutions and people. Knowledge about malicious program behavior forms the basis for constructing more secure information systems. In this article, we introduce MBO, a Malicious Behavior Ontology that represents complex behaviors of suspicious executions, and through inference rules calculates their associated threat level for analytical proposals. We evaluate MBO using over two thousand unique known malware and 385 unique known benign software. Results highlight the representativeness of the MBO for expressing typical malicious activities. 
Security ontologyMalware behaviorThreat analysis", "title": "" }, { "docid": "00eb132ce5063dd983c0c36724f82cec", "text": "This paper analyzes customer product-choice behavior based on the recency and frequency of each customer’s page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression.", "title": "" }, { "docid": "23ada5f749c5780ff45057747e978b66", "text": "In this paper, we introduce ReTSO, a reliable and efficient design for transactional support in large-scale storage systems. ReTSO uses a centralized scheme and implements snapshot isolation, a property that guarantees that read operations of a transaction read a consistent snapshot of the data stored. The centralized scheme of ReTSO enables a lock-free commit algorithm that prevents unre-leased locks of a failed transaction from blocking others. We analyze the bottlenecks in a single-server implementation of transactional logic and propose solutions for each. The experimental results show that our implementation can service up to 72K transaction per second (TPS), which is an order of magnitude larger than the maximum achieved traffic in similar data storage systems. Consequently, we do not expect ReTSO to be a bottleneck even for current large distributed storage systems.", "title": "" }, { "docid": "6280266740e1a3da3fd536c134b39cfd", "text": "Despite years of research yielding systems and guidelines to aid visualization design, practitioners still face the challenge of identifying the best visualization for a given dataset and task. One promising approach to circumvent this problem is to leverage perceptual laws to quantitatively evaluate the effectiveness of a visualization design. Following previously established methodologies, we conduct a large scale (n = 1687) crowdsourced experiment to investigate whether the perception of correlation in nine commonly used visualizations can be modeled using Weber's law. The results of this experiment contribute to our understanding of information visualization by establishing that: (1) for all tested visualizations, the precision of correlation judgment could be modeled by Weber's law, (2) correlation judgment precision showed striking variation between negatively and positively correlated data, and (3) Weber models provide a concise means to quantify, compare, and rank the perceptual precision afforded by a visualization.", "title": "" }, { "docid": "0b71777f8b4d03fb147ff41d1224136e", "text": "Mobile broadband demand keeps growing at an overwhelming pace. Though emerging wireless technologies will provide more bandwidth, the increase in demand may easily consume the extra bandwidth. 
To alleviate this problem, we propose using the content available on individual devices as caches. Particularly, when a user reaches areas with dense clusters of mobile devices, \"data spots\", the operator can instruct the user to connect with other users sharing similar interests and serve the requests locally. This paper presents feasibility study as well as prototype implementation of this idea.", "title": "" }, { "docid": "dc3495ec93462e68f606246205a8416d", "text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "title": "" }, { "docid": "680c621ebc0dd6f762abb8df9871070e", "text": "Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference. This is unsatisfactory in many applications where the reference policy is suboptimal and the goal of learning is to improve upon it. Can learning to search work even when the reference is poor? We provide a new learning to search algorithm, LOLS, which does well relative to the reference policy, but additionally guarantees low regret compared to deviations from the learned policy: a local-optimality guarantee. Consequently, LOLS can improve upon the reference policy, unlike previous algorithms. This enables us to develop structured contextual bandits, a partial information structured prediction setting with many potential applications.", "title": "" }, { "docid": "0084faef0e08c4025ccb3f8fd50892f1", "text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. 
This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.", "title": "" }, { "docid": "4eabc161187126a726a6b65f6fc6c685", "text": "In this paper, we propose a new method to estimate synthetic aperture radar interferometry (InSAR) interferometric phase in the presence of large coregistration errors. The method takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can automatically coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide accurate estimate of the terrain interferometric phase (interferogram) as the coregistration error reaches one pixel. The effectiveness of the method is also verified with the real data from the Spaceborne Imaging Radar-C/X Band SAR and the European Remote Sensing 1 and 2 satellites.", "title": "" }, { "docid": "c66c1523322809d1b2d1279b5b2b8384", "text": "The design of the Smart Grid requires solving a complex problem of combined sensing, communications and control and, thus, the problem of choosing a networking technology cannot be addressed without also taking into consideration requirements related to sensor networking and distributed control. These requirements are today still somewhat undefined so that it is not possible yet to give quantitative guidelines on how to choose one communication technology over the other. In this paper, we make a first qualitative attempt to better understand the role that Power Line Communications (PLCs) can have in the Smart Grid. Furthermore, we here report recent results on the electrical and topological properties of the power distribution network. The topological characterization of the power grid is not only important because it allows us to model the grid as an information source, but also because the grid becomes the actual physical information delivery infrastructure when PLCs are used.", "title": "" }, { "docid": "31f5c712760d1733acb0d7ffd3cec6ad", "text": "Singular Spectrum Transform (SST) is a fundamental subspace analysis technique which has been widely adopted for solving change-point detection (CPD) problems in information security applications. However, the performance of a SST based CPD algorithm is limited to the lack of robustness to corrupted observations with large noises in practice. Based on the observation that large noises in practical time series are generally sparse, in this paper, we study a combination of Robust Principal Component Analysis (RPCA) and SST to obtain a robust CPD algorithm dealing with sparse large noises. The sparse large noises are to be eliminated from observation trajectory matrices by performing a low-rank matrix recovery procedure of RPCA. The noise-eliminated matrices are then used to extract SST subspaces for CPD. The effectiveness of the proposed method is demonstrated through experiments based on both synthetic and real-world datasets. 
Experimental results show that the proposed method outperforms the competing state-of-the-arts in terms of detection accuracy for time series with sparse large noises.", "title": "" }, { "docid": "30178d1de9d0aab8c3ab0ac9be674d8c", "text": "The immune system protects from infections primarily by detecting and eliminating the invading pathogens; however, the host organism can also protect itself from infectious diseases by reducing the negative impact of infections on host fitness. This ability to tolerate a pathogen's presence is a distinct host defense strategy, which has been largely overlooked in animal and human studies. Introduction of the notion of \"disease tolerance\" into the conceptual tool kit of immunology will expand our understanding of infectious diseases and host pathogen interactions. Analysis of disease tolerance mechanisms should provide new approaches for the treatment of infections and other diseases.", "title": "" }, { "docid": "6702bfca88f86e0c35a8b6195d0c971c", "text": "A hierarchical scheme for clustering data is presented which applies to spaces with a high number of dimensions ( 3 D N > ). The data set is first reduced to a smaller set of partitions (multi-dimensional bins). Multiple clustering techniques are used, including spectral clustering; however, new techniques are also introduced based on the path length between partitions that are connected to one another. A Line-of-Sight algorithm is also developed for clustering. A test bank of 12 data sets with varying properties is used to expose the strengths and weaknesses of each technique. Finally, a robust clustering technique is discussed based on reaching a consensus among the multiple approaches, overcoming the weaknesses found individually.", "title": "" }, { "docid": "cb55daf6ada8e9caba80aa4f421fc395", "text": "This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the KinectT Mrevolution when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also overview recent state of the art works on gesture recognition based on a proposed taxonomy for gesture recognition, discussing challenges and future lines of research.", "title": "" }, { "docid": "4e8c67969add0e27dc1d3cb8f36971f8", "text": "To date no AIS1 neck injury mechanism has been established, thus no neck injury criterion has been validated against such mechanism. Validation methods not related to an injury mechanism may be used. The aim of this paper was to validate different proposed neck injury criteria with reconstructed reallife crashes with recorded crash pulses and with known injury outcomes. A car fleet of more than 40,000 cars fitted with crash pulse recorders have been monitored in Sweden since 1996. All crashes with these cars, irrespective of repair cost and injury outcome have been reported. 
With the inclusion criteria of the three most represented car models, single rear-end crashes with a recorded crash pulse, and front seat occupants with no previous long-term AIS1 neck injury, 79 crashes with 110 front seat occupants remained to be analysed in this study. Madymo models of a BioRID II dummy in the three different car seats were exposed to the recorded crash pulses. The dummy readings were correlated to the real-life injury outcome, divided into duration of AIS1 neck injury symptoms. Effectiveness to predict neck injury was assessed for the criteria NIC, Nkm, NDC and lower neck moment, aimed at predicting AIS1 neck injury. Also risk curves were assessed for the effective criteria as well as for impact severity. It was found that NICmax and Nkm are applicable to predict risk of AIS1 neck injury when using a BioRID dummy. It is suggested that both BioRID NICmax and Nkm should be considered in rear-impact test evaluation. Furthermore, lower neck moment was found to be less applicable. Using the BioRID dummy NDC was also found less applicable.", "title": "" }, { "docid": "23b18b2795b0e5ff619fd9e88821cfad", "text": "Goal-oriented dialogue has been paid attention for its numerous applications in artificial intelligence. To solve this task, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a series of sentences. Motivated by theory of mind, we propose “Answerer in Questioner’s Mind” (AQM), a novel algorithm for goal-oriented dialogue. With AQM, a questioner asks and infers based on an approximated probabilistic model of the answerer. The questioner figures out the answerer’s intent via selecting a plausible question by explicitly calculating the information gain of the candidate intentions and possible answers to each question. We test our framework on two goal-oriented visual dialogue tasks: “MNIST Counting Dialog” and “GuessWhat?!.” In our experiments, AQM outperforms comparative algorithms and makes human-like dialogue. We further use AQM as a tool for analyzing the mechanism of deep reinforcement learning approach and discuss the future direction of practical goal-oriented neural dialogue systems.", "title": "" }, { "docid": "8d957e6c626855a06ac2256c4e7cd15c", "text": "This article presents a robotic dataset collected from the largest underground copper mine in the world. The sensor measurements from a 3D scanning lidar, a 2D radar, and stereo cameras were recorded from an approximately two kilometer traverse of a production-active tunnel. The equipment used and the data collection process is discussed in detail, along with the format of the data. This dataset is suitable for research in robotic navigation, as well as simultaneous localization and mapping. The download instructions are available at the following website http://dataset.amtc.cl.", "title": "" }, { "docid": "69f413d247e88022c3018b2dee1b53e2", "text": "Research and development (R&D) project selection is an important task for organizations with R&D project management. It is a complicated multi-stage decision-making process, which involves groups of decision makers. Current research on R&D project selection mainly focuses on mathematical decision models and their applications, but ignores the organizational aspect of the decision-making process. This paper proposes an organizational decision support system (ODSS) for R&D project selection. 
An object-oriented method is used to design the architecture of the ODSS. An organizational decision support system has also been developed and used to facilitate the selection of project proposals in the National Natural Science Foundation of China (NSFC). The proposed system supports the R&D project selection process at the organizational level. It provides useful information for decision-making tasks in the R&D project selection process. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3c82ba94aa4d717d51c99cfceb527f22", "text": "Manipulator collision avoidance using genetic algorithms is presented. Control gains in the collision avoidance control model are selected based on genetic algorithms. A repulsive force is artificially created using the distances between the robot links and obstacles, which are generated by a distance computation algorithm. Real-time manipulator collision avoidance control has been achieved. A repulsive force gain is introduced through the approaches for definition of link coordinate frames and kinematics computations. The safety distance between objects is affected by the repulsive force gain. This makes the safety zone adjustable and provides greater intelligence for robotic tasks under the ever-changing environment.", "title": "" } ]
scidocsrr
aa7c85f32127a96c63fc22c07cbede29
Unsupervised Discovery of Discourse Relations for Eliminating Intra-sentence Polarity Ambiguities
[ { "docid": "7723c78b2ff8f9fdc285ee05b482efef", "text": "We describe our experience in developing a discourse-annotated corpus for community-wide use. Working in the framework of Rhetorical Structure Theory, we were able to create a large annotated resource with very high consistency, using a well-defined methodology and protocol. This resource is made publicly available through the Linguistic Data Consortium to enable researchers to develop empirically grounded, discourse-specific applications.", "title": "" } ]
[ { "docid": "ff1834a5b249c436dfa5a48b5f464568", "text": "Communication primitives such as coding and multiple antenna processing have provided significant benefits for traditional wireless systems. Existing designs, however, consume significant power and computational resources, and hence cannot be run on low complexity, power constrained backscatter devices. This paper makes two main contributions: (1) we introduce the first multi-antenna cancellation design that operates on backscatter devices while retaining a small form factor and power footprint, (2) we introduce a novel coding mechanism that enables long range communication as well as concurrent transmissions and can be decoded on backscatter devices. We build hardware prototypes of the above designs that can be powered solely using harvested energy from TV and solar sources. The results show that our designs provide benefits for both RFID and ambient backscatter systems: they enable RFID tags to communicate directly with each other at distances of tens of meters and through multiple walls. They also increase the communication rate and range achieved by ambient backscatter systems by 100X and 40X respectively. We believe that this paper represents a substantial leap in the capabilities of backscatter communication.", "title": "" }, { "docid": "ca8d70248ef68c41f34eee375e511abf", "text": "While mobile advertisement is the dominant source of revenue for mobile apps, the usage patterns of mobile users, and thus their engagement and exposure times, may be in conflict with the effectiveness of current ads. Users engagement with apps can range from a few seconds to several minutes, depending on a number of factors such as users' locations, concurrent activities and goals. Despite the wide-range of engagement times, the current format of ad auctions dictates that ads are priced, sold and configured prior to actual viewing, that is regardless of the actual ad exposure time.\n We argue that the wealth of easy-to-gather contextual information on mobile devices is sufficient to allow advertisers to make better choices by effectively predicting exposure time. We analyze mobile device usage patters with a detailed two-week long user study of 37 users in the US and South Korea. After characterizing application session times, we use factor analysis to derive a simple predictive model and show that is able to offer improved accuracy compared to mean session time over 90% of the time. We make the case for including predicted ad exposure duration in the price of mobile advertisements and posit that such information could significantly impact the effectiveness of mobile ads by giving publishers the ability to tune campaigns for engagement length, and enable a more efficient market for ad impressions while lowering network utilization and device power consumption.", "title": "" }, { "docid": "a258c6b5abf18cb3880e4bc7a436c887", "text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. 
To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.", "title": "" }, { "docid": "c2e7425f719dd51eec0d8e180577269e", "text": "The most important way of communication among humans is language, and the primary medium used for it is speech. The speech recognizers make use of a parametric form of a signal to obtain the most important distinguishable features of speech signal for recognition purpose. In this paper, Linear Prediction Cepstral Coefficient (LPCC), Mel Frequency Cepstral Coefficient (MFCC) and Bark frequency Cepstral coefficient (BFCC) feature extraction techniques for recognition of Hindi Isolated, Paired and Hybrid words have been studied and the corresponding recognition rates are compared. Artificial Neural Network is used as back end processor. The experimental results show that the better recognition rate is obtained for MFCC as compared to LPCC and BFCC for all the three types of words.", "title": "" }, { "docid": "04a85672df9da82f7e5da5b8b25c9481", "text": "This study investigated long-term effects of training on postural control using the model of deficits in activation of transversus abdominis (TrA) in people with recurrent low back pain (LBP). Nine volunteers with LBP attended four sessions for assessment and/or training (initial, two weeks, four weeks and six months). Training of repeated isolated voluntary TrA contractions were performed at the initial and two-week session with feedback from real-time ultrasound imaging. Home program involved training twice daily for four weeks. Electromyographic activity (EMG) of trunk and deltoid muscles was recorded with surface and fine-wire electrodes. Rapid arm movement and walking were performed at each session, and immediately after training on the first two sessions. Onset of trunk muscle activation relative to prime mover deltoid during arm movements, and the coefficient of variation (CV) of EMG during averaged gait cycle were calculated. Over four weeks of training, onset of TrA EMG was earlier during arm movements and CV of TrA EMG was reduced (consistent with more sustained EMG activity). Changes were retained at six months follow-up (p<0.05). These results show persistence of motor control changes following training and demonstrate that this training approach leads to motor learning of automatic postural control strategies.", "title": "" }, { "docid": "f6342101ff8315bcaad4e4f965e6ba8a", "text": "In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induce additional features in the Doppler frequency spectra. These features are called micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1].", "title": "" }, { "docid": "df677d32bdbba01d27c8eb424b9893e9", "text": "Active learning is an area of machine learning examining strategies for allocation of finite resources, particularly human labeling efforts and to an extent feature extraction, in situations where available data exceeds available resources.
In this open problem paper, we motivate the necessity of active learning in the security domain, identify problems caused by the application of present active learning techniques in adversarial settings, and propose a framework for experimentation and implementation of active learning systems in adversarial contexts. More than other contexts, adversarial contexts particularly need active learning as ongoing attempts to evade and confuse classifiers necessitate constant generation of labels for new content to keep pace with adversarial activity. Just as traditional machine learning algorithms are vulnerable to adversarial manipulation, we discuss assumptions specific to active learning that introduce additional vulnerabilities, as well as present vulnerabilities that are amplified in the active learning setting. Lastly, we present a software architecture, Security-oriented Active Learning Testbed (SALT), for the research and implementation of active learning applications in adversarial contexts.", "title": "" }, { "docid": "8439309414a9999abbd0e0be95a25fb8", "text": "Cython is a Python language extension that allows explicit type declarations and is compiled directly to C. As such, it addresses Python's large overhead for numerical loops and the difficulty of efficiently using existing C and Fortran code, which Cython can interact with natively.", "title": "" }, { "docid": "89238dd77c0bf0994b53190078eb1921", "text": "Several methods exist for a computer to generate music based on data including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a procedure based on ranking, where, we consider a unit to be a variable length number of measures of music. We first examine whether a unit selection method, that is restricted to a finite size unit library, can be sufficient for encompassing a wide spectrum of music. This is done by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics including mean rank and accuracy and with a subjective listening test in which expert musicians are asked to complete a forcedchoiced ranking task. Our system is compared to a note-level generative baseline model that consists of a stacked LSTM trained to predict forward by one note.", "title": "" }, { "docid": "410bd8286a87a766dd221c1269f05c04", "text": "The lowand mid-frequency model of the transformer with resistive load is analysed for different values of coupling coefficients. The model comprising of coupling-dependent inductances is used to derive the following characteristics: voltage gain, current gain, bandwidth, input impedance, and transformer efficiency. It is shown that in the lowand mid-frequency range, the turns ratio between the windings is a strong function of the coupling coefficient, i.e., if the coupling coefficient decreases, then the effective turns ratio reduces. A practical transformer was designed, simulated, and tested. It was observed that the magnitudes of the voltage transfer function and current transfer function exhibit a maximum value each at a different value of coupling coefficient. 
In addition, as the coupling coefficient decreases, the transformer bandwidth also decreases. Furthermore, analytical expressions for the transformer efficiency for resistive loads are derived and its variation with respect to frequency at different coupling coefficients is investigated. It is shown that the transformer efficiency is maximum at any coupling coefficient if the input resistance is equal to the load resistance. Experimental validation of the theoretical results was performed using a practical transformer set-up. The theoretical predictions were found to be in good agreement with the experimental results.", "title": "" }, { "docid": "2ea886246d4f59d88c3eabd99c60dd5d", "text": "This paper proposes a Modified Particle Swarm Optimization with Time Varying Acceleration Coefficients (MPSO-TVAC) for solving economic load dispatch (ELD) problem. Due to prohibited operating zones (POZ) and ramp rate limits of the practical generators, the ELD problems become nonlinear and nonconvex optimization problem. Furthermore, the ELD problem may be more complicated if transmission losses are considered. Particle swarm optimization (PSO) is one of the famous heuristic methods for solving nonconvex problems. However, this method may suffer to trap at local minima especially for multimodal problem. To improve the solution quality and robustness of PSO algorithm, a new best neighbour particle called ‘rbest’ is proposed. The rbest provides extra information for each particle that is randomly selected from other best particles in order to diversify the movement of particle and avoid premature convergence. The effectiveness of MPSO-TVAC algorithm is tested on different power systems with POZ, ramp-rate limits and transmission loss constraints. To validate the performances of the proposed algorithm, comparative studies have been carried out in terms of convergence characteristic, solution quality, computation time and robustness. Simulation results found that the proposed MPSO-TVAC algorithm has good solution quality and more robust than other methods reported in previous work.", "title": "" }, { "docid": "aa64bd9576044ec5e654c9f29c4f7d84", "text": "BACKGROUND\nSocial media are dynamic and interactive computer-mediated communication tools that have high penetration rates in the general population in high-income and middle-income countries. However, in medicine and health care, a large number of stakeholders (eg, clinicians, administrators, professional colleges, academic institutions, ministries of health, among others) are unaware of social media's relevance, potential applications in their day-to-day activities, as well as the inherent risks and how these may be attenuated and mitigated.\n\n\nOBJECTIVE\nWe conducted a narrative review with the aim to present case studies that illustrate how, where, and why social media are being used in the medical and health care sectors.\n\n\nMETHODS\nUsing a critical-interpretivist framework, we used qualitative methods to synthesize the impact and illustrate, explain, and provide contextual knowledge of the applications and potential implementations of social media in medicine and health care. Both traditional (eg, peer-reviewed) and nontraditional (eg, policies, case studies, and social media content) sources were used, in addition to an environmental scan (using Google and Bing Web searches) of resources.\n\n\nRESULTS\nWe reviewed, evaluated, and synthesized 76 articles, 44 websites, and 11 policies/reports. 
Results and case studies are presented according to 10 different categories of social media: (1) blogs (eg, WordPress), (2) microblogs (eg, Twitter), (3) social networking sites (eg, Facebook), (4) professional networking sites (eg, LinkedIn, Sermo), (5) thematic networking sites (eg, 23andMe), (6) wikis (eg, Wikipedia), (7) mashups (eg, HealthMap), (8) collaborative filtering sites (eg, Digg), (9) media sharing sites (eg, YouTube, Slideshare), and others (eg, SecondLife). Four recommendations are provided and explained for stakeholders wishing to engage with social media while attenuating risk: (1) maintain professionalism at all times, (2) be authentic, have fun, and do not be afraid, (3) ask for help, and (4) focus, grab attention, and engage.\n\n\nCONCLUSIONS\nThe role of social media in the medical and health care sectors is far reaching, and many questions in terms of governance, ethics, professionalism, privacy, confidentiality, and information quality remain unanswered. By following the guidelines presented, professionals have a starting point to engage with social media in a safe and ethical manner. Future research will be required to understand the synergies between social media and evidence-based practice, as well as develop institutional policies that benefit patients, clinicians, public health practitioners, and industry alike.", "title": "" }, { "docid": "06f6ffa9c1c82570b564e1cd0f719950", "text": "Widespread use of biometric architectures implies the need to secure highly sensitive data to respect the privacy rights of the users. In this paper, we discuss the following question: To what extent can biometric designs be characterized as Privacy Enhancing Technologies? The terms of privacy and security for biometric schemes are defined, while current regulations for the protection of biometric information are presented. Additionally, we analyze and compare cryptographic techniques for secure biometric designs. Finally, we introduce a privacy-preserving approach for biometric authentication in mobile electronic financial applications. Our model utilizes the mechanism of pseudonymous biometric identities for secure user registration and authentication. We discuss how the privacy requirements for the processing of biometric data can be met in our scenario. This work attempts to contribute to the development of privacy-by-design biometric technologies.", "title": "" }, { "docid": "74a91327b85ac9681f618d4ba6a86151", "text": "In this paper, a miniaturized planar antenna with enhanced bandwidth is designed for the ISM 433 MHz applications. The antenna is realized by cascading two resonant structures with meander lines, thus introducing two different radiating branches to realize two neighboring resonant frequencies. The techniques of shorting pin and novel ground plane are adopted for bandwidth enhancement. Combined with these structures, a novel antenna with a total size of 23 mm × 49.5 mm for the ISM band application is developed and fabricated. Measured results show that the proposed antenna has good performance with the -10 dB impedance bandwidth is about 12.5 MHz and the maximum gain is about -2.8 dBi.", "title": "" }, { "docid": "f0f88be4a2b7619f6fb5cdcca1741d1f", "text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. 
We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)", "title": "" }, { "docid": "f3cb18c15459dd7a9c657e32442bd289", "text": "The advent of crowdsourcing has created a variety of new opportunities for improving upon traditional methods of data collection and annotation. This in turn has created intriguing new opportunities for data-driven machine learning (ML). Convenient access to crowd workers for simple data collection has further generalized to leveraging more arbitrary crowd-based human computation (von Ahn 2005) to supplement automated ML. While new potential applications of crowdsourcing continue to emerge, a variety of practical and sometimes unexpected obstacles have already limited the degree to which its promised potential can be actually realized in practice. This paper considers two particular aspects of crowdsourcing and their interplay, data quality control (QC) and ML, reflecting on where we have been, where we are, and where we might go from here.", "title": "" }, { "docid": "400048566b24d7527845f7c6b6d86fc0", "text": "In brief: Diagnosis of skier's thumb-a common sports injury-is based on physical examination and history of the injury. The most important findings from the physical exam are point tenderness over the ulnar collateral ligament and instability, which is tested with the thumb at 0° and at 20° to 30° of flexion. Grade 1 and 2 injuries, which involve torn fibers but no loss of integrity, can be treated with casting and/or splinting and physical therapy. Grade 3 injuries involve complete disruption of the ligament and usually require surgical repair. 
Results from treatment are generally excellent, and with appropriate rehabilitation, athletes recover pinch and grip strength and return to sports.", "title": "" }, { "docid": "06d2d07ed7532aa19b779607a21afef7", "text": "BACKGROUND\nMyocardium irreversibly injured by ischemic stress must be efficiently repaired to maintain tissue integrity and contractile performance. Macrophages play critical roles in this process. These cells transform across a spectrum of phenotypes to accomplish diverse functions ranging from mediating the initial inflammatory responses that clear damaged tissue to subsequent reparative functions that help rebuild replacement tissue. Although macrophage transformation is crucial to myocardial repair, events governing this transformation are poorly understood.\n\n\nMETHODS\nHere, we set out to determine whether innate immune responses triggered by cytoplasmic DNA play a role.\n\n\nRESULTS\nWe report that ischemic myocardial injury, along with the resulting release of nucleic acids, activates the recently described cyclic GMP-AMP synthase-stimulator of interferon genes pathway. Animals lacking cyclic GMP-AMP synthase display significantly improved early survival after myocardial infarction and diminished pathological remodeling, including ventricular rupture, enhanced angiogenesis, and preserved ventricular contractile function. Furthermore, cyclic GMP-AMP synthase loss of function abolishes the induction of key inflammatory programs such as inducible nitric oxide synthase and promotes the transformation of macrophages to a reparative phenotype, which results in enhanced repair and improved hemodynamic performance.\n\n\nCONCLUSIONS\nThese results reveal, for the first time, that the cytosolic DNA receptor cyclic GMP-AMP synthase functions during cardiac ischemia as a pattern recognition receptor in the sterile immune response. Furthermore, we report that this pathway governs macrophage transformation, thereby regulating postinjury cardiac repair. Because modulators of this pathway are currently in clinical use, our findings raise the prospect of new treatment options to combat ischemic heart disease and its progression to heart failure.", "title": "" }, { "docid": "f443e22db2a2313b47168740662ad187", "text": "Tunneling-field-effect-transistor (TFET) has emerged as an alternative for conventional CMOS by enabling the supply voltage (VDD) scaling in ultra-low power, energy efficient computing, due to its sub-60 mV/ decade sub-threshold slope (SS). Given its unique device characteristics such as the asymmetrical source/drain design induced uni-directional conduction, enhanced on-state Miller capacitance effect and steep switching at low voltages, TFET based circuit design requires strong interactions between the device-level and the circuit-level to explore the performance benefits, with certain modifications of the conventional CMOS circuits to achieve the functionality and optimal energy efficiency. Because TFET operates at low supply voltage range (VDD < 0:5 V) to outperform CMOS, reliability issues can have profound impact on the circuit design from the practical application perspective. In this review paper, we present recent development on Tunnel FET device design, and modeling technique for circuit implementation and performance benchmarking. We focus on the reliability issues such as soft-error, electrical noise and process variation, and their impact on TFET based circuit performance compared to sub-threshold CMOS. 
Analytical models of electrical noise and process variation are also discussed for circuit-level", "title": "" }, { "docid": "1e25480ef6bd5974fcd806aac7169298", "text": "Alphabetical ciphers have been used for centuries to induce confusion in messages, but there are some drawbacks associated with classical alphabetic techniques, such as concealment of the key and plaintext. In this paper we suggest an encryption technique that is a blend of both classical and modern techniques; this hybrid technique is superior in terms of security to average classical ciphers.", "title": "" } ]
scidocsrr
2ea6466de9702c55fb87df541947b9d0
Searching by Talking: Analysis of Voice Queries on Mobile Web Search
[ { "docid": "ef08ef786fd759b33a7d323c69be19db", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collections.", "title": "" } ]
[ { "docid": "f4abfe0bb969e2a6832fa6317742f202", "text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.", "title": "" }, { "docid": "b0c60343724a49266fac2d2f4c2d37d3", "text": "In the Western world, aging is a growing problem of the society and computer assisted treatments can facilitate the telemedicine for old people or it can help in rehabilitations of patients after sport accidents in far locations. Physical exercises play an important role in physiotherapy and RGB-D devices can be utilized to recognize them in order to make interactive computer healthcare applications in the future. A practical model definition is introduced in this paper to recognize different exercises with Asus Xtion camera. One of the contributions is the extendable recognition models to detect other human activities with noisy sensors, but avoiding heavy data collection. The experiments show satisfactory detection performance without any false positives which is unique in the field to the best of the author knowledge. The computational costs are negligible thus the developed models can be suitable for embedded systems.", "title": "" }, { "docid": "d7bb22eefbff0a472d3e394c61788be2", "text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ca9c4512d2258a44590a298879219970", "text": "I propose a common framework that combines three different paradigms in machine learning: generative, discriminative and imitative learning. A generative probabilistic distribution is a principled way to model many machine learning and machine perception problems. Therein, one provides domain specific knowledge in terms of structure and parameter priors over the joint space of variables. 
Bayesian networks and Bayesian statistics provide a rich and flexible language for specifying this knowledge and subsequently refining it with data and observations. The final result is a distribution that is a good generator of novel exemplars. Conversely, discriminative algorithms adjust a possibly non-distributional model to data optimizing for a specific task, such as classification or prediction. This typically leads to superior performance yet compromises the flexibility of generative modeling. I present Maximum Entropy Discrimination (MED) as a framework to combine both discriminative estimation and generative probability densities. Calculations involve distributions over parameters, margins, and priors and are provably and uniquely solvable for the exponential family. Extensions include regression, feature selection, and transduction. SVMs are also naturally subsumed and can be augmented with, for example, feature selection, to obtain substantial improvements. To extend to mixtures of exponential families, I derive a discriminative variant of the ExpectationMaximization (EM) algorithm for latent discriminative learning (or latent MED). While EM and Jensen lower bound log-likelihood, a dual upper bound is made possible via a novel reverse-Jensen inequality. The variational upper bound on latent log-likelihood has the same form as EM bounds, is computable efficiently and is globally guaranteed. It permits powerful discriminative learning with the wide range of contemporary probabilistic mixture models (mixtures of Gaussians, mixtures of multinomials and hidden Markov models). We provide empirical results on standardized data sets that demonstrate the viability of the hybrid discriminative-generative approaches of MED and reverse-Jensen bounds over state of the art discriminative techniques or generative approaches. Subsequently, imitative learning is presented as another variation on generative modeling which also learns from exemplars from an observed data source. However, the distinction is that the generative model is an agent that is interacting in a much more complex surrounding external world. It is not efficient to model the aggregate space in a generative setting. I demonstrate that imitative learning (under appropriate conditions) can be adequately addressed as a discriminative prediction task which outperforms the usual generative approach. This discriminative-imitative learning approach is applied with a generative perceptual system to synthesize a real-time agent that learns to engage in social interactive behavior. Thesis Supervisor: Alex Pentland Title: Toshiba Professor of Media Arts and Sciences, MIT Media Lab Discriminative, Generative and Imitative Learning", "title": "" }, { "docid": "9584909fc62cca8dc5c9d02db7fa7e5d", "text": "As the nature of many materials handling tasks have begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level. 
This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight to the nature of spine loads and their potential risk to the low back during modern exertions.", "title": "" }, { "docid": "4cc4c8fd07f30b5546be2376c1767c19", "text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.", "title": "" }, { "docid": "8c174dbb8468b1ce6f4be3676d314719", "text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.", "title": "" }, { "docid": "8af2e53cb3f77a2590945f135a94279b", "text": "Time series data are an ubiquitous and important data source in many domains. Most companies and organizations rely on this data for critical tasks like decision-making, planning, and analytics in general. Usually, all these tasks focus on actual data representing organization and business processes. In order to assess the robustness of current systems and methods, it is also desirable to focus on time-series scenarios which represent specific time-series features. 
This work presents a generally applicable and easy-to-use method for the feature-driven generation of time series data. Our approach extracts descriptive features of a data set and allows the construction of a specific version by means of the modification of these features.", "title": "" }, { "docid": "6b8329ef59c6811705688e48bf6c0c08", "text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.", "title": "" }, { "docid": "1785d1d7da87d1b6e5c41ea89e447bf9", "text": "Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given.", "title": "" }, { "docid": "924768b271caa9d1ba0cb32ab512f92e", "text": "Traditional keyboard and mouse based presentation prevents lecturers from interacting with the audiences freely and closely. In this paper, we propose a gesture-aware presentation tool named SlideShow to liberate lecturers from physical space constraints and make human-computer interaction more natural and convenient. In our system, gesture data is obtained by a handle controller with 3-axis accelerometer and gyro and transmitted to host-side through bluetooth, then we use Bayesian change point detection to segment continuous gesture series and HMM to recognize the gesture. In consequence Slideshow could carry out the corresponding operations on PowerPoint(PPT) to make a presentation, and operation states can be switched automatically and intelligently during the presentation.
Both the experimental and testing results show our approach is practical, useful and convenient.", "title": "" }, { "docid": "d2f64c21d0a3a54b4a2b75b7dd7df029", "text": "Library of Congress Cataloging in Publication Data EB. Boston studies in the philosophy of science.The concept of autopoiesis is due to Maturana and Varela 8, 9. The aim of this article is to revisit the concepts of autopoiesis and cognition in the hope of.Amazon.com: Autopoiesis and Cognition: The Realization of the Living Boston Studies in the Philosophy of Science, Vol. 42 9789027710161: H.R. Maturana.Autopoiesis, The Santiago School of Cognition, and. In their early work together Maturana and Varela developed the idea of autopoiesis.Autopoiesis and Cognition: The Realization of the Living Dordecht.", "title": "" }, { "docid": "566c6e3f9267fc8ccfcf337dc7aa7892", "text": "Research into the values motivating unsustainable behavior has generated unique insight into how NGOs and environmental campaigns contribute toward successfully fostering significant and long-term behavior change, yet thus far this research has not been applied to the domain of sustainable HCI. We explore the implications of this research as it relates to the potential limitations of current approaches to persuasive technology, and what it means for designing higher impact interventions. As a means of communicating these implications to be readily understandable and implementable, we develop a set of antipatterns to describe persuasive technology approaches that values research suggests are unlikely to yield significant sustainability wins, and a complementary set of patterns to describe new guidelines for what may become persuasive technology best practice.", "title": "" }, { "docid": "f48d02ff3661d3b91c68d6fcf750f83e", "text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.", "title": "" }, { "docid": "c3558d8f79cd8a7f53d8b6073c9a7db3", "text": "De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms. We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net. 
The run time of this protocol is highly dependent on the size and complexity of data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.", "title": "" }, { "docid": "745cdbb442c73316f691dc20cc696f31", "text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.", "title": "" }, { "docid": "f90784e4bdaad1f8ecb5941867a467cf", "text": "Social Networks (SN) Sites are becoming very popular and the number of users is increasing rapidly. However, with that increase there is also an increase in the security threats which affect the users’ privacy, identity and confidentiality. Different research groups highlighted the security threats in SN and attempted to offer some solutions to these issues. In this paper we survey several examples of this research and highlight the approaches. All the models we surveyed were focusing on protecting users’ information yet they failed to cover other important issues. For example, none of the mechanisms provided the users with control over what others can reveal about them; and encryption of images is still not achieved properly. Generally having higher security measures will affect the system’s performance in terms of speed and response time. However, this trade-off was not discussed or addressed in any of the models we surveyed.", "title": "" }, { "docid": "a38986fcee27fb733ec51cf83771a85f", "text": "A tunable broadband inverted microstrip line phase shifter filled with Liquid Crystals (LCs) is investigated between 1.125 GHz and 35 GHz at room temperature. The effective dielectric anisotropy is tuned by a DC-voltage of up to 30 V. In addition to standard LCs like K15 (5CB), a novel highly anisotropic LC mixture is characterized by a resonator method at 8.5 GHz, showing a very high dielectric anisotropy /spl Delta/n of 0.32 for the novel mixture compared to 0.13 for K15. These LCs are filled into two inverted microstrip line phase shifter devices with different polyimide films and heights. With a physical length of 50 mm, the insertion losses are about 4 dB for the novel mixture compared to 6 dB for K15 at 24 GHz. A differential phase shift of 360/spl deg/ can be achieved at 30 GHz with the novel mixture. The figure-of-merit of the phase shifter exceeds 110/spl deg//dB for the novel mixture compared to 21/spl deg//dB for K15 at 24 GHz. To our knowledge, this is the best value above 20 GHz at room temperature demonstrated for a tunable phase shifter based on nonlinear dielectrics up to now. This substantial progress opens up totally new low-cost LC applications beyond optics.", "title": "" }, { "docid": "ab0c80a10d26607134828c6b350089aa", "text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. 
A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.", "title": "" } ]
scidocsrr
286ea8972c234744e1b70f8e9d9b0bed
A Novel Approach for Effective Recognition of the Code-Switched Data on Monolingual Language Model
[ { "docid": "1d05fb1a3ca5e83659996fba154fb12e", "text": "Code-switching is a very common phenomenon in multilingual communities. In this paper, we investigate language modeling for conversational Mandarin-English code-switching (CS) speech recognition. First, we investigate the prediction of code switches based on textual features with focus on Part-of-Speech (POS) tags and trigger words. Second, we propose a structure of recurrent neural networks to predict code-switches. We extend the networks by adding POS information to the input layer and by factorizing the output layer into languages. The resulting models are applied to our task of code-switching language modeling. The final performance shows 10.8% relative improvement in perplexity on the SEAME development set which transforms into a 2% relative improvement in terms of Mixed Error Rate and a relative improvement of 16.9% in perplexity on the evaluation set which leads to a 2.7% relative improvement of MER.", "title": "" }, { "docid": "9df0cdd0273b19737de0591310131bff", "text": "We present a freely available open-source toolkit for training recurrent neural network based language models. It can be easily used to improve existing speech recognition and machine translation systems. Also, it can be used as a baseline for future research of advanced language modeling techniques. In the paper, we discuss optimal parameter selection and different modes of functionality. The toolkit, example scripts and basic setups are freely available at http://rnnlm.sourceforge.net/. I. INTRODUCTION, MOTIVATION AND GOALS Statistical language modeling attracts a lot of attention, as models of natural languages are an important part of many practical systems today. Moreover, it can be estimated that with further research progress, language models will become closer to human understanding [1] [2], and completely new applications will become practically realizable. Immediately, any significant progress in language modeling can be utilized in the existing speech recognition and statistical machine translation systems. However, the whole research field struggled for decades to overcome very simple, but also effective models based on n-gram frequencies [3] [4]. Many techniques were developed to beat n-grams, but the improvements came at the cost of computational complexity. Moreover, the improvements were often reported on very basic systems, and after application to state-of-the-art setups and comparison to n-gram models trained on large amounts of data, improvements provided by many techniques vanished. This has led to scepticism among speech recognition researchers. In our previous work, we have compared many major advanced language modeling techniques, and found that neural network based language models (NNLM) perform the best on several standard setups [5]. Models of this type were introduced by Bengio in [6], about ten years ago. Their main weaknesses were huge computational complexity, and nontrivial implementation. Successful training of neural networks requires well chosen hyper-parameters, such as learning rate and size of hidden layer. To help overcome these basic obstacles, we have decided to release our toolkit for training recurrent neural network based language models (RNNLM). We have shown that the recurrent architecture outperforms the feedforward one on several setups in [7]. Moreover, the implementation is simple and easy to understand. Most importantly, recurrent neural networks are very interesting from the research point of view, as they allow effective processing of sequences and patterns with arbitrary length: these models can learn to store information in the hidden layer. Recurrent neural networks can have memory, and are thus an important step forward to overcome the most painful and often criticized drawback of n-gram models: dependence on the previous two or three words only. In this paper we present an open source and freely available toolkit for training statistical language models based on recurrent neural networks. It includes techniques for reducing computational complexity (classes in the output layer and direct connections between input and output layer). Our toolkit has been designed to provide comparable results to the popular toolkit for training n-gram models, SRILM [8]. The main goals for the RNNLM toolkit are these: • promotion of research of advanced language modeling techniques • easy usage • simple portable code without any dependencies • computational efficiency In the paper, we describe how to easily make RNNLM part of almost any speech recognition or machine translation system that produces lattices. II. RECURRENT NEURAL NETWORK The recurrent neural network architecture used in the toolkit is shown in Figure 1 (usually called Elman network, or simple RNN). The input layer uses the 1-of-N representation of the previous word w(t) concatenated with the previous state of the hidden layer s(t − 1). The neurons in the hidden layer s(t) use a sigmoid activation function. The output layer y(t) has the same dimensionality as w(t), and after the network is trained, it represents the probability distribution of the next word given the previous word and the state of the hidden layer in the previous time step [9]. The class layer c(t) can be optionally used to reduce computational complexity of the model, at a small cost of accuracy [7]. Training is performed by the standard stochastic gradient descent algorithm, and the matrix W that", "title": "" }, { "docid": "f09733894d94052707ed768aea8d26e6", "text": "The aim of this paper is to investigate the rules and constraints of code-switching (CS) in Hindi-English mixed language data. In this paper, we’ll discuss how we collected the mixed language corpus. This corpus is primarily made up of student interview speech. The speech was manually transcribed and verified by bilingual speakers of Hindi and English. The code-switching cases in the corpus are discussed and the reasons for code-switching are explained.", "title": "" } ]
[ { "docid": "0ab14a40df6fe28785262d27a4f5b8ce", "text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.", "title": "" }, { "docid": "cce36b208b8266ddacc8baea18cd994b", "text": "Shape from shading is known to be an ill-posed problem. We show in this paper that if we model the problem in a different way than it is usually done, more precisely by taking into account the 1/r/sup 2/ attenuation term of the illumination, shape from shading becomes completely well-posed. Thus the shading allows to recover (almost) any surface from only one image (of this surface) without any additional data (in particular, without the knowledge of the heights of the solution at the local intensity \"minima\", contrary to [P. Dupuis et al. (1994), E. Prados et al. (2004), B. Horn (1986), E. Rouy et al. (1992), R. Kimmel et al. (2001)]) and without regularity assumptions (contrary to [J. Oliensis et al. (1993), R. Kimmel et al. (1995)], for example). More precisely, we formulate the problem as that of solving a new partial differential equation (PDE), we develop a complete mathematical study of this equation and we design a new provably convergent numerical method. Finally, we present results of our new shape from shading method on various synthetic and real images.", "title": "" }, { "docid": "3f6572916ac697188be30ef798acbbff", "text": "The vector representation of Bengali words using word2vec model (Mikolov et al. (2013)) plays an important role in Bengali sentiment classification. It is observed that the words that are from same context stay closer in the vector space of word2vec model and they are more similar than other words. In this article, a new approach of sentiment classification of Bengali comments with word2vec and Sentiment extraction of words are presented. 
Combining the results of word2vec word co-occurrence score with the sentiment polarity score of the words, the accuracy obtained is 75.5%.", "title": "" }, { "docid": "46291c5a7fafd089c7729f7bc77ae8b7", "text": "This paper proposes a new system for offline writer identification and writer verification. The proposed method uses GMM supervectors to encode the feature distribution of individual writers. Each supervector originates from an individual GMM which has been adapted from a background model via a maximum-a-posteriori step followed by mixing the new statistics with the background model. We show that this approach improves the TOP-1 accuracy of the current best ranked methods evaluated at the ICDAR-2013 competition dataset from 95.1% [13] to 97.1%, and from 97.9% [11] to 99.2% at the CVL dataset, respectively. Additionally, we compare the GMM supervector encoding with other encoding schemes, namely Fisher vectors and Vectors of Locally Aggregated Descriptors.", "title": "" }, { "docid": "5855428c40fd0e25e0d05554d2fc8864", "text": "When the landmark patient Phineas Gage died in 1861, no autopsy was performed, but his skull was later recovered. The brain lesion that caused the profound personality changes for which his case became famous has been presumed to have involved the left frontal region, but questions have been raised about the involvement of other regions and about the exact placement of the lesion within the vast frontal territory. Measurements from Gage's skull and modern neuroimaging techniques were used to reconstitute the accident and determine the probable location of the lesion. The damage involved both left and right prefrontal cortices in a pattern that, as confirmed by Gage's modern counterparts, causes a defect in rational decision making and the processing of emotion.", "title": "" }, { "docid": "56a52c6a6b1815daee9f65d8ffc2610e", "text": "State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark.", "title": "" }, { "docid": "adafa8a9f41878df975c239e592dc236", "text": "Cognitive behavioral therapy (CBT) is one of the most effective psychotherapy modalities used to treat depression and anxiety disorders. Homework is an integral component of CBT, but homework compliance in CBT remains problematic in real-life practice. The popularization of the mobile phone with app capabilities (smartphone) presents a unique opportunity to enhance CBT homework compliance; however, there are no guidelines for designing mobile phone apps created for this purpose. 
Existing literature suggests 6 essential features of an optimal mobile app for maximizing CBT homework compliance: (1) therapy congruency, (2) fostering learning, (3) guiding therapy, (4) connection building, (5) emphasis on completion, and (6) population specificity. We expect that a well-designed mobile app incorporating these features should result in improved homework compliance and better outcomes for its users.", "title": "" }, { "docid": "0bc403d33be9115e860cfe925ee8437a", "text": "Orofacial analysis has been used by dentists for many years. The process involves applying mathematical rules, geometric principles, and straight lines to create either parallel or perpendicular references based on the true horizon and/or natural head position. These reference lines guide treatment planning and smile design for restorative treatments to achieve harmony between the new smile and the face. The goal is to obtain harmony and not symmetry. Faces are asymmetrical entities and because of that cannot be analyzed using purely straight lines. In this article, a more natural, organic, and dynamic process of evaluation is presented to minimize errors and generate harmoniously balanced smiles instead of perfect, mathematical smiles.", "title": "" }, { "docid": "f27ad6bf5c65fdea1a98b118b1a43c85", "text": "Localization is one of the problems that often appears in the world of robotics. Monte Carlo Localization (MCL) are the one of the popular algorithms in localization because easy to implement on issues Global Localization. This algorithm using particles to represent the robot position. MCL can simulated by Robot Operating System (ROS) using robot type is Pioneer3-dx. In this paper we will discuss about this algorithm on ROS, by analyzing the influence of the number particle that are used for localization of the actual robot position.", "title": "" }, { "docid": "37a108b2d30a08cb78321f96c1e9eca4", "text": "The TRAM flap, DIEP flap, and gluteal free flaps are routinely used for breast reconstruction. However, these have seldom been described for reconstruction of buttock deformities. We present three cases of free flaps used to restore significant buttock contour deformities. They introduce vascularised bulky tissue and provide adequate cushioning for future sitting, as well as correction of the aesthetic defect.", "title": "" }, { "docid": "42e2aec24a5ab097b5fff3ec2fe0385d", "text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.", "title": "" }, { "docid": "390cb70c820d0ebefe936318f8668ac3", "text": "BACKGROUND\nMandatory labeling of products with top allergens has improved food safety for consumers. 
Precautionary allergen labeling (PAL), such as \"may contain\" or \"manufactured on shared equipment,\" are voluntarily placed by the food industry.\n\n\nOBJECTIVE\nTo establish knowledge of PAL and its impact on purchasing habits by food-allergic consumers in North America.\n\n\nMETHODS\nFood Allergy Research & Education and Food Allergy Canada surveyed consumers in the United States and Canada on purchasing habits of food products featuring different types of PAL. Associations between respondents' purchasing behaviors and individual characteristics were estimated using multiple logistic regression.\n\n\nRESULTS\nOf 6684 participants, 84.3% (n = 5634) were caregivers of a food-allergic child and 22.4% had food allergy themselves. Seventy-one percent reported a history of experiencing a severe allergic reaction. Buying practices varied on the basis of PAL wording; 11% of respondents purchased food with \"may contain\" labeling, whereas 40% purchased food that used \"manufactured in a facility that also processes.\" Twenty-nine percent of respondents were unaware that the law requires labeling of priority food allergens. Forty-six percent were either unsure or incorrectly believed that PAL is required by law. Thirty-seven percent of respondents thought PAL was based on the amount of allergen present. History of a severe allergic reaction decreased the odds of purchasing foods with PAL.\n\n\nCONCLUSIONS\nAlmost half of consumers falsely believed that PAL was required by law. Up to 40% surveyed consumers purchased products with PAL. Understanding of PAL is poor, and improved awareness and guidelines are needed to help food-allergic consumers purchase food safely.", "title": "" }, { "docid": "f97490dfe6b7d77870c3effbba14c204", "text": "Mobile phones and carriers trust the traditional base stations which serve as the interface between the mobile devices and the fixed-line communication network. Femtocells, miniature cellular base stations installed in homes and businesses, are equally trusted yet are placed in possibly untrustworthy hands. By making several modifications to a commercially available femtocell, we evaluate the impact of attacks originating from a compromised device. We show that such a rogue device can violate all the important aspects of security for mobile subscribers, including tracking phones, intercepting communication and even modifying and impersonating traffic. The specification also enables femtocells to directly communicate with other femtocells over a VPN and the carrier we examined had no filtering on such communication, enabling a single rogue femtocell to directly communicate with (and thus potentially attack) all other femtocells within the carrier’s network.", "title": "" }, { "docid": "01651546f9fb6c984e84cfd2d1702b8e", "text": "There is increasing evidence for the involvement of glutamate-mediated neurotoxicity in the pathogenesis of Alzheimer's disease (AD). We suggest that glutamate receptors of the N-methyl-D-aspartate (NMDA) type are overactivated in a tonic rather than a phasic manner in this disorder. This continuous mild activation may lead to neuronal damage and impairment of synaptic plasticity (learning). It is likely that under such conditions Mg(2+) ions, which block NMDA receptors under normal resting conditions, can no longer do so. 
We found that overactivation of NMDA receptors using a direct agonist or a decrease in Mg(2+) concentration produced deficits in synaptic plasticity (in vivo: passive avoidance test and/or in vitro: LTP in the CA1 region). In both cases, memantine-an uncompetitive NMDA receptor antagonists with features of an 'improved' Mg(2+) (voltage-dependency, kinetics, affinity)-attenuated this deficit. Synaptic plasticity was restored by therapeutically-relevant concentrations of memantine (1 microM). Moreover, doses leading to similar brain/serum levels provided neuroprotection in animal models relevant for neurodegeneration in AD such as neurotoxicity produced by inflammation in the NBM or beta-amyloid injection to the hippocampus. As such, if overactivation of NMDA receptors is present in AD, memantine would be expected to improve both symptoms (cognition) and to slow down disease progression because it takes over the physiological function of magnesium.", "title": "" }, { "docid": "d9bbe52033912f29c98ef620e70f1cb1", "text": "Low-cost hardware platforms for biomedical engineering are becoming increasingly available, which empower the research community in the development of new projects in a wide range of areas related with physiological data acquisition. Building upon previous work by our group, this work compares the quality of the data acquired by means of two different versions of the multimodal physiological computing platform BITalino, with a device that can be considered a reference. We acquired data from 5 sensors, namely Accelerometry (ACC), Electrocardiography (ECG), Electroencephalography (EEG), Electrodermal Activity (EDA) and Electromyography (EMG). Experimental evaluation shows that ACC, ECG and EDA data are highly correlated with the reference in what concerns the raw waveforms. When compared by means of their commonly used features, EEG and EMG data are also quite similar across the different devices.", "title": "" }, { "docid": "a966216fd4fc3a93e50dbbb1be84e908", "text": "Extracting temporal information from raw text is fundamental for deep language understanding, and key to many applications like question answering, information extraction, and document summarization. Our long-term goal is to build complete temporal structure of documents and use the temporal structure in other applications like textual entailment, question answering, visualization, or others. In this paper, we present a first step, a system for extracting events, event features, main events, temporal expressions and their normalized values from raw text. Our system is a combination of deep semantic parsing with extraction rules, Markov Logic Network classifiers and Conditional Random Field classifiers. To compare with existing systems, we evaluated our system on the TempEval 1 and TempEval 2 corpus. Our system outperforms or performs competitively with existing systems that evaluate on the TimeBank, TempEval 1 and TempEval 2 corpus and our performance is very close to inter-annotator agreement of the TimeBank annotators.", "title": "" }, { "docid": "826fd1fbf5fc5e72ed4c2a1cdce00dec", "text": "In this paper, we design a fast MapReduce algorithm for Monte Carlo approximation of personalized PageRank vectors of all the nodes in a graph. The basic idea is very efficiently doing single random walks of a given length starting at each node in the graph. More precisely, we design a MapReduce algorithm, which given a graph G and a length », outputs a single random walk of length » starting at each node in G. 
We will show that the number of MapReduce iterations used by our algorithm is optimal among a broad family of algorithms for the problem, and its I/O efficiency is much better than the existing candidates. We will then show how we can use this algorithm to very efficiently approximate all the personalized PageRank vectors. Our empirical evaluation on real-life graph data and in production MapReduce environment shows that our algorithm is significantly more efficient than all the existing algorithms in the MapReduce setting.", "title": "" }, { "docid": "e48941f23ee19ec4b26c4de409a84fe2", "text": "Object recognition is challenging especially when the objects from different categories are visually similar to each other. In this paper, we present a novel joint dictionary learning (JDL) algorithm to exploit the visual correlation within a group of visually similar object categories for dictionary learning where a commonly shared dictionary and multiple category-specific dictionaries are accordingly modeled. To enhance the discrimination of the dictionaries, the dictionary learning problem is formulated as a joint optimization by adding a discriminative term on the principle of the Fisher discrimination criterion. As well as presenting the JDL model, a classification scheme is developed to better take advantage of the multiple dictionaries that have been trained. The effectiveness of the proposed algorithm has been evaluated on popular visual benchmarks.", "title": "" }, { "docid": "fc25adc42c7e4267a9adfe13ddcabf75", "text": "As automotive electronics have increased, models for predicting the transmission characteristics of wiring harnesses, suitable for the automotive EMC tests, are needed. In this paper, the repetitive structures of the cross-sectional shape of the twisted pair cable is focused on. By taking account of RLGC parameters, a theoretical analysis modeling for whole cables, based on multi-conductor transmission line theory, is proposed. Furthermore, the theoretical values are compared with measured values and a full-wave simulator. In case that a twisted pitch, a length of the cable, and a height of reference ground plane are changed, the validity of the proposed model is confirmed.", "title": "" } ]
scidocsrr
7b25c401a85ee8722811b60d0ad7cdee
Skinning mesh animations
[ { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" } ]
[ { "docid": "281c64b492a1aff7707dbbb5128799c8", "text": "Internet business models have been widely discussed in literature and applied within the last decade. Nevertheless, a clear understanding of some e-commerce concepts does not exist yet. The classification of business models in e-commerce is one of these areas. The current research tries to fill this gap through a conceptual and qualitative study. Nine main e-commerce business model types are selected from literature and analyzed to define the criteria and their sub-criteria (characteristics). As a result three different classifications for business models are determined. This study can be used to improve the understanding of essential functions, relations and mechanisms of existing e-commerce business models.", "title": "" }, { "docid": "030c8aeb4e365bfd2fdab710f8c9f598", "text": "By combining linear graph theory with the principle of virtual work, a dynamic formulation is obtained that extends graph-theoretic modelling methods to the analysis of exible multibody systems. The system is represented by a linear graph, in which nodes represent reference frames on rigid and exible bodies, and edges represent components that connect these frames. By selecting a spanning tree for the graph, the analyst can choose the set of coordinates appearing in the nal system of equations. This set can include absolute, joint, or elastic coordinates, or some combination thereof. If desired, all non-working constraint forces and torques can be automatically eliminated from the dynamic equations by exploiting the properties of virtual work. The formulation has been implemented in a computer program, DynaFlex, that generates the equations of motion in symbolic form. Three examples are presented to demonstrate the application of the formulation, and to validate the symbolic computer implementation.", "title": "" }, { "docid": "3c778c71f621b2c887dc81e7a919058e", "text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. 
We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.", "title": "" }, { "docid": "540a6dd82c7764eedf99608359776e66", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "22ef70869ce47993bbdf24b18b6988f5", "text": "Recent results suggest that it is possible to grasp a variety of singulated objects with high precision using Convolutional Neural Networks (CNNs) trained on synthetic data. This paper considers the task of bin picking, where multiple objects are randomly arranged in a heap and the objective is to sequentially grasp and transport each into a packing box. We model bin picking with a discrete-time Partially Observable Markov Decision Process that specifies states of the heap, point cloud observations, and rewards. We collect synthetic demonstrations of bin picking from an algorithmic supervisor uses full state information to optimize for the most robust collision-free grasp in a forward simulator based on pybullet to model dynamic object-object interactions and robust wrench space analysis from the Dexterity Network (Dex-Net) to model quasi-static contact between the gripper and object. We learn a policy by fine-tuning a Grasp Quality CNN on Dex-Net 2.1 to classify the supervisor’s actions from a dataset of 10,000 rollouts of the supervisor in the simulator with noise injection. In 2,192 physical trials of bin picking with an ABB YuMi on a dataset of 50 novel objects, we find that the resulting policies can achieve 94% success rate and 96% average precision (very few false positives) on heaps of 5-10 objects and can clear heaps of 10 objects in under three minutes. Datasets, experiments, and supplemental material are available at http://berkeleyautomation.github.io/dex-net.", "title": "" }, { "docid": "6dbaeff4f3cb814a47e8dc94c4660d33", "text": "An Intrusion Detection System (IDS) is a software that monitors a single or a network of computers for malicious activities (attacks) that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today’s IDS are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. 
Hence, efficient adaptive methods like various techniques of machine learning can result in higher detection rates, lower false alarm rates and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). We explain how various characteristics of CI techniques can be used to build efficient IDS.", "title": "" }, { "docid": "7f3c6e8f0915160bbc9feba4d2175fb3", "text": "Memory leaks are major problems in all kinds of applications, depleting their performance, even if they run on platforms with automatic memory management, such as Java Virtual Machine. In addition, memory leaks contribute to software aging, increasing the complexity of software maintenance. So far memory leak detection was considered to be a part of development process, rather than part of software maintenance. To detect slow memory leaks as a part of quality assurance process or in production environments statistical approach for memory leak detection was implemented and deployed in a commercial tool called Plumbr. It showed promising results in terms of leak detection precision and recall, however, even better detection quality was desired. To achieve this improvement goal, classification algorithms were applied to the statistical data, which was gathered from customer environments where Plumbr was deployed. This paper presents the challenges which had to be solved, method that was used to generate features for supervised learning and the results of the corresponding experiments.", "title": "" }, { "docid": "23129bd3b502cd06e347b90f5a1516bc", "text": "ISSN 2277 5080 | © 2012 Bonfring Abstract--This paper discusses DSP based implementation of Gaussian Minimum Shift Keying (GMSK) demodulator using Polarity type Costas loop. The demodulator consists of a Polarity type Costas loop for carrier recovery, data recovery, and phase detection. Carrier has been recovered using a loop of center-frequency locking scheme as in M-ary Phase Shift Keying (MPSK) Polarity type Costas-loop. Phase unwrapping and Bit-Reconstruction is presented in detail. All the modules are first modeled in MATLAB (Simulink) and Systemview. After bit true simulation, the design is coded in VHDL and code simulation is done using QuestaSim 6.3c. The design is targeted to Virtex-4 XC4VSX35-10FF668 Xilinx FPGA (Field programmable gate array) for real time testing, which is carried out on Xtreme DSP development platform.", "title": "" }, { "docid": "643e97c3bc0cdde54bf95720fe52f776", "text": "Ego-motion estimation based on images from a stereo camera has become a common function for autonomous mobile systems and is gaining increasing importance in the automotive sector. Unlike general robotic platforms, vehicles have a suspension adding degrees of freedom and thus complexity to their dynamics model. Some parameters of the model, such as the vehicle mass, are non-static as they depend on e.g. the specific load conditions and thus need to be estimated online to guarantee a concise and safe autonomous maneuvering of the vehicle. In this paper, a novel visual odometry based approach to simultaneously estimate ego-motion and selected vehicle parameters using a dual Ensemble Kalman Filter and a non-linear single-track model with pitch dynamics is presented. 
The algorithm has been validated using simulated data and showed a good performance for both the estimation of the ego-motion and of the relevant vehicle parameters.", "title": "" }, { "docid": "9e0cbbe8d95298313fd929a7eb2bfea9", "text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.", "title": "" }, { "docid": "63602b90688ddb0e8ba691702cbdaab8", "text": "This paper presents a 50-d.o.f. humanoid robot, Computational Brain (CB). CB is a humanoid robot created for exploring the underlying processing of the human brain while dealing with the real world. We place our investigations within real-world contexts, as humans do. In so doing, we focus on utilizing a system that is closer to humans in sensing, kinematics configuration and performance. We present the real-time network-based architecture for the control of all 50 d.o.f. The controller provides full position/velocity/force sensing and control at 1 kHz, allowing us the flexibility in deriving various forms of control. A dynamic simulator is also presented; the simulator acts as a realistic testbed for our controllers and acts as a common interface to our humanoid robots. A contact model developed to allow better validation of our controllers prior to final testing on the physical robot is also presented. Three aspects of the system are highlighted in this paper: (i) physical power for walking, (ii) full-body compliant control (physical interactions) and (iii) perception and control (visual ocular-motor responses).", "title": "" }, { "docid": "23d2349831a364e6b77e3c263a8321c8", "text": "Almost a decade has passed since we started advocating a process of usability design [20-22]. This article is a status report about the value of this process and, mainly, a description of new ideas for enhancing the use of the process. We first note that, when followed, the process leads to usable, useful, likeable computer systems and applications. Nevertheless, experience and observational evidence show that (because of the way development work is organized and carried out) the process is often not followed, despite designers' enthusiasm and motivation to do so. To get around these organizational and technical obstacles, we propose a) greater reliance on existing methodologies for establishing testable usability and productivity-enhancing goals; b) a new method for identifying and focusing attention on long-term trends about the effects that computer applications have on end-user productivity; and c) a new approach, now under way, to application development, particularly the development of user interfaces. The process consists of four activities [18, 20-22]. Early Focus On Users. Designers should have direct contact with intended or actual users, via interviews, observations, surveys, and participatory design. The aim is to understand users' cognitive, behavioral, attitudinal, and anthropometric characteristics, and the characteristics of the jobs they will be doing.
Integrated Design. All aspects of usability (e.g., user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management. Early and Continual User Testing. The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior, careful evaluation of feedback, insightful solutions to existing problems, and strong motivation to make design changes. Iterative Design. A system under development must be modified based upon the results of behavioral tests of functions, user interface, help system, documentation, and training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system. We, and others proposing similar ideas (see below), have worked hard at spreading this process of usability design. We have used numerous channels to accomplish this: frequent talks, workshops, seminars, publications, consulting, addressing arguments used against it [22], conducting a direct case study of the process [20], and identifying methods for people not fully trained as human factors professionals to use in carrying out this process [18]. The Process Works. Several lines of evidence indicate that this usability design process leads to systems, applications, and products …", "title": "" }, { "docid": "111743197c23aff0fac0699a30edca23", "text": "Origami describes rules for creating folded structures from patterns on a flat sheet, but does not prescribe how patterns can be designed to fit target shapes. Here, starting from the simplest periodic origami pattern that yields one-degree-of-freedom collapsible structures, we show that scale-independent elementary geometric constructions and constrained optimization algorithms can be used to determine spatially modulated patterns that yield approximations to given surfaces of constant or varying curvature. Paper models confirm the feasibility of our calculations. We also assess the difficulty of realizing these geometric structures by quantifying the energetic barrier that separates the metastable flat and folded states. Moreover, we characterize the trade-off between the accuracy to which the pattern conforms to the target surface, and the effort associated with creating finer folds. Our approach enables the tailoring of origami patterns to drape complex surfaces independent of absolute scale, as well as the quantification of the energetic and material cost of doing so.", "title": "" }, { "docid": "3754b5c86e0032382f144ded5f1ca4d8", "text": "Use and users have an important and acknowledged role to most designers of interactive systems. Nevertheless, any touch of user hands does not in itself secure development of meaningful artifacts. In this article we stress the need for a professional PD practice in order to yield the full potentiality of user involvement. We suggest two constituting elements of such a professional PD practice: the existence of a shared 'where-to' and 'why' artifact, and an ongoing reflection and off-loop reflection among practitioners in the PD process.", "title": "" }, { "docid": "a5a53221aa9ccda3258223b9ed4e2110", "text": "Accurate and reliable inventory forecasting can save an organization from overstock, under-stock and no stock/stock-out situation of inventory.
Overstocking leads to high cost of storage and its maintenance, whereas under-stocking leads to failure to meet the demand and losing profit and customers, similarly stock-out leads to complete halt of production or sale activities. Inventory transactions generate data, which is a time-series data having characteristic volume, speed, range and regularity. The inventory level of an item depends on many factors namely, current stock, stock-on-order, lead-time, annual/monthly target. In this paper, we present a perspective of treating Inventory management as a problem of Genetic Programming based on inventory transactions data. A Genetic Programming — Symbolic Regression (GP-SR) based mathematical model is developed and subsequently used to make forecasts using Holt-Winters Exponential Smoothing method for time-series modeling. The GP-SR model evolves based on RMSE as the fitness function. The performance of the model is measured in terms of RMSE and MAE. The estimated values of item demand from the GP-SR model is finally used to simulate a time-series and forecasts are generated for inventory required on a monthly time horizon.", "title": "" }, { "docid": "69e0179971396fcaf09c9507735a8d5b", "text": "In this paper, we describe a statistical approach to both an articulatory-to-acoustic mapping and an acoustic-to-articulatory inversion mapping without using phonetic information. The joint probability density of an articulatory parameter and an acoustic parameter is modeled using a Gaussian mixture model (GMM) based on a parallel acoustic-articulatory speech database. We apply the GMM-based mapping using the minimum mean-square error (MMSE) criterion, which has been proposed for voice conversion, to the two mappings. Moreover, to improve the mapping performance, we apply maximum likelihood estimation (MLE) to the GMM-based mapping method. The determination of a target parameter trajectory having appropriate static and dynamic properties is obtained by imposing an explicit relationship between static and dynamic features in the MLE-based mapping. Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.", "title": "" }, { "docid": "490dc6ee9efd084ecf2496b72893a39a", "text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. 
It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.", "title": "" }, { "docid": "9cc2dfde38bed5e767857b1794d987bc", "text": "Smartphones providing proprietary encryption schemes, albeit offering a novel paradigm to privacy, are becoming a bone of contention for certain sovereignties. These sovereignties have raised concerns about their security agencies not having any control on the encrypted data leaving their jurisdiction and the ensuing possibility of it being misused by people with malicious intents. Such smartphones have typically two types of customers, independent users who use it to access public mail servers and corporates/enterprises whose employees use it to access corporate emails in an encrypted form. The threat issues raised by security agencies concern mainly the enterprise servers where the encrypted data leaves the jurisdiction of the respective sovereignty while on its way to the global smartphone router. In this paper, we have analyzed such email message transfer mechanisms in smartphones and proposed some feasible solutions, which, if accepted and implemented by entities involved, can lead to a possible win-win situation for both the parties, viz., the smartphone provider who does not want to lose the customers and these sovereignties who can avoid the worry of encrypted data leaving their jurisdiction.", "title": "" }, { "docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d", "text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.", "title": "" }, { "docid": "c8fdcfa08aff6286a02b984cc5f716b2", "text": "As interest in adopting Cloud computing for various applications is rapidly growing, it is important to understand how these applications and systems will perform when deployed on Clouds. Due to the scale and complexity of shared resources, it is often hard to analyze the performance of new scheduling and provisioning algorithms on actual Cloud test beds. Therefore, simulation tools are becoming more and more important in the evaluation of the Cloud computing model. Simulation tools allow researchers to rapidly evaluate the efficiency, performance and reliability of their new algorithms on a large heterogeneous Cloud infrastructure. However, current solutions lack either advanced application models such as message passing applications and workflows or scalable network model of data center. To fill this gap, we have extended a popular Cloud simulator (CloudSim) with a scalable network and generalized application model, which allows more accurate evaluation of scheduling and resource provisioning policies to optimize the performance of a Cloud infrastructure.", "title": "" } ]
scidocsrr
b4fdf378ed0e152b0ad8c7e77967f38f
Towards intelligent lower limb wearable robots: Challenges and perspectives - State of the art
[ { "docid": "b2199b7be543f0f287e0cbdb7a477843", "text": "We developed a pneumatically powered orthosis for the human ankle joint. The orthosis consisted of a carbon fiber shell, hinge joint, and two artificial pneumatic muscles. One artificial pneumatic muscle provided plantar flexion torque and the second one provided dorsiflexion torque. Computer software adjusted air pressure in each artificial muscle independently so that artificial muscle force was proportional to rectified low-pass-filtered electromyography (EMG) amplitude (i.e., proportional myoelectric control). Tibialis anterior EMG activated the artificial dorsiflexor and soleus EMG activated the artificial plantar flexor. We collected joint kinematic and artificial muscle force data as one healthy participant walked on a treadmill with the orthosis. Peak plantar flexor torque provided by the orthosis was 70 Nm, and peak dorsiflexor torque provided by the orthosis was 38 Nm. The orthosis could be useful for basic science studies on human locomotion or possibly for gait rehabilitation after neurological injury.", "title": "" }, { "docid": "69b1c87a06b1d83fd00d9764cdadc2e9", "text": "Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental", "title": "" } ]
[ { "docid": "38c78be386aa3827f39825f9e40aa3cc", "text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.", "title": "" }, { "docid": "88077fe7ce2ad4a3c3052a988f9f96c1", "text": "When collecting patient-level resource use data for statistical analysis, for some patients and in some categories of resource use, the required count will not be observed. Although this problem must arise in most reported economic evaluations containing patient-level data, it is rare for authors to detail how the problem was overcome. Statistical packages may default to handling missing data through a so-called 'complete case analysis', while some recent cost-analyses have appeared to favour an 'available case' approach. Both of these methods are problematic: complete case analysis is inefficient and is likely to be biased; available case analysis, by employing different numbers of observations for each resource use item, generates severe problems for standard statistical inference. Instead we explore imputation methods for generating 'replacement' values for missing data that will permit complete case analysis using the whole data set and we illustrate these methods using two data sets that had incomplete resource use information.", "title": "" }, { "docid": "80de1fba41f93953ea21a517065f8ca8", "text": "This paper presents the kinematic calibration of a novel 7-degree-of-freedom (DOF) cable-driven robotic arm (CDRA), aimed at improving its absolute positioning accuracy. This CDRA consists of three 'self-calibrated' cable-driven parallel mechanism (CDPM) modules. In order to account for any kinematic errors that might arise when assembling the individual CDPMs, a calibration model is formulated based on the local product-of-exponential formula and the measurement residues in the tool-tip frame poses. An iterative least-squares algorithm is employed to identify the errors in the fixed transformation frames of the sequentially assembled 'self- calibrated' CDPM modules. Both computer simulations and experimental studies were carried out to verify the robustness and effectiveness of the proposed calibration algorithm. From the experimental studies, errors in the fixed kinematic transformation frames were precisely recovered after a minimum of 15 pose measurements.", "title": "" }, { "docid": "8bed049baa03a11867b0205e16402d0e", "text": "The paper investigates potential bias in awards of player disciplinary sanctions, in the form of cautions (yellow cards) and dismissals (red cards) by referees in the English Premier League and the German Bundesliga. 
Previous studies of behaviour of soccer referees have not adequately incorporated within-game information.Descriptive statistics from our samples clearly show that home teams receive fewer yellow and red cards than away teams. These differences may be wrongly interpreted as evidence of bias where the modeller has failed to include withingame events such as goals scored and recent cards issued.What appears as referee favouritism may actually be excessive and illegal aggressive behaviour by players in teams that are behind in score. We deal with these issues by using a minute-by-minute bivariate probit analysis of yellow and red cards issued in games over six seasons in the two leagues. The significance of a variable to denote the difference in score at the time of sanction suggests that foul play that is induced by a losing position is an important influence on the award of yellow and red cards. Controlling for various pre-game and within-game variables, we find evidence that is indicative of home team favouritism induced by crowd pressure: in Germany home teams with running tracks in their stadia attract more yellow and red cards than teams playing in stadia with less distance between the crowd and the pitch. Separating the competing teams in matches by favourite and underdog status, as perceived by the betting market, yields further evidence, this time for both leagues, that the source of home teams receiving fewer cards is not just that they are disproportionately often the favoured team and disproportionately ahead in score.Thus there is evidence that is consistent with pure referee bias in relative treatments of home and away teams.", "title": "" }, { "docid": "e754c7c7821703ad298d591a3f7a3105", "text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. 
The results show that the proposed system is more scalable and efficient.", "title": "" }, { "docid": "96055f0e41d62dc0ef318772fa6d6d9f", "text": "Building Information Modeling (BIM) has rapidly grown from merely being a three-dimensional (3D) model of a facility to serving as “a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward” [1]. BIM with three primary spatial dimensions (width, height, and depth) becomes 4D BIM when time (construction scheduling information) is added, and 5D BIM when cost information is added to it. Although the sixth dimension of the 6D BIM is often attributed to asset information useful for Facility Management (FM) processes, there is no agreement in the research literature on what each dimension represents beyond the fifth dimension [2]. BIM ultimately seeks to digitize the different stages of a building lifecycle such as planning, design, construction, and operation such that consistent digital information of a building project can be used by stakeholders throughout the building life-cycle [3]. The United States National Building Information Model Standard (NBIMS) initially characterized BIMs as digital representations of physical and functional aspects of a facility. But, in the most recent version released in July 2015, the NBIMS’ definition of BIM includes three separate but linked functions, namely business process, digital representation, and organization and control [4]. A number of national-level initiatives are underway in various countries to formally encourage the adoption of BIM technologies in the Architecture, Engineering, and Construction (AEC) and FM industries. Building SMART, with 18 chapters across the globe, including USA, UK, Australasia, etc., was established in 1995 with the aim of developing and driving the active use of open internationally-recognized standards to support the wider adoption of BIM across the building and infrastructure sectors [5]. The UK BIM Task Group, with experts from industry, government, public sector, institutes, and academia, is committed to facilitate the implementation of ‘collaborative 3D BIM’, a UK Government Construction Strategy initiative [6]. Similarly, the EUBIM Task Group was started with a vision to foster the common use of BIM in public works and produce a handbook containing the common BIM principles, guidance and practices for public contracting entities and policy makers [7].", "title": "" }, { "docid": "13cfc33bd8611b3baaa9be37ea9d627e", "text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.", "title": "" }, { "docid": "f0d3ab8a530d7634149a5c29fa8bfe1b", "text": "In this paper, a novel broadband dual-polarized (slant ±45°) base station antenna element operating at 790–960 MHz is proposed. 
The antenna element consists of two pairs of symmetrical dipoles, four couples of baluns, a cricoid pedestal and two kinds of plastic fasteners. Specific shape metal reflector is also designed to achieve stable radiation pattern and high front-to-back ratio (FBR). All the simulated and measured results show that the proposed antenna element has wide impedance bandwidth (about 19.4%), low voltage standing wave ratio (VSWR < 1.4) and high port to port isolation (S21 < −25 dB) at the whole operating frequency band. Stable horizontal half-power beam width (HPBW) with 65°±4.83° and high gain (> 9.66 dBi) are also achieved. The proposed antenna element fabricated by integrated metal casting technology has great mechanical properties such as compact structure, low profile, good stability, light weight and easy to fabricate. Due to its good electrical and mechanical characteristics, the antenna element is suitable for European Digital Dividend, CDMA800 and GSM900 bands in base station antenna of modern mobile communication.", "title": "" }, { "docid": "60d6869cadebea71ef549bb2a7d7e5c3", "text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.", "title": "" }, { "docid": "d9123053892ce671665a3a4a1694a57c", "text": "Visual perceptual learning (VPL) is defined as a long-term improvement in performance on a visual task. In recent years, the idea that conscious effort is necessary for VPL to occur has been challenged by research suggesting the involvement of more implicit processing mechanisms, such as reinforcement-driven processing and consolidation. 
In addition, we have learnt much about the neural substrates of VPL and it has become evident that changes in visual areas and regions beyond the visual cortex can take place during VPL.", "title": "" }, { "docid": "7677b67bd95f05c2e4c87022c3caa938", "text": "The semi-supervised learning usually only predict labels for unlabeled data appearing in training data, and cannot effectively predict labels for testing data never appearing in training set. To handle this outof-sample problem, many inductive methods make a constraint such that the predicted label matrix should be exactly equal to a linear model. In practice, this constraint is too rigid to capture the manifold structure of data. Motivated by this deficiency, we relax the rigid linear embedding constraint and propose to use an elastic embedding constraint on the predicted label matrix such that the manifold structure can be better explored. To solve our new objective and also a more general optimization problem, we study a novel adaptive loss with efficient optimization algorithm. Our new adaptive loss minimization method takes the advantages of both L1 norm and L2 norm, and is robust to the data outlier under Laplacian distribution and can efficiently learn the normal data under Gaussian distribution. Experiments have been performed on image classification tasks and our approach outperforms other state-of-the-art methods.", "title": "" }, { "docid": "3646b64ac400c12f9c9c4f8ba4f53591", "text": "Cerebral organoids recapitulate human brain development at a considerable level of detail, even in the absence of externally added signaling factors. The patterning events driving this self-organization are currently unknown. Here, we examine the developmental and differentiative capacity of cerebral organoids. Focusing on forebrain regions, we demonstrate the presence of a variety of discrete ventral and dorsal regions. Clearing and subsequent 3D reconstruction of entire organoids reveal that many of these regions are interconnected, suggesting that the entire range of dorso-ventral identities can be generated within continuous neuroepithelia. Consistent with this, we demonstrate the presence of forebrain organizing centers that express secreted growth factors, which may be involved in dorso-ventral patterning within organoids. Furthermore, we demonstrate the timed generation of neurons with mature morphologies, as well as the subsequent generation of astrocytes and oligodendrocytes. Our work provides the methodology and quality criteria for phenotypic analysis of brain organoids and shows that the spatial and temporal patterning events governing human brain development can be recapitulated in vitro.", "title": "" }, { "docid": "4db29a3fd1f1101c3949d3270b15ef07", "text": "Human goal-directed action emerges from the interaction between stimulus-driven sensorimotor online systems and slower-working control systems that relate highly processed perceptual information to the construction of goal-related action plans. This distribution of labor requires the acquisition of enduring action representations; that is, of memory traces which capture the main characteristics of successful actions and their consequences. It is argued here that these traces provide the building blocks for off-line prospective action planning, which renders the search through stored action representations an essential part of action control. 
Hence, action planning requires cognitive search (through possible options) and might have led to the evolution of cognitive search routines that humans have learned to employ for other purposes as well, such as searching for perceptual events and through memory. Thus, what is commonly considered to represent different types of search operations may all have evolved from action planning and share the same characteristics. Evidence is discussed which suggests that all types of cognitive search—be it in searching for perceptual events, for suitable actions, or through memory—share the characteristic of following a fixed sequence of cognitive operations: divergent search followed by convergent search.", "title": "" }, { "docid": "7c295cb178e58298b1f60f5a829118fd", "text": "A dual-band 0.92/2.45 GHz circularly-polarized (CP) unidirectional antenna using the wideband dual-feed network, two orthogonally positioned asymmetric H-shape slots, and two stacked concentric annular-ring patches is proposed for RF identification (RFID) applications. The measurement result shows that the antenna achieves the impedance bandwidths of 15.4% and 41.9%, the 3-dB axial-ratio (AR) bandwidths of 4.3% and 21.5%, and peak gains of 7.2 dBic and 8.2 dBic at 0.92 and 2.45 GHz bands, respectively. Moreover, the antenna provides stable symmetrical radiation patterns and wide-angle 3-dB AR beamwidths in both lower and higher bands for unidirectional wide-coverage RFID reader applications. Above all, the dual-band CP unidirectional patch antenna presented is beneficial to dual-band RFID system on configuration, implementation, as well as cost reduction.", "title": "" }, { "docid": "ba4d30e7ea09d84f8f7d96c426e50f34", "text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.", "title": "" }, { "docid": "c695f74a41412606e31c771ec9d2b6d3", "text": "Osteochondrosis dissecans (OCD) is a form of osteochondrosis limited to the articular epiphysis. The most commonly affected areas include, in decreasing order of frequency, the femoral condyles, talar dome and capitellum of the humerus. OCD rarely occurs in the shoulder joint, where it involves either the humeral head or the glenoid. 
The purpose of this report is to present a case with glenoid cavity osteochondritis dissecans and clinical and radiological outcome after arthroscopic debridement. The patient underwent arthroscopy to remove the loose body and to microfracture the cavity. The patient was followed-up for 4 years and she is pain-free with full range of motion and a stable shoulder joint.", "title": "" }, { "docid": "678ef706d4cb1c35f6b3d82bf25a4aa7", "text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.", "title": "" }, { "docid": "db190bb0cf83071b6e19c43201f92610", "text": "In this paper, a MATLAB based simulation of Grid connected PV system is presented. The main components of this simulation are PV solar panel, Boost converter; Maximum Power Point Tracking System (MPPT) and Grid Connected PV inverter with closed loop control system is designed and simulated. A simulation studies is carried out in different solar radiation level.", "title": "" }, { "docid": "ac156d7b3069ff62264bd704b7b8dfc9", "text": "Rynes, Colbert, and Brown (2002) presented the following statement to 959 members of the Society for Human Resource Management (SHRM): “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions” (p. 158). If our interpretation (and that of Rynes et al.) of the research literature is accurate, then the correct true-false answer to the above statement is “false.” In other words, people are more likely to underreport than to overreport the importance of pay as a motivational factor in most situations. Put another way, research suggests that pay is much more important in people’s actual choices and behaviors than it is in their self-reports of what motivates them, much like the cartoon viewers mentioned in the quote above. Yet, only 35% of the respondents in the Rynes et al. study answered in a way consistent with research findings (i.e., chose “false”). Our objective in this article is to show that employee surveys regarding the importance of various factors in motivation generally produce results that are inconsistent with studies of actual employee behavior. In particular, we focus on well-documented findings that employees tend to say that pay THE IMPORTANCE OF PAY IN EMPLOYEE MOTIVATION: DISCREPANCIES BETWEEN WHAT PEOPLE SAY AND WHAT THEY DO", "title": "" }, { "docid": "5008ecf234a3449f524491de04b7868c", "text": "Cross-domain recommendations are currently available in closed, proprietary social networking ecosystems such as Facebook, Twitter and Google+. I propose an open framework as an alternative, which enables cross-domain recommendations with domain-agnostic user profiles modeled as semantic interest graphs. This novel framework covers all parts of a recommender system. It includes an architecture for privacy-enabled profile exchange, a distributed and domain-agnostic user model and a cross-domain recommendation algorithm. 
This enables users to receive recommendations for a target domain (e.g. food) based on any kind of previous interests.", "title": "" } ]
scidocsrr
5f7cb537da11a86fcd3b211ca8da75bb
Toward parallel continuum manipulators
[ { "docid": "f80f1952c5b58185b261d53ba9830c47", "text": "This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on ldquoguidingrdquo environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.", "title": "" }, { "docid": "be749e59367ee1033477bb88503032cf", "text": "This paper describes the results of field trials and associated testing of the OctArm series of multi-section continuous backbone \"continuum\" robots. This novel series of manipulators has recently (Spring 2005) undergone a series of trials including open-air and in-water field tests. Outcomes of the trials, in which the manipulators demonstrated the ability for adaptive and novel manipulation in challenging environments, are described. Implications for the deployment of continuum robots in a variety of applications are discussed", "title": "" }, { "docid": "8bb465b2ec1f751b235992a79c6f7bf1", "text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.", "title": "" } ]
[ { "docid": "d157d7b6e1c5796b6d7e8fedf66e81d8", "text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.", "title": "" }, { "docid": "b55eb410f2a2c7eb6be1c70146cca203", "text": "Permissioned blockchains are arising as a solution to federate companies prompting accountable interactions. A variety of consensus algorithms for such blockchains have been proposed, each of which has different benefits and drawbacks. Proof-of-Authority (PoA) is a new family of Byzantine fault-tolerant (BFT) consensus algorithms largely used in practice to ensure better performance than traditional Practical Byzantine Fault Tolerance (PBFT). However, the lack of adequate analysis of PoA hinders any cautious evaluation of their effectiveness in real-world permissioned blockchains deployed over the Internet, hence on an eventually synchronous network experimenting Byzantine nodes. In this paper, we analyse two of the main PoA algorithms, named Aura and Clique, both in terms of provided guarantees and performances. First, we derive their functioning including how messages are exchanged, then we weight, by relying on the CAP theorem, consistency, availability and partition tolerance guarantees. We also report a qualitative latency analysis based on message rounds. The analysis advocates that PoA for permissioned blockchains, deployed over the Internet with Byzantine nodes, do not provide adequate consistency guarantees for scenarios where data integrity is essential. We claim that PBFT can fit better such scenarios, despite a limited loss in terms of performance.", "title": "" }, { "docid": "969a8e447fb70d22a7cbabe7fc47a9c9", "text": "A novel multi-level AC six-phase motor drive is developed in this paper. 
The scheme is based on three conventional 2-level three-phase voltage source inverters (VSIs) supplying the open-end windings of a dual three-phase motor (six-phase induction machine). The proposed inverter is capable of supply the machine with multi-level voltage waveforms. The developed system is compared with the conventional solution and it is demonstrated that the drive system permits to reduce the harmonic distortion of the machine currents, to reduce the total semiconductor losses and to decrease the power processed by converter switches. The system model and the Pulse-Width Modulation (PWM) strategy are presented. The experimental verification was obtained by using IGBTs with dedicated drives and a digital signal processor (DSP) with plug-in boards and sensors.", "title": "" }, { "docid": "97412a2a6e6d91fef2c75b62aca5b6f4", "text": "Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed.", "title": "" }, { "docid": "dd4cc15729f65a0102028949b34cc56f", "text": "Autonomous vehicles platooning has received considerable attention in recent years, due to its potential to significantly benefit road transportation, improving traffic efficiency, enhancing road safety and reducing fuel consumption. The Vehicular ad hoc Networks and the de facto vehicular networking standard IEEE 802.11p communication protocol are key tools for the deployment of platooning applications, since the cooperation among vehicles is based on a reliable communication structure. However, vehicular networks can suffer different security threats. Indeed, in collaborative driving applications, the sudden appearance of a malicious attack can mainly compromise: (i) the correctness of data traffic flow on the vehicular network by sending malicious messages that alter the platoon formation and its coordinated motion; (ii) the safety of platooning application by altering vehicular network communication capability. In view of the fact that cyber attacks can lead to dangerous implications for the security of autonomous driving systems, it is fundamental to consider their effects on the behavior of the interconnected vehicles, and to try to limit them from the control design stage. To this aim, in this work we focus on some relevant types of malicious threats that affect the platoon safety, i.e. application layer attacks (Spoofing and Message Falsification) and network layer attacks (Denial of Service and Burst Transmission), and we propose a novel collaborative control strategy for enhancing the protection level of autonomous platoons. The control protocol is designed and validated in both analytically and experimental way, for the appraised malicious attack scenarios and for different communication topology structures. 
The effectiveness of the proposed strategy is shown by using PLEXE, a state of the art inter-vehicular communications and mobility simulator that includes basic building blocks for platooning. A detailed experimental analysis discloses the robustness of the proposed approach and its capabilities in reacting to the malicious attack effects.", "title": "" }, { "docid": "25ed874d2bf1125b5539d595319d334b", "text": "The notion of creativity, as opposed to related concepts such as beauty or interestingness, has not been studied from the perspective of automatic analysis of multimedia content. Meanwhile, short online videos shared on social media platforms, or micro-videos, have arisen as a new medium for creative expression. In this paper we study creative micro-videos in an effort to understand the features that make a video creative, and to address the problem of automatic detection of creative content. Defining creative videos as those that are novel and have aesthetic value, we conduct a crowdsourcing experiment to create a dataset of over 3, 800 micro-videos labelled as creative and non-creative. We propose a set of computational features that we map to the components of our definition of creativity, and conduct an analysis to determine which of these features correlate most with creative video. Finally, we evaluate a supervised approach to automatically detect creative video, with promising results, showing that it is necessary to model both aesthetic value and novelty to achieve optimal classification accuracy.", "title": "" }, { "docid": "5de19873c4bd67cdcc57d879d923dc10", "text": "BACKGROUND AND PURPOSE\nNeuromyelitis optica (NMO) or Devic's disease is a rare inflammatory and demyelinating autoimmune disorder of the central nervous system (CNS) characterized by recurrent attacks of optic neuritis (ON) and longitudinally extensive transverse myelitis (LETM), which is distinct from multiple sclerosis (MS). The guidelines are designed to provide guidance for best clinical practice based on the current state of clinical and scientific knowledge.\n\n\nSEARCH STRATEGY\nEvidence for this guideline was collected by searches for original articles, case reports and meta-analyses in the MEDLINE and Cochrane databases. In addition, clinical practice guidelines of professional neurological and rheumatological organizations were studied.\n\n\nRESULTS\nDifferent diagnostic criteria for NMO diagnosis [Wingerchuk et al. Revised NMO criteria, 2006 and Miller et al. National Multiple Sclerosis Society (NMSS) task force criteria, 2008] and features potentially indicative of NMO facilitate the diagnosis. In addition, guidance for the work-up and diagnosis of spatially limited NMO spectrum disorders is provided by the task force. Due to lack of studies fulfilling requirement for the highest levels of evidence, the task force suggests concepts for treatment of acute exacerbations and attack prevention based on expert opinion.\n\n\nCONCLUSIONS\nStudies on diagnosis and management of NMO fulfilling requirements for the highest levels of evidence (class I-III rating) are limited, and diagnostic and therapeutic concepts based on expert opinion and consensus of the task force members were assembled for this guideline.", "title": "" }, { "docid": "53a55e8aa8b3108cdc8d015eabb3476d", "text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. 
Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.", "title": "" }, { "docid": "79e2e4af34e8a2b89d9439ff83b9fd5a", "text": "PROBLEM\nThe current nursing workforce is composed of multigenerational staff members creating challenges and at times conflict for managers.\n\n\nMETHODS\nGenerational cohorts are defined and two multigenerational scenarios are presented and discussed using the ACORN imperatives and Hahn's Five Managerial Strategies for effectively managing a multigenerational staff.\n\n\nFINDINGS\nCommunication and respect are the underlying key strategies to understanding and bridging the generational gap in the workplace.\n\n\nCONCLUSION\nEmbracing and respecting generational differences can bring strength and cohesiveness to nursing teams on the managerial or unit level.", "title": "" }, { "docid": "1878b3e7742a0ffbd3da67be23c6e366", "text": "Compensation for geometrical spreading along a raypath is one of the key steps in AVO amplitude-variation-with-offset analysis, in particular, for wide-azimuth surveys. Here, we propose an efficient methodology to correct long-spread, wide-azimuth reflection data for geometrical spreading in stratified azimuthally anisotropic media. The P-wave geometrical-spreading factor is expressed through the reflection traveltime described by a nonhyperbolic moveout equation that has the same form as in VTI transversely isotropic with a vertical symmetry axis media. The adapted VTI equation is parameterized by the normal-moveout NMO ellipse and the azimuthally varying anellipticity parameter . To estimate the moveout parameters, we apply a 3D nonhyperbolic semblance algorithm of Vasconcelos and Tsvankin that operates simultaneously with traces at all offsets and", "title": "" }, { "docid": "ef372c1537c8eabb4595dc5385199575", "text": "This article provides a review of the traditional clinical concepts for the design and fabrication of removable partial dentures (RPDs). Although classic theories and rules for RPD designs have been presented and should be followed, excellent clinical care for partially edentulous patients may also be achieved with computer-aided design/computer-aided manufacturing technology and unique blended designs. 
These nontraditional RPD designs and fabrication methods provide for improved fit, function, and esthetics by using computer-aided design software, composite resin for contours and morphology of abutment teeth, metal support structures for long edentulous spans and collapsed occlusal vertical dimensions, and flexible, nylon thermoplastic material for metal-supported clasp assemblies.", "title": "" }, { "docid": "afdc8b3e00a4fe39b281e17056d97664", "text": "This demo presents the features of the Proactive Insights (PI) engine, which uses machine learning and artificial intelligence capabilities to automatically identify weaknesses in business processes, to reveal their root causes, and to give intelligent advice on how to improve process inefficiencies. We demonstrate the four PI elements covering Conformance, Machine Learning, Social, and Companion. The new insights are especially valuable for process managers and academics interested in BPM and process mining.", "title": "" }, { "docid": "df404258bca8d16cabf935fd94fc7463", "text": "Training deep neural networks with Stochastic Gradient Descent, or its variants, requires careful choice of both learning rate and batch size. While smaller batch sizes generally converge in fewer training epochs, larger batch sizes offer more parallelism and hence better computational efficiency. We have developed a new training approach that, rather than statically choosing a single batch size for all epochs, adaptively increases the batch size during the training process. Our method delivers the convergence rate of small batch sizes while achieving performance similar to large batch sizes. We analyse our approach using the standard AlexNet, ResNet, and VGG networks operating on the popular CIFAR10, CIFAR-100, and ImageNet datasets. Our results demonstrate that learning with adaptive batch sizes can improve performance by factors of up to 6.25 on 4 NVIDIA Tesla P100 GPUs while changing accuracy by less than 1% relative to training with fixed batch sizes.", "title": "" }, { "docid": "ed769b97bea6d4bbe7e282ad6dbb1c67", "text": "Three basic switching structures are defined: one is formed by two capacitors and three diodes; the other two are formed by two inductors and two diodes. They are inserted in either a Cuk converter, or a Sepic, or a Zeta converter. The SC/SL structures are built in such a way as when the active switch of the converter is on, the two inductors are charged in series or the two capacitors are discharged in parallel. When the active switch is off, the two inductors are discharged in parallel or the two capacitors are charged in series. As a result, the line voltage is reduced more times than in classical Cuk/Sepic/Zeta converters. The steady-state analysis of the new converters, a comparison of the DC voltage gain and of the voltage and current stresses of the new hybrid converters with those of the available quadratic converters, and experimental results are given", "title": "" }, { "docid": "b36e9a2f1143fa242c4d372cb0ba38b3", "text": "Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. 
We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide generalization bounds for learning with these features. We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel based approaches. The proposed method also outperforms deep CNN on RotatedMNIST and performs comparably to the recently proposed group-equivariant CNN.", "title": "" }, { "docid": "daa30843c26d285b3b42cb588e4d0cd1", "text": "In this paper, we rigorously study tractable models for provably recovering low-rank tensors. Unlike their matrix-based predecessors, current convex approaches for recovering low-rank tensors based on incomplete (tensor completion) and/or grossly corrupted (tensor robust principal analysis) observations still suffer from the lack of theoretical guarantees, although they have been used in various recent applications and have exhibited promising empirical performance. In this work, we attempt to fill this gap. Specifically, we propose a class of convex recovery models (including strongly convex programs) that can be proved to guarantee exact recovery under certain conditions. All parameters in our formulations can be determined beforehand based on the measurement data and thus there is no parameter tuning involved.", "title": "" }, { "docid": "49d5f6fdc02c777d42830bac36f6e7e2", "text": "Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through “guideposts.” A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the “strongest” instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at “nearby” guideposts by issuing “guidepost queries” containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.", "title": "" }, { "docid": "7ec93b17c88d09f8a442dd32127671d8", "text": "Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. 
To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.", "title": "" }, { "docid": "eebeb59c737839e82ecc20a748b12c6b", "text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.", "title": "" } ]
scidocsrr
cad6d5cdd67c96838b3f48470ebf28b1
Visual Query Language: Finding patterns in and relationships among time series data
[ { "docid": "44f41d363390f6f079f2e67067ffa36d", "text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢", "title": "" } ]
[ { "docid": "e7a260bfb238d8b4f147ac9c2a029d1d", "text": "The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-pro t purposes provided that: • a full bibliographic reference is made to the original source • a link is made to the metadata record in DRO • the full-text is not changed in any way The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.", "title": "" }, { "docid": "c46b0f8d340bd45c0b64c5d6cfd752a3", "text": "We propose a method for inferring the existence of a latent common cause (“confounder”) of two observed random variables. The method assumes that the two effects of the confounder are (possibly nonlinear) functions of the confounder plus independent, additive noise. We discuss under which conditions the model is identifiable (up to an arbitrary reparameterization of the confounder) from the joint distribution of the effects. We state and prove a theoretical result that provides evidence for the conjecture that the model is generically identifiable under suitable technical conditions. In addition, we propose a practical method to estimate the confounder from a finite i.i.d. sample of the effects and illustrate that the method works well on both simulated and real-world data.", "title": "" }, { "docid": "d0c4997c611d8759805d33cf1ad9eef1", "text": "The automatic evaluation of text-based assessment items, such as short answers or essays, is an open and important research challenge. In this paper, we compare several features for the classification of short open-ended responses to questions related to a large first-year health sciences course. These features include a) traditional n-gram models; b) entity URIs (Uniform Resource Identifier) and c) entity mentions extracted using a semantic annotation API; d) entity mention embeddings based on GloVe, and e) entity URI embeddings extracted from Wikipedia. These features are used in combination with classification algorithms to discriminate correct answers from incorrect ones. Our results show that, on average, n-gram features performed the best in terms of precision and entity mentions in terms of f1-score. Similarly, in terms of accuracy, entity mentions and n-gram features performed the best. Finally, features based on dense vector representations such as entity embeddings and mention embeddings obtained the best f1-score for predicting correct answers.", "title": "" }, { "docid": "284c7292bd7e79c5c907fc2aa21fb52c", "text": "Monte Carlo Tree Search (MCTS) is an AI technique that has been successfully applied to many deterministic games of perfect information, leading to large advances in a number of domains, such as Go and General Game Playing. Imperfect information games are less well studied in the field of AI despite being popular and of significant commercial interest, for example in the case of computer and mobile adaptations of turn based board and card games. This is largely because hidden information and uncertainty leads to a large increase in complexity compared to perfect information games. In this thesis MCTS is extended to games with hidden information and uncertainty through the introduction of the Information Set MCTS (ISMCTS) family of algorithms. It is demonstrated that ISMCTS can handle hidden information and uncertainty in a variety of complex board and card games. 
This is achieved whilst preserving the general applicability of MCTS and using computational budgets appropriate for use in a commercial game. The ISMCTS algorithm is shown to outperform the existing approach of Perfect Information Monte Carlo (PIMC) search. Additionally it is shown that ISMCTS can be used to solve two known issues with PIMC search, namely strategy fusion and non-locality. ISMCTS has been integrated into a commercial game, Spades by AI Factory, with over 2.5 million downloads. The Information Capture And ReUSe (ICARUS) framework is also introduced in this thesis. The ICARUS framework generalises MCTS enhancements in terms of information capture (from MCTS simulations) and reuse (to improve MCTS tree and simulation policies). The ICARUS framework is used to express existing enhancements, to provide a tool to design new ones, and to rigorously define how MCTS enhancements can be combined. The ICARUS framework is tested across a wide variety of games.", "title": "" }, { "docid": "7b4dd695182f7e15e58f44e309bf897c", "text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.", "title": "" }, { "docid": "0022623017e81ee0a102da0524c83932", "text": "Calcite is a new Eclipse plugin that helps address the difficulty of understanding and correctly using an API. Calcite finds the most popular ways to instantiate a given class or interface by using code examples. To allow the users to easily add these object instantiations to their code, Calcite adds items to the popup completion menu that will insert the appropriate code into the user’s program. Calcite also uses crowd sourcing to add to the menu instructions in the form of comments that help the user perform functions that people have identified as missing from the API. In a user study, Calcite improved users’ success rate by 40%.", "title": "" }, { "docid": "c253083ab44c842819059ad64781d51d", "text": "RGB-D data is getting ever more interest from the research community as both cheap cameras appear in the market and the applications of this type of data become more common. A current trend in processing image data is the use of convolutional neural networks (CNNs) that have consistently beat competition in most benchmark data sets. In this paper we investigate the possibility of transferring knowledge between CNNs when processing RGB-D data with the goal of both improving accuracy and reducing training time. 
We present experiments that show that our proposed approach can achieve both these goals.", "title": "" }, { "docid": "1aa7e7fe70bdcbc22b5d59b0605c34e9", "text": "Surgical tasks are complex multi-step sequences of smaller subtasks (often called surgemes) and it is useful to segment task demonstrations into meaningful subsequences for: (a) extracting finite-state machines for automation, (b) surgical training and skill assessment, and (c) task classification. Existing supervised methods for task segmentation use segment labels from a dictionary of motions to build classifiers. However, as the datasets become voluminous, the labeling becomes arduous and further, this method doesn't generalize to new tasks that don't use the same dictionary. We propose an unsupervised semantic task segmentation framework by learning “milestones”, ellipsoidal regions of the position and feature states at which a task transitions between motion regimes modeled as locally linear. Milestone learning uses a hierarchy of Dirichlet Process Mixture Models, learned through Expectation-Maximization, to cluster the transition points and optimize the number of clusters. It leverages transition information from kinematic state as well as environment state such as visual features. We also introduce a compaction step which removes repetitive segments that correspond to a mid-demonstration failure recovery by retrying an action. We evaluate Milestones Learning on three surgical subtasks: pattern cutting, suturing, and needle passing. Initial results suggest that our milestones qualitatively match manually annotated segmentation. While one-to-one correspondence of milestones with annotated data is not meaningful, the milestones recovered from our method have exactly one annotated surgeme transition in 74% (needle passing) and 66% (suturing) of total milestones, indicating a semantic match.", "title": "" }, { "docid": "d151881de9a0e1699e95db7bbebc032b", "text": "Despite the noticeable progress in perceptual tasks like detection, instance segmentation and human parsing, computers still perform unsatisfactorily on visually understanding humans in crowded scenes, such as group behavior analysis, person re-identification and autonomous driving, etc. To this end, models need to comprehensively perceive the semantic information and the differences between instances in a multi-human image, which is recently defined as the multi-human parsing task. In this paper, we present a new large-scale database “Multi-Human Parsing (MHP)” for algorithm development and evaluation, and advances the state-of-the-art in understanding humans in crowded scenes. MHP contains 25,403 elaborately annotated images with 58 fine-grained semantic category labels, involving 2-26 persons per image and captured in real-world scenes from various viewpoints, poses, occlusion, interactions and background. We further propose a novel deep Nested Adversarial Network (NAN) model for multi-human parsing. NAN consists of three Generative Adversarial Network (GAN)-like sub-nets, respectively performing semantic saliency prediction, instance-agnostic parsing and instance-aware clustering. These sub-nets form a nested structure and are carefully designed to learn jointly in an end-to-end way. 
NAN consistently outperforms existing state-of-the-art solutions on our MHP and several other datasets, and serves as a strong baseline to drive the future research for multi-human parsing.", "title": "" }, { "docid": "9858386550b0193c079f1d7fe2b5b8b3", "text": "Objective This study examined the associations between household food security (access to sufficient, safe, and nutritious food) during infancy and attachment and mental proficiency in toddlerhood. Methods Data from a longitudinal nationally representative sample of infants and toddlers (n = 8944) from the Early Childhood Longitudinal Study—9-month (2001–2002) and 24-month (2003–2004) surveys were used. Structural equation modeling was used to examine the direct and indirect associations between food insecurity at 9 months, and attachment and mental proficiency at 24 months. Results Food insecurity worked indirectly through depression and parenting practices to influence security of attachment and mental proficiency in toddlerhood. Conclusions Social policies that address the adequacy and predictability of food supplies in families with infants have the potential to affect parental depression and parenting behavior, and thereby attachment and cognitive development at very early ages.", "title": "" }, { "docid": "ba3bf5f03e44e29a657d8035bb00535c", "text": "Due to the broadcast nature of WiFi communication anyone with suitable hardware is able to monitor surrounding traffic. However, a WiFi device is able to listen to only one channel at any given time. The simple solution for capturing traffic across multiple channels involves channel hopping, which as a side effect reduces dwell time per channel. Hence monitoring with channel hopping does not produce a comprehensive view of the traffic across all channels at a given time.\n In this paper we present an inexpensive multi-channel WiFi capturing system (dubbed the wireless shark\") and evaluate its performance in terms of traffic cap- turing efficiency. Our results confirm and quantify the intuition that the performance is directly related to the number of WiFi adapters being used for listening. As a second contribution of the paper we use the wireless shark to observe the behavior of 14 different mobile devices, both in controlled and normal office environments. In our measurements, we focus on the probe traffic that the devices send when they attempt to discover available WiFi networks. Our results expose some distinct characteristics in various mobile devices' probing behavior.", "title": "" }, { "docid": "d71c2f3d1a10b5a2cb33247129bfd8e0", "text": "PURPOSE OF REVIEW\nTo review the current practice in the field of auricular reconstruction and to highlight the recent advances reported in the medical literature.\n\n\nRECENT FINDINGS\nThe majority of surgeons who perform auricular reconstruction continue to employ the well-established techniques developed by Brent and Nagata. Surgery takes between two and four stages, with the initial stage being construction of a framework of autogenous rib cartilage which is implanted into a subcutaneous pocket. Several modifications of these techniques have been reported. More recently, synthetic frameworks have been employed instead of autogenous rib cartilage. For this procedure, the implant is generally covered with a temporoparietal flap and a skin graft at the first stage of surgery. Tissue engineering is a rapidly developing field, and there have been several articles related to the field of auricular reconstruction. 
These show great potential to offer a solution to the challenge associated with construction of a viable autogenous cartilage framework, whilst avoiding donor-site morbidity.\n\n\nSUMMARY\nThis article gives an overview of the current practice in the field of auricular reconstruction and summarizes the recent surgical developments and relevant tissue engineering research.", "title": "" }, { "docid": "e26f8d654eb4bf0f3e974ed7e65fb4e1", "text": "The FIRE 2016 Microblog track focused on retrieval of microblogs (tweets posted on Twitter) during disaster events. A collection of about 50,000 microblogs posted during a recent disaster event was made available to the participants, along with a set of seven practical information needs during a disaster situation. The task was to retrieve microblogs relevant to these needs. 10 teams participated in the task, submitting a total of 15 runs. The task resulted in comparison among performances of various microblog retrieval strategies over a benchmark collection, and brought out the challenges in microblog retrieval.", "title": "" }, { "docid": "c9b4ada661599a4c0c78176840f78171", "text": "In this paper, we present the suite of tools of the FOMCON (“Fractional-order Modeling and Control”) toolbox for MATLAB that are used to carry out fractional-order PID controller design and hardware realization. An overview of the toolbox, its structure and particular modules, is presented with appropriate comments. We use a laboratory object designed to conduct temperature control experiments to illustrate the methods employed in FOMCON to derive suitable parameters for the controller and arrive at a digital implementation thereof on an 8-bit AVR microprocessor. The laboratory object is working under a real-time simulation platform with Simulink, Real-Time Windows Target toolbox and necessary drivers as its software backbone. Experimental results are provided which support the effectiveness of the proposed software solution.", "title": "" }, { "docid": "8b84dc47c6a9d39ef1d094aa173a954c", "text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. 
We then train a second CRF with the majority label predictions as additional input features.", "title": "" }, { "docid": "d2324527cd1b8e28fd63c8c20f57f4d4", "text": "Learning phonetic categories is one of the first steps to learning a language, yet is hard to do using only distributional phonetic information. Semantics could potentially be useful, since words with different meanings have distinct phonetics, but it is unclear how many word meanings are known to infants learning phonetic categories. We show that attending to a weaker source of semantics, in the form of a distribution over topics in the current context, can lead to improvements in phonetic category learning. In our model, an extension of a previous model of joint word-form and phonetic category inference, the probability of word-forms is topic-dependent, enabling the model to find significantly better phonetic vowel categories and word-forms than a model with no semantic knowledge.", "title": "" }, { "docid": "f489708f15f3e5cdd15f669fb9979488", "text": "Humans learn to play video games significantly faster than state-of-the-art reinforcement learning (RL) algorithms. Inspired by this, we introduce strategic object oriented reinforcement learning (SOORL) to learn simple dynamics model through automatic model selection and perform efficient planning with strategic exploration. We compare different exploration strategies in a model-based setting in which exact planning is impossible. Additionally, we test our approach on perhaps the hardest Atari game Pitfall! and achieve significantly improved exploration and performance over prior methods.", "title": "" }, { "docid": "748ae7abfd8b1dfb3e79c94c5adace9d", "text": "Users routinely access cloud services through third-party apps on smartphones by giving apps login credentials (i.e., a username and password). Unfortunately, users have no assurance that their apps will properly handle this sensitive information. In this paper, we describe the design and implementation of ScreenPass, which significantly improves the security of passwords on touchscreen devices. ScreenPass secures passwords by ensuring that they are entered securely, and uses taint-tracking to monitor where apps send password data. The primary technical challenge addressed by ScreenPass is guaranteeing that trusted code is always aware of when a user is entering a password. ScreenPass provides this guarantee through two techniques. First, ScreenPass includes a trusted software keyboard that encourages users to specify their passwords' domains as they are entered (i.e., to tag their passwords). Second, ScreenPass performs optical character recognition (OCR) on a device's screenbuffer to ensure that passwords are entered only through the trusted software keyboard. We have evaluated ScreenPass through experiments with a prototype implementation, two in-situ user studies, and a small app study. Our prototype detected a wide range of dynamic and static keyboard-spoofing attacks and generated zero false positives. As long as a screen is off, not updated, or not tapped, our prototype consumes zero additional energy; in the worst case, when a highly interactive app rapidly updates the screen, our prototype under a typical configuration introduces only 12% energy overhead. Participants in our user studies tagged their passwords at a high rate and reported that tagging imposed no additional burden. 
Finally, a study of malicious and non-malicious apps running under ScreenPass revealed several cases of password mishandling.", "title": "" }, { "docid": "b5f7511566b902bc206228dc3214c211", "text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.", "title": "" } ]
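The Searn/DAgger passage above turns on how states visited by the learned policy are relabelled by the expert and aggregated across iterations. The sketch below is purely illustrative and is not the implementation used in the cited comparison; `expert_policy` and `rollout` are hypothetical callables, and the choice of a logistic-regression base learner is an assumption made here for brevity.

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

def dagger(expert_policy, rollout, initial_states, n_iters=5):
    """Minimal DAgger loop (illustrative only).

    expert_policy(state) -> expert action label for that state
    rollout(policy, states) -> list of states visited when `policy` drives the system
    """
    X, y = [], []
    # Iteration 0: behaviour cloning on expert-visited states.
    for s in initial_states:
        X.append(s)
        y.append(expert_policy(s))
    policy = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

    for _ in range(n_iters):
        # Run the *learned* policy, but label the visited states with the expert.
        visited = rollout(policy, initial_states)
        for s in visited:
            X.append(s)
            y.append(expert_policy(s))
        # Aggregate the growing dataset and retrain.
        policy = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
    return policy
```

The key point the sketch tries to show is the dataset aggregation step: states come from the learner's own rollouts, labels always come from the expert.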
scidocsrr
18aea5129e8608abd1d5fd6b2c9d7a71
A Framework for Clustering Uncertain Data
[ { "docid": "f5168565306f6e7f2b36ef797a6c9de8", "text": "We study the problem of clustering data objects whose locations are uncertain. A data object is represented by an uncertainty region over which a probability density function (pdf) is defined. One method to cluster uncertain objects of this sort is to apply the UK-means algorithm, which is based on the traditional K-means algorithm. In UK-means, an object is assigned to the cluster whose representative has the smallest expected distance to the object. For arbitrary pdf, calculating the expected distance between an object and a cluster representative requires expensive integration computation. We study various pruning methods to avoid such expensive expected distance calculation.", "title": "" }, { "docid": "5f1f7847600207d1216384f8507be63b", "text": "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.", "title": "" } ]
[ { "docid": "72a1798a864b4514d954e1e9b6089ad8", "text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.", "title": "" }, { "docid": "24e10d8e12d8b3c618f88f1f0d33985d", "text": "W -algebras of finite type are certain finitely generated associative algebras closely related to universal enveloping algebras of semisimple Lie algebras. In this paper we prove a conjecture of Premet that gives an almost complete classification of finite dimensional irreducible modules for W -algebras. Also we get some partial results towards a conjecture by Ginzburg on their finite dimensional bimodules.", "title": "" }, { "docid": "857efb4909ada73ca849acf24d6e74db", "text": "Owing to inevitable thermal/moisture instability for organic–inorganic hybrid perovskites, pure inorganic perovskite cesium lead halides with both inherent stability and prominent photovoltaic performance have become research hotspots as a promising candidate for commercial perovskite solar cells. However, it is still a serious challenge to synthesize desired cubic cesium lead iodides (CsPbI3) with superior photovoltaic performance for its thermodynamically metastable characteristics. Herein, polymer poly-vinylpyrrolidone (PVP)-induced surface passivation engineering is reported to synthesize extra-long-term stable cubic CsPbI3. It is revealed that acylamino groups of PVP induce electron cloud density enhancement on the surface of CsPbI3, thus lowering surface energy, conducive to stabilize cubic CsPbI3 even in micrometer scale. The cubic-CsPbI3 PSCs exhibit extra-long carrier diffusion length (over 1.5 μm), highest power conversion efficiency of 10.74% and excellent thermal/moisture stability. This result provides important progress towards understanding of phase stability in realization of large-scale preparations of efficient and stable inorganic PSCs. Inorganic cesium lead iodide perovskite is inherently more stable than the hybrid perovskites but it undergoes phase transition that degrades the solar cell performance. Here Li et al. stabilize it with poly-vinylpyrrolidone and obtain high efficiency of 10.74% with excellent thermal and moisture stability.", "title": "" }, { "docid": "5517c8f35c8e9df2994afc12d5cb928f", "text": "Glomus tumors of the penis are extremely rare. 
A patient with multiple regional glomus tumors involving the penis is reported. A 16-year-old boy presented with the complaint of painless penile masses, and resection of the lesions was performed. The pathologic diagnosis was glomus tumor of the penis. This is the ninth case of glomus tumor of the penis to be reported in the literature.", "title": "" }, { "docid": "fd652333e274b25440767de985702111", "text": "The global gold market has recently attracted a lot of attention, and the price of gold is relatively higher than its historical trend. For mining companies, mitigating risk and uncertainty in gold price fluctuations and making hedging, future investment, and evaluation decisions depend on forecasting future price trends. The first section of this paper reviews the world gold market and the historical trend of gold prices from January 1968 to December 2008. This is followed by an investigation into the relationship between the gold price and other key influencing variables, such as the oil price and global inflation, over the last 40 years. The second section applies a modified econometric version of the long-term trend reverting jump and dip diffusion model for forecasting natural-resource commodity prices. This method addresses the deficiencies of previous models, such as the treatment of jumps and dips as parameters and the unit root test for long-term trends. The model proposes that historical data of mineral commodities contain three terms that describe price fluctuations: a long-term trend reversion component, a diffusion component, and a jump or dip component. The model calculates each term individually to estimate future prices of mineral commodities. The study validates the model and estimates the gold price for the next 10 years, based on monthly historical data of the nominal gold price.", "title": "" }, { "docid": "080032ded41edee2a26320e3b2afb123", "text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS International Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to the HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. 
Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.", "title": "" }, { "docid": "66474114bf431f3ee6973ad6469565b2", "text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damage and to eliminate risks of safety hazards. This paper focuses on line–line faults in PV arrays that may be caused by short-circuit faults or double ground faults. The effect on fault current from a maximum-power-point tracking of a PV inverter is discussed and shown to, at times, prevent overcurrent protection devices (OCPDs) to operate properly. Furthermore, fault behavior of PV arrays is highly related to the fault location, fault impedance, irradiance level, and use of blocking diodes. Particularly, this paper examines the challenges to OCPD in a PV array brought by unique faults: One is a fault that occurs under low-irradiance conditions, and the other is a fault that occurs at night and evolves during “night-to-day” transition. In both circumstances, the faults might remain hidden in the PV system, no matter how irradiance changes afterward. These unique faults may subsequently lead to unexpected safety hazards, reduced system efficiency, and reduced reliability. A small-scale experimental PV system has been developed to further validate the conclusions.", "title": "" }, { "docid": "220532757b4a47422b5685577f7f4662", "text": "In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences. Accordingly, the objective of this paper is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. 
Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile risk-constrained MDPs. Then, we devise policy gradient and actor-critic algorithms that (1) estimate this gradient, (2) update the policy in the descent direction, and (3) update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online marketing application.", "title": "" }, { "docid": "b15dc135eda3a7c60565142ba7a6ae37", "text": "We propose a mechanism to reconstruct part-annotated 3D point clouds of objects given just a single input image. We demonstrate that jointly training for both reconstruction and segmentation leads to improved performance in both tasks, when compared to training for each task individually. The key idea is to propagate information from each task so as to aid the other during the training procedure. Towards this end, we introduce a location-aware segmentation loss in the training regime. We empirically show the effectiveness of the proposed loss in generating more faithful part reconstructions while also improving segmentation accuracy. We thoroughly evaluate the proposed approach on different object categories from the ShapeNet dataset to obtain improved results in reconstruction as well as segmentation. Codes are available at https://github.com/val-iisc/3d-psrnet.", "title": "" }, { "docid": "b7f4ad07e6d116df196da9c5be5d2fe8", "text": "An ego-motion estimation method based on the spatial and Doppler information obtained by an automotive radar is proposed. The estimation of the motion state vector is performed in a density-based framework. Compared to standard vehicle odometry, the approach is capable of estimating the full two-dimensional motion state with three degrees of freedom. The measurement of a Doppler radar sensor is represented as a mixture of Gaussians. This mixture is matched with the mixture of a previous measurement by applying the appropriate ego-motion transformation. The parameters of the transformation are found by the optimization of a suitable joint metric. Due to the Doppler information, the method is very robust against disturbances by moving objects and clutter. It provides excellent results for highly nonlinear movements. Real-world results of the proposed method are presented. The measurements are obtained by a 77 GHz radar sensor mounted on a test vehicle. A comparison using a high-precision inertial measurement unit with differential GPS support is made. The results show a high accuracy in velocity and yaw-rate estimation.", "title": "" }, { "docid": "51d29ec1313df001efc78397cf1d4aaa", "text": "Numerous studies have established that aggregating judgments or predictions across individuals can be surprisingly accurate in a variety of domains, including prediction markets, political polls, game shows, and forecasting (see Surowiecki, 2004). Under Galton's (1907) conditions of individuals having largely unbiased and independent judgments, the aggregated judgment of a group of individuals is uncontroversially better, on average, than the individual judgments themselves (e.g., Armstrong, 2001; Clemen, 1989; Galton, 1907; Surowiecki, 2004; Winkler, 1971). The boundary conditions of crowd wisdom, however, are not as well understood. 
For example, when group members are allowed access to other members’ predictions, as opposed to making them independently, their predictions become more positively correlated and the crowd’s performance can diminish (Lorenz, Rauhut, Schweitzer, & Helbing, 2011). In the context of handicapping sports results, individuals have been found to make systematically biased predictions, so that their aggregated judgments may not be wise (Simmons, Nelson, Galak, & Frederick, 2011). How robust is crowd wisdom to factors such as non-independence and bias of crowd members’ judgments? If the conditions for crowd wisdom are less than ideal, is it better to aggregate judgments or, for instance, rely on a skilled individual judge? Would it be better to add a highly skilled crowd member or a less skilled one who makes systematically different predictions than other members, increasing diversity? We provide a simple, precise definition of the wisdom-of-the-crowd effect and a systematic way to examine its boundary conditions. We define a crowd as wise if a linear aggregate of its members’ judgments of a criterion value has less expected squared error than the judgments of an individual sampled randomly, but not necessarily uniformly, from the crowd. Previous definitions of the wisdom of the crowd effect have largely focused on comparing the crowd’s accuracy to that of the average individual member (Larrick, Mannes, & Soll, 2012). Our definition generalizes prior approaches in a couple of ways. We consider crowds created by any linear aggregate, not just simple averaging. Second, our definition allows the comparison of the crowd to an individual selected according to a distribution that could reflect past individual performance, e.g., their skill, or other attributes. On the basis of our definition, we develop a framework for analyzing crowd wisdom that includes various aggregation and sampling rules. These rules include both weighting the aggregate and sampling the individual according to skill, where skill is operationalized as predictive validity, i.e., the correlation between a judge’s prediction and the criterion. Although the amount of the crowd’s wisdom the expected difference between individual error and crowd error is non-linear in the amount of bias and non-independence of the judgments, our results yield simple and general rules specifying when a simple average will be wise. While a simple average of the crowd is not always wise if individuals are not sampled uniformly at random, we show that there always exists some a priori aggregation rule that makes the crowd wise.", "title": "" }, { "docid": "45ef4e4416a4cf20dec64f30ec584a7a", "text": "Driving simulators play an important role in the development of new vehicles and advanced driver assistance devices. In fact, on the one hand, having a human driver on a driving simulator allows automotive OEMs to bridge the gap between virtual prototyping and on-road testing during the vehicle development phase. On the other hand, novel driver assistance systems (such as advanced accident avoidance systems) can be safely tested by having the driver operating the vehicle in a virtual, highly realistic environment, while being exposed to hazardous situations. In both applications, it is crucial to faithfully reproduce in the simulator the drivers perception of forces acting on the vehicle and its acceleration. 
The strategy used to operate the simulator platform within its limited working space to provide the driver with the most realistic perception goes under the name of motion cueing. In this paper we describe a novel approach to motion cueing design that is based on Model Predictive Control techniques. Two features characterize the algorithm, namely, the use of a detailed model of the human vestibular system and a predictive strategy based on the availability of a virtual driver. Differently from classical schemes based on washout filters, such features allows a better implementation of tilt coordination and to handle more efficiently the platform limits.", "title": "" }, { "docid": "8f2cfb5cb55b093f67c1811aba8b87e2", "text": "“You make what you measure” is a familiar mantra at datadriven companies. Accordingly, companies must be careful to choose North Star metrics that create a better product. Metrics fall into two general categories: direct count metrics such as total revenue and monthly active users, and nuanced quality metrics regarding value or other aspects of the user experience. Count metrics, when used exclusively as the North Star, might inform product decisions that harm user experience. Therefore, quality metrics play an important role in product development. We present a five-step framework for developing quality metrics using a combination of machine learning and product intuition. Machine learning ensures that the metric accurately captures user experience. Product intuition makes the metric interpretable and actionable. Through a case study of the Endorsements product at LinkedIn, we illustrate the danger of optimizing exclusively for count metrics, and showcase the successful application of our framework toward developing a quality metric. We show how the new quality metric has driven significant improvements toward creating a valuable, user-first product.", "title": "" }, { "docid": "d7bd02def0f010016b53e2c41b42df35", "text": "We utilise smart eyeglasses for dietary monitoring, in particular to sense food chewing. Our approach is based on a 3D-printed regular eyeglasses design that could accommodate processing electronics and Electromyography (EMG) electrodes. Electrode positioning was analysed and an optimal electrode placement at the temples was identified. We further compared gel and dry fabric electrodes. For the subsequent analysis, fabric electrodes were attached to the eyeglasses frame. The eyeglasses were used in a data recording study with eight participants eating different foods. Two chewing cycle detection methods and two food classification algorithms were compared. Detection rates for individual chewing cycles reached a precision and recall of 80%. For five foods, classification accuracy for individual chewing cycles varied between 43% and 71%. Majority voting across intake sequences improved accuracy, ranging between 63% and 84%. We concluded that EMG-based chewing analysis using smart eyeglasses can contribute essential chewing structure information to dietary monitoring systems, while the eyeglasses remain inconspicuous and thus could be continuously used.", "title": "" }, { "docid": "78f8d28f4b20abbac3ad848033bb088b", "text": "Many real-world applications involve multilabel classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. 
In this paper, we present a novel hierarchical multilabel classification algorithm which can be used on both tree- and DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as finding the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. The proposed method consistently outperforms the state-of-the-art method on both tree- and DAG-structured hierarchies.", "title": "" }, { "docid": "231d8ef95d02889d70000d70d8743004", "text": "The last decade witnessed a great deal of research in the field of sentiment analysis. Understanding the attitudes and emotions that people express in written text has proved to be important and helpful in sociology, political science, psychology, market research, and, of course, artificial intelligence. This paper demonstrates a rule-based approach to clause-level sentiment analysis of reviews in Ukrainian. The general architecture of the implemented sentiment analysis system is presented, the current stage of research is described, and further work is explained. The main emphasis is placed on the design of rules for computing sentiments.", "title": "" }, { "docid": "c28ee3a41d05654eedfd379baf2d5f24", "text": "The problem of classifying subjects into disease categories is of common occurrence in medical research. Machine learning tools such as Artificial Neural Networks (ANN), Support Vector Machines (SVM), Logistic Regression (LR), and Fisher's Linear Discriminant Analysis (LDA) are widely used in the areas of prediction and classification. The main objective of these competing classification strategies is to predict a dichotomous outcome (e.g. disease/healthy) based on several features.", "title": "" }, { "docid": "fee504e2184570e80956ff1c8a4ec83c", "text": "The use of computed tomography (CT) in clinical practice has been increasing rapidly, with the number of CT examinations performed in adults and children rising by 10% per year in England. Because the radiology community strives to reduce the radiation dose associated with pediatric examinations, external factors, including guidelines for pediatric head injury, are raising expectations for use of cranial CT in the pediatric population. Thus, radiologists are increasingly likely to encounter pediatric head CT examinations in daily practice. The variable appearance of cranial sutures at different ages can be confusing for inexperienced readers of radiologic images. The evolution of multidetector CT with thin-section acquisition increases the clarity of some of these sutures, which may be misinterpreted as fractures. Familiarity with the normal anatomy of the pediatric skull, how it changes with age, and normal variants can assist in translating the increased resolution of multidetector CT into more accurate detection of fractures and confident determination of normality, thereby reducing prolonged hospitalization of children with normal developmental structures that have been misinterpreted as fractures. More important, the potential morbidity and mortality related to false-negative interpretation of fractures as normal sutures may be avoided. 
The authors describe the normal anatomy of all standard pediatric sutures, common variants, and sutural mimics, thereby providing an accurate and safe framework for CT evaluation of skull trauma in pediatric patients.", "title": "" }, { "docid": "f94385118e9fca123bae28093b288723", "text": "One of the major restrictions on the performance of videobased person re-id is partial noise caused by occlusion, blur and illumination. Since different spatial regions of a single frame have various quality, and the quality of the same region also varies across frames in a tracklet, a good way to address the problem is to effectively aggregate complementary information from all frames in a sequence, using better regions from other frames to compensate the influence of an image region with poor quality. To achieve this, we propose a novel Region-based Quality Estimation Network (RQEN), in which an ingenious training mechanism enables the effective learning to extract the complementary region-based information between different frames. Compared with other feature extraction methods, we achieved comparable results of 92.4%, 76.1% and 77.83% on the PRID 2011, iLIDS-VID and MARS, respectively. In addition, to alleviate the lack of clean large-scale person re-id datasets for the community, this paper also contributes a new high-quality dataset, named “Labeled Pedestrian in the Wild (LPW)” which contains 7,694 tracklets with over 590,000 images. Despite its relatively large scale, the annotations also possess high cleanliness. Moreover, it’s more challenging in the following aspects: the age of characters varies from childhood to elderhood; the postures of people are diverse, including running and cycling in addition to the normal walking state.", "title": "" }, { "docid": "78b874393739daa623724efad75cb97d", "text": "Building curious machines that can answer as well as ask questions is an important challenge for AI. The two tasks of question answering and question generation are usually tackled separately in the NLP literature. At the same time, both require significant amounts of supervised data which is hard to obtain in many domains. To alleviate these issues, we propose a self-training method for jointly learning to ask as well as answer questions, leveraging unlabeled text along with labeled question answer pairs for learning. We evaluate our approach on four benchmark datasets: SQUAD, MS MARCO, WikiQA and TrecQA, and show significant improvements over a number of established baselines on both question answering and question generation tasks. We also achieved new state-of-the-art results on two competitive answer sentence selection tasks: WikiQA and TrecQA.", "title": "" } ]
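The region-quality idea in the video re-identification passage above (RQEN) amounts to letting high-quality regions in some frames compensate for poor ones in others. The published network learns those quality scores end-to-end; the fragment below is only a loose numpy sketch of the aggregation step, under the assumption that per-region quality scores are already available, and with the shapes and softmax-style weighting chosen here purely for illustration.

```python
import numpy as np

def aggregate_frame_features(region_feats, region_quality):
    """region_feats: (n_frames, n_regions, d) per-region features for one tracklet.
    region_quality: (n_frames, n_regions) predicted quality scores.
    Returns a flat sequence-level descriptor in which each region is a
    quality-weighted average over frames, so a region that is occluded or
    blurred in one frame contributes less than the same region in a clean frame."""
    weights = np.exp(region_quality)
    weights = weights / weights.sum(axis=0, keepdims=True)   # normalise over frames, per region
    pooled = (weights[..., None] * region_feats).sum(axis=0)  # (n_regions, d)
    return pooled.reshape(-1)
```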
scidocsrr
c07287090c74ba660018576f21d102d7
How competitive are you: Analysis of people's attractiveness in an online dating system
[ { "docid": "9efa0ff0743edacc4e9421ed45441fde", "text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.", "title": "" }, { "docid": "4f8fea97733000d58f2ff229c85aeaa0", "text": "Online dating sites have become popular platforms for people to look for potential romantic partners. Many online dating sites provide recommendations on compatible partners based on their proprietary matching algorithms. It is important that not only the recommended dates match the user’s preference or criteria, but also the recommended users are interested in the user and likely to reciprocate when contacted. The goal of this paper is to predict whether an initial contact message from a user will be replied to by the receiver. The study is based on a large scale real-world dataset obtained from a major dating site in China with more than sixty million registered users. We formulate our reply prediction as a link prediction problem of social networks and approach it using a machine learning framework. The availability of a large amount of user profile information and the bipartite nature of the dating network present unique opportunities and challenges to the reply prediction problem. We extract user-based features from user profiles and graph-based features from the bipartite dating network, apply them in a variety of classification algorithms, and compare the utility of the features and performance of the classifiers. Our results show that the user-based and graph-based features result in similar performance, and can be used to effectively predict the reciprocal links. Only a small performance gain is achieved when both feature sets are used. Among the five classifiers we considered, random forests method outperforms the other four algorithms (naive Bayes, logistic regression, KNN, and SVM). Our methods and results can provide valuable guidelines to the design and performance of recommendation engine for online dating sites.", "title": "" } ]
[ { "docid": "3fbb2bb37f44cb8f300fd28cdbd8bc06", "text": "The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (Some figures may appear in colour only in the online journal)", "title": "" }, { "docid": "3567af18bc17efdb0efeb41d08fabb7b", "text": "In this review we examine recent research in the area of motivation in mathematics education and discuss findings from research perspectives in this domain. We note consistencies across research perspectives that suggest a set of generalizable conclusions about the contextual factors, cognitive processes, and benefits of interventions that affect students’ and teachers’ motivational attitudes. Criticisms are leveled concerning the lack of theoretical guidance driving the conduct and interpretation of the majority of studies in the field. Few researchers have attempted to extend current theories of motivation in ways that are consistent with the current research on learning and classroom discourse. In particular, researchers interested in studying motivation in the content domain of school mathematics need to examine the relationship that exists between mathematics as a socially constructed field and students’ desire to achieve.", "title": "" }, { "docid": "6e82e635682cf87a84463f01c01a1d33", "text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. 
Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.", "title": "" }, { "docid": "6e60d6b878c35051ab939a03bdd09574", "text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.", "title": "" }, { "docid": "049def2d879d0b873132660b0b856443", "text": "This report explores the relationship between narcissism and unethical conduct in an organization by answering two questions: (1) In what ways does narcissism affect an organization?, and (2) What is the relationship between narcissism and the financial industry? Research suggests the overall conclusion that narcissistic individuals directly influence the identity of an organization and how it behaves. Ways to address these issues are shown using Enron as a case study example.", "title": "" }, { "docid": "d835cb852c482c2b7e14f9af4a5a1141", "text": "This paper investigates the effectiveness of state-of-the-art classification algorithms to categorise road vehicles for an urban traffic monitoring system using a multi-shape descriptor. The analysis is applied to monocular video acquired from a static pole-mounted road side CCTV camera on a busy street. Manual vehicle segmentation was used to acquire a large (>2000 sample) database of labelled vehicles from which a set of measurement-based features (MBF) in combination with a pyramid of HOG (histogram of orientation gradients, both edge and intensity based) features. These are used to classify the objects into four main vehicle categories: car, van, bus and motorcycle. Results are presented for a number of experiments that were conducted to compare support vector machines (SVM) and random forests (RF) classifiers. 10-fold cross validation has been used to evaluate the performance of the classification methods. The results demonstrate that all methods achieve a recognition rate above 95% on the dataset, with SVM consistently outperforming RF. A combination of MBF and IPHOG features gave the best performance of 99.78%.", "title": "" }, { "docid": "9f530b42ae19ddcf52efa41272b2dbc7", "text": "Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learningby-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, the appearance variability as well as the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. 
Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses real-time approximations for complex eyeball materials and structures as well as novel anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework freely available online for the benefit of the research community.", "title": "" }, { "docid": "759a19f60890a11e7e460aecd7bb6477", "text": "The stiff man syndrome (SMS) and its variants, focal SMS, stiff limb (or leg) syndrome (SLS), jerking SMS, and progressive encephalomyelitis with rigidity and myoclonus (PERM), appear to occur more frequently than hitherto thought. A characteristic ensemble of symptoms and signs allows a tentative clinical diagnosis. Supportive ancillary findings include (1) the demonstration of continuous muscle activity in trunk and proximal limb muscles despite attempted relaxation, (2) enhanced exteroceptive reflexes, and (3) antibodies to glutamic acid decarboxylase (GAD) in both serum and spinal fluid. Antibodies to GAD are not diagnostic or specific for SMS and the role of these autoantibodies in the pathogenesis of SMS/SLS/PERM is the subject of debate and difficult to reconcile on the basis of our present knowledge. Nevertheless, evidence is emerging to suggest that SMS/SLS/PERM are manifestations of an immune-mediated chronic encephalomyelitis and immunomodulation is an effective therapeutic approach.", "title": "" }, { "docid": "5ca5cfcd0ed34d9b0033977e9cde2c74", "text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of off-patent products. First, we construct a vertical differentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several off-patent molecules before and after the policy reform. Off-patent drugs not subject to RP serve as our control group. We find that RP significantly reduces both brand-name and generic prices, and results in significantly lower brand-name market shares. Finally, we show that RP has a strong negative effect on average molecule prices, suggesting significant cost-savings, and that patients’ copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classifications: I11; I18; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for financial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: [email protected]. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: [email protected]. 
Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. E-mail: [email protected].", "title": "" }, { "docid": "00c17123df0fa10f0d405b4d0c9dfad0", "text": "Touchless hand gesture recognition systems are becoming important in automotive user interfaces as they improve safety and comfort. Various computer vision algorithms have employed color and depth cameras for hand gesture recognition, but robust classification of gestures from different subjects performed under widely varying lighting conditions is still challenging. We propose an algorithm for drivers’ hand gesture recognition from challenging depth and intensity data using 3D convolutional neural networks. Our solution combines information from multiple spatial scales for the final prediction. It also employs spatiotemporal data augmentation for more effective training and to reduce potential overfitting. Our method achieves a correct classification rate of 77.5% on the VIVA challenge dataset.", "title": "" }, { "docid": "8f7428569e1d3036cdf4842d48b56c22", "text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.", "title": "" }, { "docid": "895f0424cb71c79b86ecbd11a4f2eb8e", "text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. 
These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.", "title": "" }, { "docid": "d488d9d754c360efb3910c83e3175756", "text": "The most common question asked by patients with inflammatory bowel disease (IBD) is, \"Doctor, what should I eat?\" Findings from epidemiology studies have indicated that diets high in animal fat and low in fruits and vegetables are the most common pattern associated with an increased risk of IBD. Low levels of vitamin D also appear to be a risk factor for IBD. In murine models, diets high in fat, especially saturated animal fats, also increase inflammation, whereas supplementation with omega 3 long-chain fatty acids protect against intestinal inflammation. Unfortunately, omega 3 supplements have not been shown to decrease the risk of relapse in patients with Crohn's disease. Dietary intervention studies have shown that enteral therapy, with defined formula diets, helps children with Crohn's disease and reduces inflammation and dysbiosis. Although fiber supplements have not been shown definitively to benefit patients with IBD, soluble fiber is the best way to generate short-chain fatty acids such as butyrate, which has anti-inflammatory effects. Addition of vitamin D and curcumin has been shown to increase the efficacy of IBD therapy. There is compelling evidence from animal models that emulsifiers in processed foods increase risk for IBD. We discuss current knowledge about popular diets, including the specific carbohydrate diet and diet low in fermentable oligo-, di-, and monosaccharides and polyols. We present findings from clinical and basic science studies to help gastroenterologists navigate diet as it relates to the management of IBD.", "title": "" }, { "docid": "3f2d4df1b0ef315ee910636c9439b049", "text": "Real-Time Line and Disk Light Shading\n Eric Heitz and Stephen Hill\n At SIGGRAPH 2016, we presented a new real-time area lighting technique for polygonal sources. 
In this talk, we will show how the underlying framework, based on Linearly Transformed Cosines (LTCs), can be extended to support line and disk lights. We will discuss the theory behind these approaches as well as practical implementation tips and tricks concerning numerical precision and performance.\n Physically Based Shading at DreamWorks Animation\n Feng Xie and Jon Lanz\n PDI/DreamWorks was one of the first animation studios to adopt global illumination in production rendering. Concurrently, we have also been developing and applying physically based shading principles to improve the consistency and realism of our material models, while balancing the need for intuitive artistic control required for feature animations.\n In this talk, we will start by presenting the evolution of physically based shading in our films. Then we will present some fundamental principles with respect to importance sampling and energy conservation in our BSDF framework with a pragmatic and efficient approach to transimssion fresnel modeling. Finally, we will present our new set of physically plausible production shaders for our new path tracer, which includes our new hard surface shader, our approach to material layering and some new developments in fabric and glitter shading.\n Volumetric Skin and Fabric Shading at Framestore\n Nathan Walster\n Recent advances in shading have led to the use of free-path sampling to better solve complex light transport within volumetric materials. In this talk, we describe how we have implemented these ideas and techniques within a production environment, their application on recent shows---such as Guardians of the Galaxy Vol. 2 and Alien: Covenant---and the effect this has had on artists' workflow within our studio.\n Practical Multilayered Materials in Call of Duty: Infinite Warfare\n Michał Drobot\n This talk presents a practical approach to multilayer, physically based surface rendering, specifically optimized for Forward+ rendering pipelines. The presented pipeline allows for the creation of complex surface by decomposing them into different mediums, each represented by a simple BRDF/BSSRDF and set of simple, physical macro properties, such as thickness, scattering and absorption. The described model is explained via practical examples of common multilayer materials such as car paint, lacquered wood, ice, and semi-translucent plastics. Finally, the talk describes intrinsic implementation details for achieving a low performance budget for 60 Hz titles as well as supporting multiple rendering modes: opaque, alpha blend, and refractive blend.\n Pixar's Foundation for Materials: PxrSurface and PxrMarschnerHair\n Christophe Hery and Junyi Ling\n Pixar's Foundation Materials, PxrSurface and PxrMarschnerHair, began shipping with RenderMan 21.\n PxrSurface is the standard surface shader developed in the studio for Finding Dory and used more recently for Cars 3 and Coco. This shader contains nine lobes that cover the entire gamut of surface materials for these two films: diffuse, three specular, iridescence, fuzz, subsurface, single scatter and a glass lobe. Each of these BxDF lobes is energy conserving, but conservation is not enforced between lobes on the surface level. We use parameter layering methods to feed a PxrSurface with pre-layered material descriptions. 
This simultaneously allows us the flexibility of a multilayered shading pipeline together with efficient and consistent rendering behavior.\n We also implemented our individual BxDFs with the latest state-of-the-art techniques. For example, our three specular lobes can be switched between Beckmann and GGX modes. Many compound materials have multiple layers of specular; these lobes interact with each other modulated by the Fresnel effect of the clearcoat layer. We also leverage LEADR mapping to recreate sub-displacement micro features such as metal flakes and clearcoat scratches.\n Another example is that PxrSurface ships with Jensen, d'Eon and Burley diffusion profiles. Additionally, we implemented a novel subsurface model using path-traced volumetric scattering, which represents a significant advancement. It captures zero and single scattering events of subsurface scattering implicit to the path-tracing algorithm. The user can adjust the phase-function of the scattering events and change the extinction profiles, and it also comes with standardized color inversion features for intuitive albedo input. To the best of our knowledge, this is the first commercially available rendering system to model these features and the rendering cost is comparable to classic diffusion subsurface scattering models.\n PxrMarschnerHair implements Marschner's seminal hair illumination model with importance sampling. We also account for the residual energy left after the R, TT, TRT and glint lobes, through a fifth diffuse lobe. We show that this hair surface shader can reproduce dark and blonde hair effectively in a path-traced production context. Volumetric scattering from fiber to fiber changes the perceived hue and saturation of a groom, so we also provide a color inversion scheme to invert input albedos, such that the artistic inputs are straightforward and intuitive.\n Revisiting Physically Based Shading at Imageworks\n Christopher Kulla and Alejandro Conty\n Two years ago, the rendering and shading groups at Sony Imageworks embarked on a project to review the structure of our physically based shaders in an effort to simplify their implementation, improve quality and pave the way to take advantage of future improvements in light transport algorithms.\n We started from classic microfacet BRDF building blocks and investigated energy conservation and artist friendly parametrizations. We continued by unifying volume rendering and subsurface scattering algorithms and put in place a system for medium tracking to improve the setup of nested media. Finally, from all these building blocks, we rebuilt our artist-facing shaders with a simplified interface and a more flexible layering approach through parameter blending.\n Our talk will discuss the details of our various building blocks, what worked and what didn't, as well as some future research directions we are still interested in exploring.", "title": "" }, { "docid": "4689161101a990d17b08e27b3ccf2be3", "text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer’s perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. 
It focuses mainly on an empirical investigation of the effect of key developer’s factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer’s factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer’s factors on the game development process.", "title": "" }, { "docid": "934ee0b55bf90eed86fabfff8f1238d1", "text": "Schelling (1969, 1971a,b, 1978) considered a simple proximity model of segregation where individual agents only care about the types of people living in their own local geographical neighborhood, the spatial structure being represented by oneor two-dimensional lattices. In this paper, we argue that segregation might occur not only in the geographical space, but also in social environments. Furthermore, recent empirical studies have documented that social interaction structures are well-described by small-world networks. We generalize Schelling’s model by allowing agents to interact in small-world networks instead of regular lattices. We study two alternative dynamic models where agents can decide to move either arbitrarily far away (global model) or are bound to choose an alternative location in their social neighborhood (local model). Our main result is that the system attains levels of segregation that are in line with those reached in the lattice-based spatial proximity model. Thus, Schelling’s original results seem to be robust to the structural properties of the network.", "title": "" }, { "docid": "c6ebb1f54f42f38dae8c19566f2459ce", "text": "We develop several predictive models linking legislative sentiment to legislative text. Our models, which draw on ideas from ideal point estimation and topic models, predict voting patterns based on the contents of bills and infer the political leanings of legislators. With supervised topics, we provide an exploratory window into how the language of the law is correlated with political support. We also derive approximate posterior inference algorithms based on variational methods. Across 12 years of legislative data, we predict specific voting patterns with high accuracy.", "title": "" }, { "docid": "1865a404c970d191ed55e7509b21fb9e", "text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. 
The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1", "title": "" }, { "docid": "0b6a3b143dfccd7ca9ea09f7fa5b5e8c", "text": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized as an aim to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in the everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques as an aim to model cancer risk or patient outcomes.", "title": "" }, { "docid": "2b98fd7a61fd7c521758651191df74d0", "text": "Nowadays, a great effort is done to find new alternative renewable energy sources to replace part of nuclear energy production. In this context, this paper presents a new axial counter-rotating turbine for small-hydro applications which is developed to recover the energy lost in release valves of water supply. The design of the two PM-generators, their mechanical integration in a bulb placed into the water conduit and the AC-DC Vienna converter developed for these turbines are presented. The sensorless regulation of the two generators is also briefly discussed. Finally, measurements done on the 2-kW prototype are analyzed and compared with the simulation.", "title": "" } ]
scidocsrr
299763e0a76597424582bf792d879f1d
Sexuality before and after male-to-female sex reassignment surgery.
[ { "docid": "9b1a4e27c5d387ef091fdb9140eb8795", "text": "In this study I investigated the relation between normal heterosexual attraction and autogynephilia (a man's propensity to be sexually aroused by the thought or image of himself as a woman). The subjects were 427 adult male outpatients who reported histories of dressing in women's garments, of feeling like women, or both. The data were questionnaire measures of autogynephilia, heterosexual interest, and other psychosexual variables. As predicted, the highest levels of autogynephilia were observed at intermediate rather than high levels of heterosexual interest; that is, the function relating these variables took the form of an inverted U. This finding supports the hypothesis that autogynephilia is a misdirected type of heterosexual impulse, which arises in association with normal heterosexuality but also competes with it.", "title": "" } ]
[ { "docid": "fa260dabc7a58b760b4306e880afb821", "text": "BACKGROUND\nPerforator-based flaps have been explored across almost all of the lower leg except in the Achilles tendon area. This paper introduced a perforator flap sourced from this area with regard to its anatomic basis and clinical applications.\n\n\nMETHODS\nTwenty-four adult cadaver legs were dissected to investigate the perforators emerging along the lateral edge of the Achilles tendon in terms of number and location relative to the tip of the lateral malleolus, and distribution. Based on the anatomic findings, perforator flaps, based on the perforator(s) of the lateral calcaneal artery (LCA) alone or in concert with the perforator of the peroneal artery (PA), were used for reconstruction of lower-posterior heel defects in eight cases. Postoperatively, subjective assessment and Semmes-Weinstein filament test were performed to evaluate the sensibility of the sural nerve-innerved area.\n\n\nRESULTS\nThe PA ended into the anterior perforating branch and LCA at the level of 6.0 ± 1.4 cm (range 3.3-9.4 cm) above the tip of the lateral malleolus. Both PA and LCA, especially the LCA, gave rise to perforators to contribute to the integument overlying the Achilles tendon. Of eight flaps, six were based on perforator(s) of the LCA and two were on perforators of the PA and LCA. Follow-up lasted for 6-28 months (mean 13.8 months), during which total flap loss and nerve injury were not found. Functional and esthetic outcomes were good in all patients.\n\n\nCONCLUSION\nThe integument overlying the Achilles tendon gets its blood supply through the perforators of the LCA primarily and that of through the PA secondarily. The LCA perforator(s)-based and the LCA plus PA perforators-based stepladder flap is a reliable, sensate flap, and should be thought of as a valuable procedure of choice for coverage of lower-posterior heel defects in selected patients.", "title": "" }, { "docid": "82dd67625fd8f2af3bf825fdef410836", "text": "Public health thrives on high-quality evidence, yet acquiring meaningful data on a population remains a central challenge of public health research and practice. Social monitoring, the analysis of social media and other user-generated web data, has brought advances in the way we leverage population data to understand health. Social media offers advantages over traditional data sources, including real-time data availability, ease of access, and reduced cost. Social media allows us to ask, and answer, questions we never thought possible. This book presents an overview of the progress on uses of social monitoring to study public health over the past decade. We explain available data sources, common methods, and survey research on social monitoring in a wide range of public health areas. Our examples come from topics such as disease surveillance, behavioral medicine, and mental health, among others. We explore the limitations and concerns of these methods. Our survey of this exciting new field of data-driven research lays out future research directions.", "title": "" }, { "docid": "553719cb1cb8829ceaf8e0f1a40953ff", "text": "“The distinctive faculties of Man are visibly expressed in his elevated cranial domeda feature which, though much debased in certain savage races, essentially characterises the human species. 
But, considering that the Neanderthal skull is eminently simial, both in its general and particular characters, I feel myself constrained to believe that the thoughts and desires which once dwelt within it never soared beyond those of a brute. The Andamaner, it is indisputable, possesses but the dimmest conceptions of the existence of the Creator of the Universe: his ideas on this subject, and on his own moral obligations, place him very little above animals of marked sagacity; nevertheless, viewed in connection with the strictly human conformation of his cranium, they are such as to specifically identify him with Homo sapiens. Psychical endowments of a lower grade than those characterising the Andamaner cannot be conceived to exist: they stand next to brute benightedness. (.) Applying the above argument to the Neanderthal skull, and considering . that it more closely conforms to the brain-case of the Chimpanzee, . there seems no reason to believe otherwise than that similar darkness characterised the being to which the fossil belonged” (King, 1864; pp. 96).", "title": "" }, { "docid": "397b3b96c16b2ce310ab61f9d2d7bdbf", "text": "Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.", "title": "" }, { "docid": "5ff019e3c12f7b1c2b3518e0883e3b6f", "text": "A novel PFC (Power Factor Corrected) Converter using Zeta DC-DC converter feeding a BLDC (Brush Less DC) motor drive using a single voltage sensor is proposed for fan applications. A single phase supply followed by an uncontrolled bridge rectifier and a Zeta DC-DC converter is used to control the voltage of a DC link capacitor which is lying between the Zeta converter and a VSI (Voltage Source Inverter). Voltage of a DC link capacitor of Zeta converter is controlled to achieve the speed control of BLDC motor. The Zeta converter is working as a front end converter operating in DICM (Discontinuous Inductor Current Mode) and thus using a voltage follower approach. The DC link capacitor of the Zeta converter is followed by a VSI which is feeding a BLDC motor. A sensorless control of BLDC motor is used to eliminate the requirement of Hall Effect position sensors. A MATLAB/Simulink environment is used to simulate the developed model to achieve a wide range of speed control with high PF (power Factor) and improved PQ (Power Quality) at the supply.", "title": "" }, { "docid": "27f3060ef96f1656148acd36d50f02ce", "text": "Video sensors become particularly important in traffic applications mainly due to their fast response, easy installation, operation and maintenance, and their ability to monitor wide areas. 
Research in several fields of traffic applications has resulted in a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications relate to traffic monitoring and automatic vehicle guidance. In general, systems developed for these areas must integrate, amongst their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. In this paper we present an overview of image processing and analysis tools used in these applications and we relate these tools with complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of static and mobile camera. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. Thus, the purpose of the paper is threefold. First, to classify image-processing methods used in traffic applications. Second, to provide the advantages and disadvantages of these algorithms. Third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research. q 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "06675c4b42683181cecce7558964c6b6", "text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can helps in estimating an optimal ransom value. We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.", "title": "" }, { "docid": "386edbf8dee79dd53a0a6c3475286f13", "text": "The underrepresentation of women at the top of math-intensive fields is controversial, with competing claims of biological and sociocultural causation. The authors develop a framework to delineate possible causal pathways and evaluate evidence for each. Biological evidence is contradictory and inconclusive. Although cross-cultural and cross-cohort differences suggest a powerful effect of sociocultural context, evidence for specific factors is inconsistent and contradictory. 
Factors unique to underrepresentation in math-intensive fields include the following: (a) Math-proficient women disproportionately prefer careers in non-math-intensive fields and are more likely to leave math-intensive careers as they advance; (b) more men than women score in the extreme math-proficient range on gatekeeper tests, such as the SAT Mathematics and the Graduate Record Examinations Quantitative Reasoning sections; (c) women with high math competence are disproportionately more likely to have high verbal competence, allowing greater choice of professions; and (d) in some math-intensive fields, women with children are penalized in promotion rates. The evidence indicates that women's preferences, potentially representing both free and constrained choices, constitute the most powerful explanatory factor; a secondary factor is performance on gatekeeper tests, most likely resulting from sociocultural rather than biological causes.", "title": "" }, { "docid": "fe38de8c129845b86ee0ec4acf865c14", "text": "McDonald's develop product lines. But software product lines are a relatively new concept. They are rapidly emerging as a practical and important software development paradigm. A product line succeeds because companies can exploit their software products' commonalities to achieve economies of production. The Software Engineering Institute's (SEI) work has confirmed the benefits of pursuing this approach; it also found that doing so is both a technical and business decision. To succeed with software product lines, an organization must alter its technical practices, management practices, organizational structure and personnel, and business approach.", "title": "" }, { "docid": "7e127a6f25e932a67f333679b0d99567", "text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.", "title": "" }, { "docid": "e1c877aa583aa10e2565ef2748585cb0", "text": "OBJECTIVE\nTo encourage treatment of depression and prevention of suicide in physicians by calling for a shift in professional attitudes and institutional policies to support physicians seeking help.\n\n\nPARTICIPANTS\nAn American Foundation for Suicide Prevention planning group invited 15 experts on the subject to evaluate the state of knowledge about physician depression and suicide and barriers to treatment. The group assembled for a workshop held October 6-7, 2002, in Philadelphia, Pa.\n\n\nEVIDENCE\nThe planning group worked with each participant on a preworkshop literature review in an assigned area.
Abstracts of presentations and key publications were distributed to participants before the workshop. After workshop presentations, participants were assigned to 1 of 2 breakout groups: (1) physicians in their role as patients and (2) medical institutions and professional organizations. The groups identified areas that required further research, barriers to treatment, and recommendations for reform.\n\n\nCONSENSUS PROCESS\nThis consensus statement emerged from a plenary session during which each work group presented its recommendations. The consensus statement was circulated to and approved by all participants.\n\n\nCONCLUSIONS\nThe culture of medicine accords low priority to physician mental health despite evidence of untreated mood disorders and an increased burden of suicide. Barriers to physicians' seeking help are often punitive, including discrimination in medical licensing, hospital privileges, and professional advancement. This consensus statement recommends transforming professional attitudes and changing institutional policies to encourage physicians to seek help. As barriers are removed and physicians confront depression and suicidality in their peers, they are more likely to recognize and treat these conditions in patients, including colleagues and medical students.", "title": "" }, { "docid": "59c4b8a66a6cf6add26000cb2475ffe6", "text": "Intelligent transport systems are the rising technology in the near future to build cooperative vehicular networks in which a variety of different ITS applications are expected to communicate with a variety of different units. Therefore, the demand for highly customized communication channel for each or sets of similar ITS applications is increased. This article explores the capabilities of available wireless communication technologies in order to produce a win-win situation while selecting suitable carrier( s) for a single application or a profile of similar applications. Communication requirements for future ITS applications are described to select the best available communication interface for the target application(s).", "title": "" }, { "docid": "5aa8c418b63a3ecb71dc60d4863f35cc", "text": "Based on the sense definition of words available in the Bengali WordNet, an attempt is made to classify the Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of particular ambiguous lexical item is collected from Bengali WordNet. In an experimental basis we have used Naive Bayes probabilistic model as a useful classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render sentences in different meanings. In our experiment we have achieved around 84% accurate result on the sense classification over the total input sentences. We have analyzed those residual sentences that did not comply with our experiment and did affect the results to note that in many cases, wrong syntactic structures and less semantic information are the main hurdles in semantic classification of sentences. 
The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.", "title": "" }, { "docid": "0e153353fb8af1511de07c839f6eaca5", "text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.", "title": "" }, { "docid": "9679713ae8ab7e939afba18223086128", "text": "If, as many psychologists seem to believe, immediate memory represents a distinct system or set of processes from long-term memory (LTM), then what might it be for? This fundamental, functional question was surprisingly unanswerable in the 1970s, given the volume of research that had explored short-term memory (STM), and given the ostensible role that STM was thought to play in cognitive control (Atkinson & Shiffrin, 1971). Indeed, failed attempts to link STM to complex cognitive functions, such as reading comprehension, loomed large in Crowder's (1982) obituary for the concept. Baddeley and Hitch (1974) tried to validate immediate memory's functions by testing subjects in reasoning, comprehension, and list-learning tasks at the same time their memory was occupied by irrelevant material. Generally, small memory loads (i.e., three or fewer items) were retained with virtually no effect on the primary tasks, whereas memory loads of six items consistently impaired reasoning, comprehension, and learning. Baddeley and Hitch therefore argued that \"working memory\" (WM)", "title": "" }, { "docid": "c625e9d1bb6cdb54864ab10ae2b0e060", "text": "This paper proposes a new method for fabric defect classification by incorporating the design of a wavelet frames based feature extractor with the design of a Euclidean distance based classifier. Channel variances at the outputs of the wavelet frame decomposition are used to characterize each nonoverlapping window of the fabric image. A feature extractor using linear transformation matrix is further employed to extract the classification-oriented features. With a Euclidean distance based classifier, each nonoverlapping window of the fabric image is then assigned to its corresponding category.
Minimization of the classification error is achieved by incorporating the design of the feature extractor with the design of the classifier based on minimum classification error (MCE) training method. The proposed method has been evaluated on the classification of 329 defect samples containing nine classes of fabric defects, and 328 nondefect samples, where 93.1% classification accuracy has been achieved.", "title": "" }, { "docid": "c68b94c11170fae3caf7dc211ab83f91", "text": "Data mining is the extraction of useful, prognostic, interesting, and unknown information from massive transaction databases and other repositories. Data mining tools predict potential trends and actions, allowing various fields to make proactive, knowledge-driven decisions. Recently, with the rapid growth of information technology, the amount of data has exponentially increased in various fields. Big data mostly comes from people’s day-to-day activities and Internet-based companies. Mining frequent itemsets and association rule mining (ARM) are well-analysed techniques for revealing attractive correlations among variables in huge datasets. The Apriori algorithm is one of the most broadly used algorithms in ARM, and it collects the itemsets that frequently occur in order to discover association rules in massive datasets. The original Apriori algorithm is for sequential (single node or computer) environments. This Apriori algorithm has many drawbacks for processing huge datasets, such as that a single machine’s memory, CPU and storage capacity are insufficient. Parallel and distributed computing is the better solution to overcome the above problems. Many researchers have parallelized the Apriori algorithm. This study performs a survey on several well-enhanced and revised techniques for the parallel Apriori algorithm in the HadoopMapReduce environment. The Hadoop-MapReduce framework is a programming model that efficiently and effectively processes enormous databases in parallel. It can handle large clusters of commodity hardware in a reliable and fault-tolerant manner. This survey will provide an overall view of the parallel Apriori algorithm implementation in the Hadoop-MapReduce environment and briefly discuss the challenges and open issues of big data in the cloud and Hadoop-MapReduce. Moreover, this survey will not only give overall existing improved Apriori algorithm methods on Hadoop-MapReduce but also provide future research direction for upcoming researchers.", "title": "" }, { "docid": "c3500e2b50f70c81d7f2c4a425f12742", "text": "Material recognition is an important subtask in computer vision. In this paper, we aim for the identification of material categories from a single image captured under unknown illumination and view conditions. Therefore, we use several features which cover various aspects of material appearance and perform supervised classification using Support Vector Machines. We demonstrate the feasibility of our approach by testing on the challenging Flickr Material Database. Based on this dataset, we also carry out a comparison to a previously published work [Liu et al., ”Exploring Features in a Bayesian Framework for Material Recognition”, CVPR 2010] which uses Bayesian inference and reaches a recognition rate of 44.6% on this dataset and represents the current state-of the-art. 
With our SVM approach we obtain 53.1% and hence, significantly outperform this approach.", "title": "" }, { "docid": "620574da26151188171a91eb64de344d", "text": "Major security issues for banking and financial institutions are Phishing. Phishing is a webpage attack, it pretends a customer web services using tactics and mimics from unauthorized persons or organization. It is an illegitimate act to steals user personal information such as bank details, social security numbers and credit card details, by showcasing itself as a truthful object, in the public network. When users provide confidential information, they are not aware of the fact that the websites they are using are phishing websites. This paper presents a technique for detecting phishing website attacks and also spotting phishing websites by combines source code and URL in the webpage. Keywords—Phishing, Website attacks, Source Code, URL.", "title": "" } ]
scidocsrr
8beb7712d1d49745bf134ca4276f2787
Overview: Generalizations of Multi-Agent Path Finding to Real-World Scenarios
[ { "docid": "8bc1d9cd9a912a7c3a8e874ce09cae52", "text": "Multi-Agent Path Finding (MAPF) is well studied in both AI and robotics. Given a discretized environment and agents with assigned start and goal locations, MAPF solvers from AI find collision-free paths for hundreds of agents with user-provided sub-optimality guarantees. However, they ignore that actual robots are subject to kinematic constraints (such as finite maximum velocity limits) and suffer from imperfect plan-execution capabilities. We therefore introduce MAPF-POST, a novel approach that makes use of a simple temporal network to postprocess the output of a MAPF solver in polynomial time to create a plan-execution schedule that can be executed on robots. This schedule works on non-holonomic robots, takes their maximum translational and rotational velocities into account, provides a guaranteed safety distance between them, and exploits slack to absorb imperfect plan executions and avoid time-intensive replanning in many cases. We evaluate MAPF-POST in simulation and on differential-drive robots, showcasing the practicality of our approach.", "title": "" } ]
[ { "docid": "b59a2c49364f3e95a2c030d800d5f9ce", "text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.", "title": "" }, { "docid": "fbcaba091a407d2bd831d3520577cf27", "text": "Studying a software project by mining data from a single repository has been a very active research field in software engineering during the last years. However, few efforts have been devoted to perform studies by integrating data from various repositories, with different kinds of information, which would, for instance, track the different activities of developers. One of the main problems of these multi-repository studies is the different identities that developers use when they interact with different tools in different contexts. This makes them appear as different entities when data is mined from different repositories (and in some cases, even from a single one). In this paper we propose an approach, based on the application of heuristics, to identify the many identities of developers in such cases, and a data structure for allowing both the anonymized distribution of information, and the tracking of identities for verification purposes. The methodology will be presented in general, and applied to the GNOME project as a case example. Privacy issues and partial merging with new data sources will also be considered and discussed.", "title": "" }, { "docid": "cbe1b2575db111cd3b22b7288c0e345c", "text": "A reversible gate has the equal number of inputs and outputs and one-to-one mappings between input vectors and output vectors; so that, the input vector states can be always uniquely reconstructed from the output vector states. This correspondence introduces a reversible full-adder circuit that requires only three reversible gates and produces least number of \"garbage outputs \", that is two. After that, a theorem has been proposed that proves the optimality of the propounded circuit in terms of number of garbage outputs. An efficient algorithm is also introduced in this paper that leads to construct a reversible circuit.", "title": "" }, { "docid": "8d3a5a9327ab93fef50712e931d0e06b", "text": "Cite this article Romager JA, Hughes K, Trimble JE. Personality traits as predictors of leadership style preferences: Investigating the relationship between social dominance orientation and attitudes towards authentic leaders. Soc Behav Res Pract Open J. 2017; 3(1): 1-9. doi: 10.17140/SBRPOJ-3-110 Personality Traits as Predictors of Leadership Style Preferences: Investigating the Relationship Between Social Dominance Orientation and Attitudes Towards Authentic Leaders Original Research", "title": "" }, { "docid": "6655b03c0fcc83a71a3119d7e526eedc", "text": "Dynamic magnetic resonance imaging (MRI) scans can be accelerated by utilizing compressed sensing (CS) reconstruction methods that allow for diagnostic quality images to be generated from undersampled data. 
Unfortunately, CS reconstruction is time-consuming, requiring hours between a dynamic MRI scan and image availability for diagnosis. In this work, we train a convolutional neural network (CNN) to perform fast reconstruction of severely undersampled dynamic cardiac MRI data, and we explore the utility of CNNs for further accelerating dynamic MRI scan times. Compared to state-of-the-art CS reconstruction techniques, our CNN achieves reconstruction speeds that are 150x faster without significant loss of image quality. Additionally, preliminary results suggest that CNNs may allow scan times that are 2x faster than those allowed by CS.", "title": "" }, { "docid": "a433f47a3c7c06a409a8fc0d98e955be", "text": "The local-dimming backlight has recently been presented for use in LCD TVs. However, the image resolution is low, particularly at weak edges. In this work, a local-dimming backlight is developed to improve the image contrast and reduce power dissipation. The algorithm enhances low-level edge information to improve the perceived image resolution. Based on the algorithm, a 42-in backlight module with white light-emitting diode (LED) devices was driven by a local dimming control core. The block-wise register approach substantially reduced the number of required line-buffers and shortened the latency time. The measurements made in the laboratory indicate that the backlight system reduces power dissipation by an average of 48% and exhibits no visible distortion compared relative to the fixed backlighting system. The system was successfully demonstrated in a 42-in LCD TV, and the contrast ratio was greatly improved by a factor of 100.", "title": "" }, { "docid": "e6bbe7de06295817435acafbbb7470cc", "text": "Cortical circuits work through the generation of coordinated, large-scale activity patterns. In sensory systems, the onset of a discrete stimulus usually evokes a temporally organized packet of population activity lasting ∼50–200 ms. The structure of these packets is partially stereotypical, and variation in the exact timing and number of spikes within a packet conveys information about the identity of the stimulus. Similar packets also occur during ongoing stimuli and spontaneously. We suggest that such packets constitute the basic building blocks of cortical coding.", "title": "" }, { "docid": "e6291818253de22ee675f67eed8213d9", "text": "This literature review focuses on aesthetics of interaction design with further goal of outlining a study towards prediction model of aesthetic value. The review covers three main issues, tightly related to aesthetics of interaction design: evaluation of aesthetics, relations between aesthetics and interaction qualities and implementation of aesthetics in interaction design. Analysis of previous models is carried out according to definition of interaction aesthetics: holistic approach to aesthetic perception considering its' action- and appearance-related components. As a result the empirical study is proposed for investigating the relations between attributes of interaction and users' aesthetic experience.", "title": "" }, { "docid": "7579b5cb9f18e3dc296bcddc7831abc5", "text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). 
GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.", "title": "" }, { "docid": "860d39ff0ddd80caaf712e84a82f4d86", "text": "Steganography and steganalysis received a great deal of attention from media and law enforcement. Many powerful and robust methods of steganography and steganalysis have been developed. In this paper we are considering the methods of steganalysis that are to be used for this processes. Paper giving some idea about the steganalysis and its method. Keywords— Include at least 5 keywords or phrases", "title": "" }, { "docid": "1465b6c38296dfc46f8725dca5179cf1", "text": "A brief introduction is given to the actual mechanics of simulated annealing, and a simple example from an IC layout is used to illustrate how these ideas can be applied. The complexities and tradeoffs involved in attacking a realistically complex design problem are illustrated by dissecting two very different annealing algorithms for VLSI chip floorplanning. Several current research problems aimed at determining more precisely how and why annealing algorithms work are examined. Some philosophical issues raised by the introduction of annealing are discussed.<<ETX>>", "title": "" }, { "docid": "f8c1654abd0ffced4b5dbf3ef0724d36", "text": "The proposed social media crisis mapping platform for natural disasters uses locations from gazetteer, street map, and volunteered geographic information (VGI) sources for areas at risk of disaster and matches them to geoparsed real-time tweet data streams. The authors use statistical analysis to generate real-time crisis maps. Geoparsing results are benchmarked against existing published work and evaluated across multilingual datasets. Two case studies compare five-day tweet crisis maps to official post-event impact assessment from the US National Geospatial Agency (NGA), compiled from verified satellite and aerial imagery sources.", "title": "" }, { "docid": "1dee6d60a94e434dd6d3b6754e9cd3f3", "text": "The barrier function of the intestine is essential for maintaining the normal homeostasis of the gut and mucosal immune system. Abnormalities in intestinal barrier function expressed by increased intestinal permeability have long been observed in various gastrointestinal disorders such as Crohn's disease (CD), ulcerative colitis (UC), celiac disease, and irritable bowel syndrome (IBS). Imbalance of metabolizing junction proteins and mucosal inflammation contributes to intestinal hyperpermeability. Emerging studies exploring in vitro and in vivo model system demonstrate that Rho-associated coiled-coil containing protein kinase- (ROCK-) and myosin light chain kinase- (MLCK-) mediated pathways are involved in the regulation of intestinal permeability. With this perspective, we aim to summarize the current state of knowledge regarding the role of inflammation and ROCK-/MLCK-mediated pathways leading to intestinal hyperpermeability in gastrointestinal disorders. 
In the near future, it may be possible to specifically target these specific pathways to develop novel therapies for gastrointestinal disorders associated with increased gut permeability.", "title": "" }, { "docid": "e91dd3f9e832de48a27048a0efa1b67a", "text": "Smart Home technology is the future of residential related technology which is designed to deliver and distribute number of services inside and outside the house via networked devices in which all the different applications & the intelligence behind them are integrated and interconnected. These smart devices have the potential to share information with each other given the permanent availability to access the broadband internet connection. Hence, Smart Home Technology has become part of IoT (Internet of Things). In this work, a home model is analyzed to demonstrate an energy efficient IoT based smart home. Several Multiphysics simulations were carried out focusing on the kitchen of the home model. A motion sensor with a surveillance camera was used as part of the home security system. Coupled with the home light and HVAC control systems, the smart system can remotely control the lighting and heating or cooling when an occupant enters or leaves the kitchen.", "title": "" }, { "docid": "76e01466b9d7d4cbea714ce29f13759a", "text": "In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.", "title": "" }, { "docid": "429b6eedecef4d769b3341aca7de85ef", "text": "Correspondence Lars Ruthotto, Department of Mathematics and Computer Science, Emory University, 400 Dowman Dr, Atlanta, GA 30322, USA. Email: [email protected] Summary Image registration is a central problem in a variety of areas involving imaging techniques and is known to be challenging and ill-posed. Regularization functionals based on hyperelasticity provide a powerful mechanism for limiting the ill-posedness. A key feature of hyperelastic image registration approaches is their ability to model large deformations while guaranteeing their invertibility, which is crucial in many applications. To ensure that numerical solutions satisfy this requirement, we discretize the variational problem using piecewise linear finite elements, and then solve the discrete optimization problem using the Gauss–Newton method. In this work, we focus on computational challenges arising in approximately solving the Hessian system. We show that the Hessian is a discretization of a strongly coupled system of partial differential equations whose coefficients can be severely inhomogeneous. Motivated by a local Fourier analysis, we stabilize the system by thresholding the coefficients. We propose a Galerkin-multigrid scheme with a collective pointwise smoother. 
We demonstrate the accuracy and effectiveness of the proposed scheme, first on a two-dimensional problem of a moderate size and then on a large-scale real-world application with almost 9 million degrees of freedom.", "title": "" }, { "docid": "c734c98b1ca8261694386c537870c2f3", "text": "Uncontrolled wind turbine configuration, such as stall-regulation captures, energy relative to the amount of wind speed. This configuration requires constant turbine speed because the generator that is being directly coupled is also connected to a fixed-frequency utility grid. In extremely strong wind conditions, only a fraction of available energy is captured. Plants designed with such a configuration are economically unfeasible to run in these circumstances. Thus, wind turbines operating at variable speed are better alternatives. This paper focuses on a controller design methodology applied to a variable-speed, horizontal axis wind turbine. A simple but rigid wind turbine model was used and linearised to some operating points to meet the desired objectives. By using blade pitch control, the deviation of the actual rotor speed from a reference value is minimised. The performances of PI and PID controllers were compared relative to a step wind disturbance. Results show comparative responses between these two controllers. The paper also concludes that with the present methodology, despite the erratic wind data, the wind turbine still manages to operate most of the time at 88% in the stable region.", "title": "" }, { "docid": "53b38576a378b7680a69bba1ebe971ba", "text": "The detection of symmetry axes through the optimization of a given symmetry measure, computed as a function of the mean-square error between the original and reflected images, is investigated in this paper. A genetic algorithm and an optimization scheme derived from the self-organizing maps theory are presented. The notion of symmetry map is then introduced. This transform allows us to map an object into a symmetry space where its symmetry properties can be analyzed. The locations of the different axes that globally and locally maximize the symmetry value can be obtained. The input data are assumed to be vector-valued, which allow to focus on either shape. color or texture information. Finally, the application to skin cancer diagnosis is illustrated and discussed.", "title": "" }, { "docid": "bb4a83a48d1943cc8205510dc2a750a8", "text": "Whenever a document containing sensitive information needs to be made public, privacy-preserving measures should be implemented. Document sanitization aims at detecting sensitive pieces of information in text, which are removed or hidden prior publication. Even though methods detecting sensitive structured information like e-mails, dates or social security numbers, or domain specific data like disease names have been developed, the sanitization of raw textual data has been scarcely addressed. In this paper, we present a general-purpose method to automatically detect sensitive information from textual documents in a domain-independent way. Relying on the Information Theory and a corpus as large as the Web, it assess the degree of sensitiveness of terms according to the amount of information they provide. Preliminary results show that our method significantly improves the detection recall in comparison with approaches based on trained classifiers.", "title": "" } ]
scidocsrr
6d89ecca492e99422e5f8208633f8685
Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments
[ { "docid": "7399a8096f56c46a20715b9f223d05bf", "text": "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches", "title": "" } ]
[ { "docid": "e09d45316d48894bcfb3c5657cd19118", "text": "In recent years, multiple-line acquisition (MLA) has been introduced to increase frame rate in cardiac ultrasound medical imaging. However, this method induces blocklike artifacts in the image. One approach suggested, synthetic transmit beamforming (STB), involves overlapping transmit beams which are then interpolated to remove the MLA blocking artifacts. Independently, the application of minimum variance (MV) beamforming has been suggested in the context of MLA. We demonstrate here that each approach is only a partial solution and that combining them provides a better result than applying either approach separately. This is demonstrated by using both simulated and real phantom data, as well as cardiac data. We also show that the STB-compensated MV beamfomer outperforms single-line acquisition (SLA) delay- and-sum in terms of lateral resolution.", "title": "" }, { "docid": "bb19e122737f08997585999575d2a394", "text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadow in high resolution satellite image. PCA provides orthogonally channels, thus allow the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. Experiment results show the effectiveness of the proposed method.", "title": "" }, { "docid": "365b95202095942c4b2b43a5e6f6e04e", "text": "Abstract. In this paper we use the contraction mapping theorem to obtain asymptotic stability results of the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not ask the boundedness of delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improve and extend the results in the literature. Two examples are also given to illustrate this work.", "title": "" }, { "docid": "884ea5137f9eefa78030608097938772", "text": "In this paper, we propose a new concept - the \"Reciprocal Velocity Obstacle\"- for real-time multi-agent navigation. We consider the case in which each agent navigates independently without explicit communication with other agents. Our formulation is an extension of the Velocity Obstacle concept [3], which was introduced for navigation among (passively) moving obstacles. Our approach takes into account the reactive behavior of the other agents by implicitly assuming that the other agents make a similar collision-avoidance reasoning. We show that this method guarantees safe and oscillation- free motions for each of the agents. 
We apply our concept to navigation of hundreds of agents in densely populated environments containing both static and moving obstacles, and we show that real-time and scalable performance is achieved in such challenging scenarios.", "title": "" }, { "docid": "2c667b86fffdcb69e35a21795fc0e3bd", "text": "We compiled details of over 8000 assessments of protected area management effectiveness across the world and developed a method for analyzing results across diverse assessment methodologies and indicators. Data was compiled and analyzed for over 4000 of these sites. Management of these protected areas varied from weak to effective, with about 40% showing major deficiencies. About 14% of the surveyed areas showed significant deficiencies across many management effectiveness indicators and hence lacked basic requirements to operate effectively. Strongest management factors recorded on average related to establishment of protected areas (legal establishment, design, legislation and boundary marking) and to effectiveness of governance; while the weakest aspects of management included community benefit programs, resourcing (funding reliability and adequacy, staff numbers and facility and equipment maintenance) and management effectiveness evaluation. Estimations of management outcomes, including both environmental values conservation and impact on communities, were positive. We conclude that in spite of inadequate funding and management process, there are indications that protected areas are contributing to biodiversity conservation and community well-being.", "title": "" }, { "docid": "233c9d97c70a95f71897b6f289c7d8a7", "text": "The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and linds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized O(log3 n log k)-approximation algorithm for the group Steiner tree problem on an n-node graph, where k is the number of groups. The best previous performance guarantee was (1 + ?)a (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Bavi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slavik on a tour version, called the errand scheduling problem. We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to O(log’ nlog k) in the case of graphs that exclude small minors by using a better alternative to Bartal’s result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman) this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case. 
-", "title": "" }, { "docid": "c48fa25b1e49d641efa08d3ce9960270", "text": "This paper presents a novel mobility metric for mobile ad hoc networks (MANET) that is based on the ratio between the received power levels of successive transmissions measured at any node from all its neighboring nodes. This mobility metric is subsequently used as a basis for cluster formation which can be used for improving the scalability of services such as routing in such networks. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the Lowest-ID clustering algorithm ( “least clusterhead change” [3]) which is a well known clustering algorithms for MANETs. We show reduction of as much as 33% in the number of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, the network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that since using MOBIC results in a more stable configuration, it will directly lead to improvement of performance.", "title": "" }, { "docid": "1b52822b76e7ace1f7e12a6f2c92b060", "text": "We treated the mandibular retrusion of a 20-year-old man by distraction osteogenesis. Our aim was to avoid any visible discontinuities in the soft tissue profile that may result from conventional \"one-step\" genioplasty. The result was excellent. In addition to a good aesthetic outcome, there was increased bone formation not only between the two surfaces of the osteotomy but also adjacent to the distraction zone, resulting in improved coverage of the roots of the lower incisors. Only a few patients have been treated so far, but the method seems to hold promise for the treatment of extreme retrognathism, as these patients often have insufficient buccal bone coverage.", "title": "" }, { "docid": "e11b4a08fc864112d4f68db1ea9703e9", "text": "Forecasting is an integral part of any organization for their decision-making process so that they can predict their targets and modify their strategy in order to improve their sales or productivity in the coming future. This paper evaluates and compares various machine learning models, namely, ARIMA, Auto Regressive Neural Network(ARNN), XGBoost, SVM, Hy-brid Models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. Training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, linear model such as ARIMA has been applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as Neural Network, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave low RMSE. Then, to further optimize the performance, composite models were designed using hybrid technique and decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM were used and all of them performed better than their respective individual models. Then, the composite model was designed using STL Decomposition where the decomposed components namely seasonal, trend and remainder components were forecasted by Snaive, ARIMA and XGBoost. STL gave better results than individual and hybrid models. 
This paper evaluates and analyzes why composite models give better results than an individual model and states that the decomposition technique is better than the hybrid technique for this application.", "title": "" }, { "docid": "92b4d9c69969c66a1d523c38fd0495a4", "text": "A level designer typically creates the levels of a game to cater for a certain set of objectives, or mission. But in procedural content generation, it is common to treat the creation of missions and the generation of levels as two separate concerns. This often leads to generic levels that allow for various missions. However, this also creates a generic impression for the player, because the potential for synergy between the objectives and the level is not utilised. Following up on the mission-space generation concept, as described by Dormans [5], we explore the possibilities of procedurally generating a level from a designer-made mission. We use a generative grammar to transform a mission into a level in a mixed-initiative design setting. We provide two case studies, dungeon levels for a rogue-like game, and platformer levels for a metroidvania game. The generators differ in the way they use the mission to generate the space, but are created with the same tool for content generation based on model transformations. We discuss the differences between the two generation processes and compare them with a parameterized approach.", "title": "" }, { "docid": "ac0e5d2b50462a15928556bee7f8548e", "text": "The concept of “truth,” as a public good, is the production of a collective understanding, which emerges from a complex network of social interactions. The recent impact of social networks on shaping the perception of truth in the political arena shows how such perception is corroborated and established by the online users, collectively. However, investigative journalism for discovering truth is a costly option, given the vast spectrum of online information. In some cases, both journalists and online users choose not to investigate the authenticity of the news they receive, because they assume other actors of the network have carried the cost of validation. Therefore, the new phenomenon of “fake news” has emerged within the context of social networks. The online social networks, similarly to System of Systems, cause emergent properties, which makes authentication processes difficult, given availability of multiple sources. In this study, we show how this conflict can be modeled as a volunteer's dilemma. We also show how the public contribution through news subscription (shared rewards) can impact the dominance of truth over fake news in the network.", "title": "" }, { "docid": "0105070bd23400083850627b1603af0b", "text": "This research covers an effort by the author on the use of an automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor in a minimal-effort framework for exploration purposes in the area of robot navigation. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filter to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open source GMapping package was utilized as a basis for map generation and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation method used is the interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS).
Test results obtained with the multipurpose robot in artificial and real environments demonstrate the advantages of the proposed strategy. From experiments, it is found that the Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since a Kinect sensor is much cheaper than a laser range finder. Further experiments were likewise carried out to test the performance of the mobile robot in frontier exploration of an unknown environment while performing SLAM alongside the proposed technique.", "title": "" }, { "docid": "fb15647d528df8b8613376066d9f5e68", "text": "This article described in detail the feature extraction methods of crop disease based on computer image processing technology. Feature extraction methods based on color, texture and shape, together with their respective problems, were introduced from the perspective of lesion leaves. Application research on image feature extraction in the field of crop disease in recent years was reviewed. The results of these feature extraction methods were analyzed, and the future application of image feature extraction techniques to intelligent detection of crop diseases was discussed.", "title": "" }, { "docid": "0b06586502303b6796f1f512129b5cbe", "text": "This paper introduces an extension of collocational analysis that takes into account grammatical structure and is specifically geared to investigating the interaction of lexemes and the grammatical constructions associated with them. The method is framed in a construction-based approach to language, i.e. it assumes that grammar consists of signs (form-meaning pairs) and is thus not fundamentally different from the lexicon. The method is applied to linguistic expressions at various levels of abstraction (words, semi-fixed phrases, argument structures, tense, aspect and mood). The method has two main applications: first, to increase the adequacy of grammatical description by providing an objective way of identifying the meaning of a grammatical construction and determining the degree to which particular slots in it prefer or are restricted to a particular set of lexemes; second, to provide data for linguistic theory-building.", "title": "" }, { "docid": "a1a04d251e19a43455787cefa02bae53", "text": "This paper provides an overview of CMOS-based sensor technology with specific attention placed on devices made through micromachining of CMOS substrates and thin films. Microstructures may be formed using pre-CMOS, intra-CMOS or post-CMOS fabrication approaches. To illustrate and motivate monolithic integration, a handful of microsystem examples, including inertial sensors, gravimetric chemical sensors, microphones, and a bone implantable sensor, will be highlighted. Design constraints and challenges for CMOS-MEMS devices will be covered.", "title": "" }, { "docid": "bb774fed5d447fdc181cb712c74925c2", "text": "Test-driven development is a discipline that helps professional software developers ship clean, flexible code that works, on time.
In this article, the author discusses how test-driven development can help software developers achieve a higher degree of professionalism.", "title": "" }, { "docid": "5bb9ca3c14dd84f1533789c3fe4bbd10", "text": "The field of spondyloarthritis (SpA) has experienced major progress in the last decade, especially with regard to new treatments, earlier diagnosis, imaging technology and a better definition of outcome parameters for clinical trials. In the present work, the Assessment in SpondyloArthritis international Society (ASAS) provides a comprehensive handbook on the most relevant aspects for the assessments of spondyloarthritis, covering classification criteria, MRI and x rays for sacroiliac joints and the spine, a complete set of all measurements relevant for clinical trials and international recommendations for the management of SpA. The handbook focuses at this time on axial SpA, with ankylosing spondylitis (AS) being the prototype disease, for which recent progress has been faster than in peripheral SpA. The target audience includes rheumatologists, trial methodologists and any doctor and/or medical student interested in SpA. The focus of this handbook is on practicality, with many examples of MRI and x ray images, which will help to standardise not only patient care but also the design of clinical studies.", "title": "" }, { "docid": "91713d85bdccb2c06d7c50365bd7022c", "text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MTJ) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).", "title": "" }, { "docid": "4d405c1c2919be01209b820f61876d57", "text": "This paper presents a single-pole eight-throw switch, based on an eight-way power divider, using substrate integrated waveguide (SIW) technology. Eight sectorial-lines are formed by inserting radial slot-lines on the top plate of the SIW power divider. Each sectorial-line can be controlled independently with a high level of isolation. The switching is accomplished by altering the capacitance of the varactor on the line, which causes different input impedances to be seen at a central probe to each sectorial line. The proposed structure works as a switching circuit and an eight-way power divider depending on the bias condition. The changes in resonant frequency and input impedance are estimated by adapting a tapered transmission line model. The detailed design, fabrication, and measurement are discussed.", "title": "" }, { "docid": "608ab1c58a84cd97f6444c5eff4bf8fc", "text": "Light detection and ranging (lidar) is becoming an increasingly popular technology among scientists for the development of predictive models of forest biophysical variables. However, before this technology can be adopted with confidence for long-term monitoring applications in Canada, robust models must be developed that can be applied and validated over large and complex forested areas. This will require “scaling-up” from current models developed from high-density lidar data to low-density data collected at higher altitudes. This paper investigates the effect of lowering the average point spacing of discrete lidar returns on models of forest biophysical variables.
Validation of results revealed that high-density models are well correlated with mean dominant height, basal area, crown closure, and average aboveground biomass (R2 = 0.84, 0.89, 0.60, and 0.91, respectively). Low-density models could not accurately predict crown closure (R2 = 0.36). However, they did provide slightly improved estimates for mean dominant height, basal area, and average aboveground biomass (R2 = 0.90, 0.91, and 0.92, respectively). Maps were generated and validated for the entire study area from the low-density models. The ability of low-density models to accurately map key biophysical variables is a positive indicator for the utility of lidar data for monitoring large forested areas. Résumé : Le lidar est en voie de devenir une technique de plus en plus populaire parmi les chercheurs pour le développement de modèles de prédiction des variables biophysiques de la forêt. Cependant, avant que cette technologie puisse être adoptée avec confiance pour le suivi à long terme au Canada, des modèles robustes pouvant être appliqués et validés pour des superficies de forêt vastes et complexes doivent être développés. Cela va exiger de passer des modèles courants développés à partir d’une forte densité de données lidar à une plus faible densité de données collectées à plus haute altitude. Cet article se penche sur l’effet de la diminution de l’espacement ponctuel moyen des échos individuels du lidar sur les modèles de variables biophysiques de la forêt. La validation des résultats a montré que les modèles à forte densité sont bien corrélés avec la hauteur dominante moyenne, la surface terrière, la fermeture du couvert et la biomasse aérienne moyenne (R2 = 0,84, 0,89, 0,60 et 0,91 respectivement). Les modèles à faible densité ne pouvaient pas correctement (R2 = 0,36) prédire la fermeture du couvert. Cependant, ils ont fourni des estimations légèrement meilleures pour la hauteur dominante moyenne, la surface terrière et la biomasse aérienne moyenne (R2 = 0,90, 0,91 et 0,92 respectivement). Des cartes ont été générées et validées pour toute la zone d’étude à partir de modèles à faible densité. La capacité des modèles à faible densité à cartographier correctement les variables biophysiques importantes est une indication positive de l’utilité des données lidar pour le suivi de vastes zones de forêt. [Traduit par la Rédaction]", "title": "" } ]
scidocsrr